00:00:00.000 Started by upstream project "autotest-spdk-v24.01-LTS-vs-dpdk-v23.11" build number 1023 00:00:00.000 originally caused by: 00:00:00.001 Started by upstream project "nightly-trigger" build number 3690 00:00:00.001 originally caused by: 00:00:00.001 Started by timer 00:00:00.111 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvmf-tcp-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-vg.groovy 00:00:00.112 The recommended git tool is: git 00:00:00.112 using credential 00000000-0000-0000-0000-000000000002 00:00:00.114 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvmf-tcp-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10 00:00:00.148 Fetching changes from the remote Git repository 00:00:00.150 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10 00:00:00.183 Using shallow fetch with depth 1 00:00:00.183 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool 00:00:00.183 > git --version # timeout=10 00:00:00.207 > git --version # 'git version 2.39.2' 00:00:00.207 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:00:00.220 Setting http proxy: proxy-dmz.intel.com:911 00:00:00.220 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5 00:00:06.333 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10 00:00:06.343 > git rev-parse FETCH_HEAD^{commit} # timeout=10 00:00:06.354 Checking out Revision db4637e8b949f278f369ec13f70585206ccd9507 (FETCH_HEAD) 00:00:06.354 > git config core.sparsecheckout # timeout=10 00:00:06.363 > git read-tree -mu HEAD # timeout=10 00:00:06.378 > git checkout -f db4637e8b949f278f369ec13f70585206ccd9507 # timeout=5 00:00:06.403 Commit message: "jenkins/jjb-config: Add missing SPDK_TEST_NVME_INTERRUPT flag" 00:00:06.403 > git rev-list --no-walk db4637e8b949f278f369ec13f70585206ccd9507 # timeout=10 00:00:06.506 [Pipeline] Start of Pipeline 00:00:06.521 [Pipeline] library 00:00:06.523 Loading library shm_lib@master 00:00:06.523 Library shm_lib@master is cached. Copying from home. 00:00:06.540 [Pipeline] node 00:00:06.550 Running on VM-host-SM0 in /var/jenkins/workspace/nvmf-tcp-vg-autotest 00:00:06.551 [Pipeline] { 00:00:06.560 [Pipeline] catchError 00:00:06.562 [Pipeline] { 00:00:06.571 [Pipeline] wrap 00:00:06.578 [Pipeline] { 00:00:06.587 [Pipeline] stage 00:00:06.589 [Pipeline] { (Prologue) 00:00:06.610 [Pipeline] echo 00:00:06.611 Node: VM-host-SM0 00:00:06.617 [Pipeline] cleanWs 00:00:06.628 [WS-CLEANUP] Deleting project workspace... 00:00:06.628 [WS-CLEANUP] Deferred wipeout is used... 
00:00:06.634 [WS-CLEANUP] done 00:00:06.828 [Pipeline] setCustomBuildProperty 00:00:06.900 [Pipeline] httpRequest 00:00:07.534 [Pipeline] echo 00:00:07.537 Sorcerer 10.211.164.20 is alive 00:00:07.548 [Pipeline] retry 00:00:07.551 [Pipeline] { 00:00:07.568 [Pipeline] httpRequest 00:00:07.572 HttpMethod: GET 00:00:07.573 URL: http://10.211.164.20/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:07.573 Sending request to url: http://10.211.164.20/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:07.587 Response Code: HTTP/1.1 200 OK 00:00:07.587 Success: Status code 200 is in the accepted range: 200,404 00:00:07.588 Saving response body to /var/jenkins/workspace/nvmf-tcp-vg-autotest/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:11.722 [Pipeline] } 00:00:11.737 [Pipeline] // retry 00:00:11.744 [Pipeline] sh 00:00:12.025 + tar --no-same-owner -xf jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:12.037 [Pipeline] httpRequest 00:00:12.414 [Pipeline] echo 00:00:12.416 Sorcerer 10.211.164.20 is alive 00:00:12.424 [Pipeline] retry 00:00:12.426 [Pipeline] { 00:00:12.439 [Pipeline] httpRequest 00:00:12.442 HttpMethod: GET 00:00:12.442 URL: http://10.211.164.20/packages/spdk_c13c99a5eba3bff912124706e0ae1d70defef44d.tar.gz 00:00:12.443 Sending request to url: http://10.211.164.20/packages/spdk_c13c99a5eba3bff912124706e0ae1d70defef44d.tar.gz 00:00:12.444 Response Code: HTTP/1.1 200 OK 00:00:12.445 Success: Status code 200 is in the accepted range: 200,404 00:00:12.445 Saving response body to /var/jenkins/workspace/nvmf-tcp-vg-autotest/spdk_c13c99a5eba3bff912124706e0ae1d70defef44d.tar.gz 00:00:30.149 [Pipeline] } 00:00:30.166 [Pipeline] // retry 00:00:30.173 [Pipeline] sh 00:00:30.458 + tar --no-same-owner -xf spdk_c13c99a5eba3bff912124706e0ae1d70defef44d.tar.gz 00:00:33.002 [Pipeline] sh 00:00:33.282 + git -C spdk log --oneline -n5 00:00:33.282 c13c99a5e test: Various fixes for Fedora40 00:00:33.282 726a04d70 test/nvmf: adjust timeout for bigger nvmes 00:00:33.282 61c96acfb dpdk: Point dpdk submodule at a latest fix from spdk-23.11 00:00:33.282 7db6dcdb8 nvme/fio_plugin: update the way ruhs descriptors are fetched 00:00:33.282 ff6f5c41e nvme/fio_plugin: trim add support for multiple ranges 00:00:33.303 [Pipeline] withCredentials 00:00:33.313 > git --version # timeout=10 00:00:33.325 > git --version # 'git version 2.39.2' 00:00:33.341 Masking supported pattern matches of $GIT_PASSWORD or $GIT_ASKPASS 00:00:33.343 [Pipeline] { 00:00:33.353 [Pipeline] retry 00:00:33.355 [Pipeline] { 00:00:33.371 [Pipeline] sh 00:00:33.655 + git ls-remote http://dpdk.org/git/dpdk-stable v23.11 00:00:33.667 [Pipeline] } 00:00:33.686 [Pipeline] // retry 00:00:33.692 [Pipeline] } 00:00:33.709 [Pipeline] // withCredentials 00:00:33.721 [Pipeline] httpRequest 00:00:34.133 [Pipeline] echo 00:00:34.135 Sorcerer 10.211.164.20 is alive 00:00:34.146 [Pipeline] retry 00:00:34.149 [Pipeline] { 00:00:34.164 [Pipeline] httpRequest 00:00:34.170 HttpMethod: GET 00:00:34.171 URL: http://10.211.164.20/packages/dpdk_d15625009dced269fcec27fc81dd74fd58d54cdb.tar.gz 00:00:34.171 Sending request to url: http://10.211.164.20/packages/dpdk_d15625009dced269fcec27fc81dd74fd58d54cdb.tar.gz 00:00:34.183 Response Code: HTTP/1.1 200 OK 00:00:34.184 Success: Status code 200 is in the accepted range: 200,404 00:00:34.185 Saving response body to /var/jenkins/workspace/nvmf-tcp-vg-autotest/dpdk_d15625009dced269fcec27fc81dd74fd58d54cdb.tar.gz 00:01:00.805 [Pipeline] } 00:01:00.822 [Pipeline] // retry 
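[Editor's note] For anyone reproducing the package-cache steps above outside Jenkins, the retry { httpRequest } downloads and the `tar --no-same-owner -xf` extractions used throughout this stage amount to a fetch-and-extract loop. A minimal bash sketch follows; the URL is the one printed in the log (an internal Sorcerer cache), while the retry count, backoff, and curl flags are illustrative choices rather than the pipeline's actual settings.

    #!/usr/bin/env bash
    # Sketch of "retry { httpRequest GET ... }" followed by tar extraction.
    set -euo pipefail

    url="http://10.211.164.20/packages/dpdk_d15625009dced269fcec27fc81dd74fd58d54cdb.tar.gz"
    dest="$(basename "$url")"

    ok=0
    for attempt in 1 2 3; do
        if curl -fSL -o "$dest" "$url"; then
            ok=1
            break
        fi
        echo "download attempt $attempt failed, retrying..." >&2
        sleep 5
    done
    [[ $ok -eq 1 ]] || { echo "giving up on $url" >&2; exit 1; }

    # --no-same-owner keeps the extracted files owned by the invoking user, as in the log.
    tar --no-same-owner -xf "$dest"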
00:01:00.829 [Pipeline] sh 00:01:01.112 + tar --no-same-owner -xf dpdk_d15625009dced269fcec27fc81dd74fd58d54cdb.tar.gz 00:01:02.501 [Pipeline] sh 00:01:02.789 + git -C dpdk log --oneline -n5 00:01:02.789 eeb0605f11 version: 23.11.0 00:01:02.789 238778122a doc: update release notes for 23.11 00:01:02.789 46aa6b3cfc doc: fix description of RSS features 00:01:02.789 dd88f51a57 devtools: forbid DPDK API in cnxk base driver 00:01:02.789 7e421ae345 devtools: support skipping forbid rule check 00:01:02.817 [Pipeline] writeFile 00:01:02.827 [Pipeline] sh 00:01:03.102 + jbp/jenkins/jjb-config/jobs/scripts/autorun_quirks.sh 00:01:03.113 [Pipeline] sh 00:01:03.391 + cat autorun-spdk.conf 00:01:03.391 SPDK_RUN_FUNCTIONAL_TEST=1 00:01:03.391 SPDK_TEST_NVMF=1 00:01:03.391 SPDK_TEST_NVMF_TRANSPORT=tcp 00:01:03.391 SPDK_TEST_USDT=1 00:01:03.391 SPDK_RUN_UBSAN=1 00:01:03.391 SPDK_TEST_NVMF_MDNS=1 00:01:03.391 NET_TYPE=virt 00:01:03.391 SPDK_JSONRPC_GO_CLIENT=1 00:01:03.391 SPDK_TEST_NATIVE_DPDK=v23.11 00:01:03.391 SPDK_RUN_EXTERNAL_DPDK=/home/vagrant/spdk_repo/dpdk/build 00:01:03.391 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:01:03.398 RUN_NIGHTLY=1 00:01:03.399 [Pipeline] } 00:01:03.412 [Pipeline] // stage 00:01:03.426 [Pipeline] stage 00:01:03.428 [Pipeline] { (Run VM) 00:01:03.440 [Pipeline] sh 00:01:03.719 + jbp/jenkins/jjb-config/jobs/scripts/prepare_nvme.sh 00:01:03.719 + echo 'Start stage prepare_nvme.sh' 00:01:03.719 Start stage prepare_nvme.sh 00:01:03.719 + [[ -n 4 ]] 00:01:03.719 + disk_prefix=ex4 00:01:03.719 + [[ -n /var/jenkins/workspace/nvmf-tcp-vg-autotest ]] 00:01:03.719 + [[ -e /var/jenkins/workspace/nvmf-tcp-vg-autotest/autorun-spdk.conf ]] 00:01:03.719 + source /var/jenkins/workspace/nvmf-tcp-vg-autotest/autorun-spdk.conf 00:01:03.719 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:01:03.719 ++ SPDK_TEST_NVMF=1 00:01:03.719 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:01:03.719 ++ SPDK_TEST_USDT=1 00:01:03.719 ++ SPDK_RUN_UBSAN=1 00:01:03.719 ++ SPDK_TEST_NVMF_MDNS=1 00:01:03.719 ++ NET_TYPE=virt 00:01:03.719 ++ SPDK_JSONRPC_GO_CLIENT=1 00:01:03.719 ++ SPDK_TEST_NATIVE_DPDK=v23.11 00:01:03.719 ++ SPDK_RUN_EXTERNAL_DPDK=/home/vagrant/spdk_repo/dpdk/build 00:01:03.719 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:01:03.719 ++ RUN_NIGHTLY=1 00:01:03.719 + cd /var/jenkins/workspace/nvmf-tcp-vg-autotest 00:01:03.719 + nvme_files=() 00:01:03.719 + declare -A nvme_files 00:01:03.719 + backend_dir=/var/lib/libvirt/images/backends 00:01:03.719 + nvme_files['nvme.img']=5G 00:01:03.719 + nvme_files['nvme-cmb.img']=5G 00:01:03.719 + nvme_files['nvme-multi0.img']=4G 00:01:03.719 + nvme_files['nvme-multi1.img']=4G 00:01:03.719 + nvme_files['nvme-multi2.img']=4G 00:01:03.719 + nvme_files['nvme-openstack.img']=8G 00:01:03.719 + nvme_files['nvme-zns.img']=5G 00:01:03.719 + (( SPDK_TEST_NVME_PMR == 1 )) 00:01:03.719 + (( SPDK_TEST_FTL == 1 )) 00:01:03.719 + (( SPDK_TEST_NVME_FDP == 1 )) 00:01:03.719 + [[ ! 
-d /var/lib/libvirt/images/backends ]] 00:01:03.719 + for nvme in "${!nvme_files[@]}" 00:01:03.719 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex4-nvme-multi2.img -s 4G 00:01:03.719 Formatting '/var/lib/libvirt/images/backends/ex4-nvme-multi2.img', fmt=raw size=4294967296 preallocation=falloc 00:01:03.719 + for nvme in "${!nvme_files[@]}" 00:01:03.719 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex4-nvme-cmb.img -s 5G 00:01:03.719 Formatting '/var/lib/libvirt/images/backends/ex4-nvme-cmb.img', fmt=raw size=5368709120 preallocation=falloc 00:01:03.719 + for nvme in "${!nvme_files[@]}" 00:01:03.719 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex4-nvme-openstack.img -s 8G 00:01:03.719 Formatting '/var/lib/libvirt/images/backends/ex4-nvme-openstack.img', fmt=raw size=8589934592 preallocation=falloc 00:01:03.719 + for nvme in "${!nvme_files[@]}" 00:01:03.719 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex4-nvme-zns.img -s 5G 00:01:03.719 Formatting '/var/lib/libvirt/images/backends/ex4-nvme-zns.img', fmt=raw size=5368709120 preallocation=falloc 00:01:03.719 + for nvme in "${!nvme_files[@]}" 00:01:03.719 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex4-nvme-multi1.img -s 4G 00:01:03.719 Formatting '/var/lib/libvirt/images/backends/ex4-nvme-multi1.img', fmt=raw size=4294967296 preallocation=falloc 00:01:03.719 + for nvme in "${!nvme_files[@]}" 00:01:03.719 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex4-nvme-multi0.img -s 4G 00:01:03.979 Formatting '/var/lib/libvirt/images/backends/ex4-nvme-multi0.img', fmt=raw size=4294967296 preallocation=falloc 00:01:03.979 + for nvme in "${!nvme_files[@]}" 00:01:03.979 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex4-nvme.img -s 5G 00:01:03.979 Formatting '/var/lib/libvirt/images/backends/ex4-nvme.img', fmt=raw size=5368709120 preallocation=falloc 00:01:03.979 ++ sudo grep -rl ex4-nvme.img /etc/libvirt/qemu 00:01:03.979 + echo 'End stage prepare_nvme.sh' 00:01:03.979 End stage prepare_nvme.sh 00:01:03.991 [Pipeline] sh 00:01:04.273 + DISTRO=fedora39 CPUS=10 RAM=12288 jbp/jenkins/jjb-config/jobs/scripts/vagrant_create_vm.sh 00:01:04.273 Setup: -n 10 -s 12288 -x http://proxy-dmz.intel.com:911 -p libvirt --qemu-emulator=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 --nic-model=e1000 -b /var/lib/libvirt/images/backends/ex4-nvme.img -b /var/lib/libvirt/images/backends/ex4-nvme-multi0.img,nvme,/var/lib/libvirt/images/backends/ex4-nvme-multi1.img:/var/lib/libvirt/images/backends/ex4-nvme-multi2.img -H -a -v -f fedora39 00:01:04.273 00:01:04.273 DIR=/var/jenkins/workspace/nvmf-tcp-vg-autotest/spdk/scripts/vagrant 00:01:04.273 SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-vg-autotest/spdk 00:01:04.273 VAGRANT_TARGET=/var/jenkins/workspace/nvmf-tcp-vg-autotest 00:01:04.273 HELP=0 00:01:04.273 DRY_RUN=0 00:01:04.273 NVME_FILE=/var/lib/libvirt/images/backends/ex4-nvme.img,/var/lib/libvirt/images/backends/ex4-nvme-multi0.img, 00:01:04.273 NVME_DISKS_TYPE=nvme,nvme, 00:01:04.273 NVME_AUTO_CREATE=0 00:01:04.273 NVME_DISKS_NAMESPACES=,/var/lib/libvirt/images/backends/ex4-nvme-multi1.img:/var/lib/libvirt/images/backends/ex4-nvme-multi2.img, 00:01:04.273 NVME_CMB=,, 00:01:04.273 NVME_PMR=,, 00:01:04.273 NVME_ZNS=,, 00:01:04.273 NVME_MS=,, 00:01:04.273 NVME_FDP=,, 00:01:04.273 
SPDK_VAGRANT_DISTRO=fedora39 00:01:04.273 SPDK_VAGRANT_VMCPU=10 00:01:04.273 SPDK_VAGRANT_VMRAM=12288 00:01:04.273 SPDK_VAGRANT_PROVIDER=libvirt 00:01:04.274 SPDK_VAGRANT_HTTP_PROXY=http://proxy-dmz.intel.com:911 00:01:04.274 SPDK_QEMU_EMULATOR=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 00:01:04.274 SPDK_OPENSTACK_NETWORK=0 00:01:04.274 VAGRANT_PACKAGE_BOX=0 00:01:04.274 VAGRANTFILE=/var/jenkins/workspace/nvmf-tcp-vg-autotest/spdk/scripts/vagrant/Vagrantfile 00:01:04.274 FORCE_DISTRO=true 00:01:04.274 VAGRANT_BOX_VERSION= 00:01:04.274 EXTRA_VAGRANTFILES= 00:01:04.274 NIC_MODEL=e1000 00:01:04.274 00:01:04.274 mkdir: created directory '/var/jenkins/workspace/nvmf-tcp-vg-autotest/fedora39-libvirt' 00:01:04.274 /var/jenkins/workspace/nvmf-tcp-vg-autotest/fedora39-libvirt /var/jenkins/workspace/nvmf-tcp-vg-autotest 00:01:07.563 Bringing machine 'default' up with 'libvirt' provider... 00:01:07.563 ==> default: Creating image (snapshot of base box volume). 00:01:07.822 ==> default: Creating domain with the following settings... 00:01:07.822 ==> default: -- Name: fedora39-39-1.5-1721788873-2326_default_1733186119_97b00ed39bd28c4d43da 00:01:07.822 ==> default: -- Domain type: kvm 00:01:07.822 ==> default: -- Cpus: 10 00:01:07.822 ==> default: -- Feature: acpi 00:01:07.822 ==> default: -- Feature: apic 00:01:07.822 ==> default: -- Feature: pae 00:01:07.822 ==> default: -- Memory: 12288M 00:01:07.822 ==> default: -- Memory Backing: hugepages: 00:01:07.822 ==> default: -- Management MAC: 00:01:07.822 ==> default: -- Loader: 00:01:07.822 ==> default: -- Nvram: 00:01:07.822 ==> default: -- Base box: spdk/fedora39 00:01:07.822 ==> default: -- Storage pool: default 00:01:07.822 ==> default: -- Image: /var/lib/libvirt/images/fedora39-39-1.5-1721788873-2326_default_1733186119_97b00ed39bd28c4d43da.img (20G) 00:01:07.822 ==> default: -- Volume Cache: default 00:01:07.822 ==> default: -- Kernel: 00:01:07.822 ==> default: -- Initrd: 00:01:07.822 ==> default: -- Graphics Type: vnc 00:01:07.822 ==> default: -- Graphics Port: -1 00:01:07.822 ==> default: -- Graphics IP: 127.0.0.1 00:01:07.822 ==> default: -- Graphics Password: Not defined 00:01:07.822 ==> default: -- Video Type: cirrus 00:01:07.822 ==> default: -- Video VRAM: 9216 00:01:07.822 ==> default: -- Sound Type: 00:01:07.822 ==> default: -- Keymap: en-us 00:01:07.822 ==> default: -- TPM Path: 00:01:07.822 ==> default: -- INPUT: type=mouse, bus=ps2 00:01:07.822 ==> default: -- Command line args: 00:01:07.822 ==> default: -> value=-device, 00:01:07.822 ==> default: -> value=nvme,id=nvme-0,serial=12340, 00:01:07.822 ==> default: -> value=-drive, 00:01:07.822 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex4-nvme.img,if=none,id=nvme-0-drive0, 00:01:07.822 ==> default: -> value=-device, 00:01:07.822 ==> default: -> value=nvme-ns,drive=nvme-0-drive0,bus=nvme-0,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:01:07.822 ==> default: -> value=-device, 00:01:07.822 ==> default: -> value=nvme,id=nvme-1,serial=12341, 00:01:07.822 ==> default: -> value=-drive, 00:01:07.822 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex4-nvme-multi0.img,if=none,id=nvme-1-drive0, 00:01:07.822 ==> default: -> value=-device, 00:01:07.822 ==> default: -> value=nvme-ns,drive=nvme-1-drive0,bus=nvme-1,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:01:07.822 ==> default: -> value=-drive, 00:01:07.822 ==> default: -> 
value=format=raw,file=/var/lib/libvirt/images/backends/ex4-nvme-multi1.img,if=none,id=nvme-1-drive1, 00:01:07.822 ==> default: -> value=-device, 00:01:07.822 ==> default: -> value=nvme-ns,drive=nvme-1-drive1,bus=nvme-1,nsid=2,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:01:07.822 ==> default: -> value=-drive, 00:01:07.822 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex4-nvme-multi2.img,if=none,id=nvme-1-drive2, 00:01:07.822 ==> default: -> value=-device, 00:01:07.822 ==> default: -> value=nvme-ns,drive=nvme-1-drive2,bus=nvme-1,nsid=3,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:01:08.081 ==> default: Creating shared folders metadata... 00:01:08.081 ==> default: Starting domain. 00:01:09.460 ==> default: Waiting for domain to get an IP address... 00:01:27.567 ==> default: Waiting for SSH to become available... 00:01:27.567 ==> default: Configuring and enabling network interfaces... 00:01:30.855 default: SSH address: 192.168.121.81:22 00:01:30.855 default: SSH username: vagrant 00:01:30.855 default: SSH auth method: private key 00:01:32.760 ==> default: Rsyncing folder: /mnt/jenkins_nvme/jenkins/workspace/nvmf-tcp-vg-autotest/spdk/ => /home/vagrant/spdk_repo/spdk 00:01:40.881 ==> default: Rsyncing folder: /mnt/jenkins_nvme/jenkins/workspace/nvmf-tcp-vg-autotest/dpdk/ => /home/vagrant/spdk_repo/dpdk 00:01:46.200 ==> default: Mounting SSHFS shared folder... 00:01:48.107 ==> default: Mounting folder via SSHFS: /mnt/jenkins_nvme/jenkins/workspace/nvmf-tcp-vg-autotest/fedora39-libvirt/output => /home/vagrant/spdk_repo/output 00:01:48.107 ==> default: Checking Mount.. 00:01:49.485 ==> default: Folder Successfully Mounted! 00:01:49.485 ==> default: Running provisioner: file... 00:01:50.050 default: ~/.gitconfig => .gitconfig 00:01:50.617 00:01:50.617 SUCCESS! 00:01:50.617 00:01:50.617 cd to /var/jenkins/workspace/nvmf-tcp-vg-autotest/fedora39-libvirt and type "vagrant ssh" to use. 00:01:50.617 Use vagrant "suspend" and vagrant "resume" to stop and start. 00:01:50.617 Use vagrant "destroy" followed by "rm -rf /var/jenkins/workspace/nvmf-tcp-vg-autotest/fedora39-libvirt" to destroy all trace of vm. 00:01:50.617 00:01:50.626 [Pipeline] } 00:01:50.639 [Pipeline] // stage 00:01:50.648 [Pipeline] dir 00:01:50.649 Running in /var/jenkins/workspace/nvmf-tcp-vg-autotest/fedora39-libvirt 00:01:50.651 [Pipeline] { 00:01:50.663 [Pipeline] catchError 00:01:50.665 [Pipeline] { 00:01:50.678 [Pipeline] sh 00:01:50.956 + vagrant ssh-config --host vagrant 00:01:50.956 + sed -ne /^Host/,$p 00:01:50.956 + tee ssh_conf 00:01:53.488 Host vagrant 00:01:53.488 HostName 192.168.121.81 00:01:53.488 User vagrant 00:01:53.488 Port 22 00:01:53.488 UserKnownHostsFile /dev/null 00:01:53.488 StrictHostKeyChecking no 00:01:53.488 PasswordAuthentication no 00:01:53.488 IdentityFile /var/lib/libvirt/images/.vagrant.d/boxes/spdk-VAGRANTSLASH-fedora39/39-1.5-1721788873-2326/libvirt/fedora39 00:01:53.488 IdentitiesOnly yes 00:01:53.488 LogLevel FATAL 00:01:53.488 ForwardAgent yes 00:01:53.488 ForwardX11 yes 00:01:53.488 00:01:53.500 [Pipeline] withEnv 00:01:53.502 [Pipeline] { 00:01:53.517 [Pipeline] sh 00:01:53.828 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant #!/bin/bash 00:01:53.828 source /etc/os-release 00:01:53.828 [[ -e /image.version ]] && img=$(< /image.version) 00:01:53.828 # Minimal, systemd-like check. 
00:01:53.828 if [[ -e /.dockerenv ]]; then 00:01:53.828 # Clear garbage from the node's name: 00:01:53.828 # agt-er_autotest_547-896 -> autotest_547-896 00:01:53.828 # $HOSTNAME is the actual container id 00:01:53.828 agent=$HOSTNAME@${DOCKER_SWARM_PLUGIN_JENKINS_AGENT_NAME#*_} 00:01:53.828 if grep -q "/etc/hostname" /proc/self/mountinfo; then 00:01:53.828 # We can assume this is a mount from a host where container is running, 00:01:53.828 # so fetch its hostname to easily identify the target swarm worker. 00:01:53.828 container="$(< /etc/hostname) ($agent)" 00:01:53.828 else 00:01:53.828 # Fallback 00:01:53.828 container=$agent 00:01:53.828 fi 00:01:53.828 fi 00:01:53.828 echo "${NAME} ${VERSION_ID}|$(uname -r)|${img:-N/A}|${container:-N/A}" 00:01:53.828 00:01:53.901 [Pipeline] } 00:01:53.917 [Pipeline] // withEnv 00:01:53.925 [Pipeline] setCustomBuildProperty 00:01:53.939 [Pipeline] stage 00:01:53.942 [Pipeline] { (Tests) 00:01:53.960 [Pipeline] sh 00:01:54.241 + scp -F ssh_conf -r /var/jenkins/workspace/nvmf-tcp-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh vagrant@vagrant:./ 00:01:54.515 [Pipeline] sh 00:01:54.801 + scp -F ssh_conf -r /var/jenkins/workspace/nvmf-tcp-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/pkgdep-autoruner.sh vagrant@vagrant:./ 00:01:55.075 [Pipeline] timeout 00:01:55.076 Timeout set to expire in 1 hr 0 min 00:01:55.078 [Pipeline] { 00:01:55.094 [Pipeline] sh 00:01:55.378 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant git -C spdk_repo/spdk reset --hard 00:01:55.946 HEAD is now at c13c99a5e test: Various fixes for Fedora40 00:01:55.959 [Pipeline] sh 00:01:56.240 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant sudo chown vagrant:vagrant spdk_repo 00:01:56.513 [Pipeline] sh 00:01:56.795 + scp -F ssh_conf -r /var/jenkins/workspace/nvmf-tcp-vg-autotest/autorun-spdk.conf vagrant@vagrant:spdk_repo 00:01:57.071 [Pipeline] sh 00:01:57.352 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant JOB_BASE_NAME=nvmf-tcp-vg-autotest ./autoruner.sh spdk_repo 00:01:57.611 ++ readlink -f spdk_repo 00:01:57.611 + DIR_ROOT=/home/vagrant/spdk_repo 00:01:57.611 + [[ -n /home/vagrant/spdk_repo ]] 00:01:57.611 + DIR_SPDK=/home/vagrant/spdk_repo/spdk 00:01:57.611 + DIR_OUTPUT=/home/vagrant/spdk_repo/output 00:01:57.611 + [[ -d /home/vagrant/spdk_repo/spdk ]] 00:01:57.611 + [[ ! 
-d /home/vagrant/spdk_repo/output ]] 00:01:57.611 + [[ -d /home/vagrant/spdk_repo/output ]] 00:01:57.611 + [[ nvmf-tcp-vg-autotest == pkgdep-* ]] 00:01:57.611 + cd /home/vagrant/spdk_repo 00:01:57.611 + source /etc/os-release 00:01:57.611 ++ NAME='Fedora Linux' 00:01:57.611 ++ VERSION='39 (Cloud Edition)' 00:01:57.611 ++ ID=fedora 00:01:57.611 ++ VERSION_ID=39 00:01:57.611 ++ VERSION_CODENAME= 00:01:57.611 ++ PLATFORM_ID=platform:f39 00:01:57.611 ++ PRETTY_NAME='Fedora Linux 39 (Cloud Edition)' 00:01:57.611 ++ ANSI_COLOR='0;38;2;60;110;180' 00:01:57.611 ++ LOGO=fedora-logo-icon 00:01:57.611 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:39 00:01:57.611 ++ HOME_URL=https://fedoraproject.org/ 00:01:57.611 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f39/system-administrators-guide/ 00:01:57.611 ++ SUPPORT_URL=https://ask.fedoraproject.org/ 00:01:57.611 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/ 00:01:57.611 ++ REDHAT_BUGZILLA_PRODUCT=Fedora 00:01:57.611 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=39 00:01:57.611 ++ REDHAT_SUPPORT_PRODUCT=Fedora 00:01:57.611 ++ REDHAT_SUPPORT_PRODUCT_VERSION=39 00:01:57.611 ++ SUPPORT_END=2024-11-12 00:01:57.611 ++ VARIANT='Cloud Edition' 00:01:57.611 ++ VARIANT_ID=cloud 00:01:57.611 + uname -a 00:01:57.611 Linux fedora39-cloud-1721788873-2326 6.8.9-200.fc39.x86_64 #1 SMP PREEMPT_DYNAMIC Wed Jul 24 03:04:40 UTC 2024 x86_64 GNU/Linux 00:01:57.611 + sudo /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:01:57.611 Hugepages 00:01:57.611 node hugesize free / total 00:01:57.611 node0 1048576kB 0 / 0 00:01:57.611 node0 2048kB 0 / 0 00:01:57.611 00:01:57.611 Type BDF Vendor Device NUMA Driver Device Block devices 00:01:57.611 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:01:57.611 NVMe 0000:00:06.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:01:57.870 NVMe 0000:00:07.0 1b36 0010 unknown nvme nvme1 nvme1n1 nvme1n2 nvme1n3 00:01:57.870 + rm -f /tmp/spdk-ld-path 00:01:57.870 + source autorun-spdk.conf 00:01:57.870 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:01:57.870 ++ SPDK_TEST_NVMF=1 00:01:57.870 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:01:57.870 ++ SPDK_TEST_USDT=1 00:01:57.870 ++ SPDK_RUN_UBSAN=1 00:01:57.870 ++ SPDK_TEST_NVMF_MDNS=1 00:01:57.870 ++ NET_TYPE=virt 00:01:57.870 ++ SPDK_JSONRPC_GO_CLIENT=1 00:01:57.870 ++ SPDK_TEST_NATIVE_DPDK=v23.11 00:01:57.870 ++ SPDK_RUN_EXTERNAL_DPDK=/home/vagrant/spdk_repo/dpdk/build 00:01:57.870 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:01:57.870 ++ RUN_NIGHTLY=1 00:01:57.870 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 )) 00:01:57.870 + [[ -n '' ]] 00:01:57.870 + sudo git config --global --add safe.directory /home/vagrant/spdk_repo/spdk 00:01:57.870 + for M in /var/spdk/build-*-manifest.txt 00:01:57.870 + [[ -f /var/spdk/build-kernel-manifest.txt ]] 00:01:57.870 + cp /var/spdk/build-kernel-manifest.txt /home/vagrant/spdk_repo/output/ 00:01:57.870 + for M in /var/spdk/build-*-manifest.txt 00:01:57.870 + [[ -f /var/spdk/build-pkg-manifest.txt ]] 00:01:57.870 + cp /var/spdk/build-pkg-manifest.txt /home/vagrant/spdk_repo/output/ 00:01:57.870 + for M in /var/spdk/build-*-manifest.txt 00:01:57.870 + [[ -f /var/spdk/build-repo-manifest.txt ]] 00:01:57.870 + cp /var/spdk/build-repo-manifest.txt /home/vagrant/spdk_repo/output/ 00:01:57.870 ++ uname 00:01:57.870 + [[ Linux == \L\i\n\u\x ]] 00:01:57.870 + sudo dmesg -T 00:01:57.870 + sudo dmesg --clear 00:01:57.870 + dmesg_pid=5961 00:01:57.870 + [[ Fedora Linux == FreeBSD ]] 00:01:57.870 + export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:01:57.870 + 
UNBIND_ENTIRE_IOMMU_GROUP=yes 00:01:57.870 + sudo dmesg -Tw 00:01:57.870 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]] 00:01:57.870 + [[ -x /usr/src/fio-static/fio ]] 00:01:57.870 + export FIO_BIN=/usr/src/fio-static/fio 00:01:57.870 + FIO_BIN=/usr/src/fio-static/fio 00:01:57.870 + [[ '' == \/\q\e\m\u\_\v\f\i\o\/* ]] 00:01:57.870 + [[ ! -v VFIO_QEMU_BIN ]] 00:01:57.870 + [[ -e /usr/local/qemu/vfio-user-latest ]] 00:01:57.870 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:01:57.870 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:01:57.870 + [[ -e /usr/local/qemu/vanilla-latest ]] 00:01:57.870 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:01:57.870 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:01:57.870 + spdk/autorun.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:01:57.870 Test configuration: 00:01:57.870 SPDK_RUN_FUNCTIONAL_TEST=1 00:01:57.870 SPDK_TEST_NVMF=1 00:01:57.870 SPDK_TEST_NVMF_TRANSPORT=tcp 00:01:57.870 SPDK_TEST_USDT=1 00:01:57.870 SPDK_RUN_UBSAN=1 00:01:57.870 SPDK_TEST_NVMF_MDNS=1 00:01:57.870 NET_TYPE=virt 00:01:57.870 SPDK_JSONRPC_GO_CLIENT=1 00:01:57.870 SPDK_TEST_NATIVE_DPDK=v23.11 00:01:57.870 SPDK_RUN_EXTERNAL_DPDK=/home/vagrant/spdk_repo/dpdk/build 00:01:57.870 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:01:57.870 RUN_NIGHTLY=1 00:36:10 -- common/autotest_common.sh@1689 -- $ [[ n == y ]] 00:01:57.870 00:36:10 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:01:57.870 00:36:10 -- scripts/common.sh@433 -- $ [[ -e /bin/wpdk_common.sh ]] 00:01:57.870 00:36:10 -- scripts/common.sh@441 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:01:57.870 00:36:10 -- scripts/common.sh@442 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:01:57.870 00:36:10 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:57.870 00:36:10 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:57.870 00:36:10 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:57.870 00:36:10 -- paths/export.sh@5 -- $ export PATH 00:01:57.871 00:36:10 -- paths/export.sh@6 -- $ echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:57.871 00:36:10 -- common/autobuild_common.sh@439 -- $ out=/home/vagrant/spdk_repo/spdk/../output 00:01:57.871 00:36:10 -- common/autobuild_common.sh@440 -- $ date +%s 00:01:57.871 00:36:10 -- common/autobuild_common.sh@440 -- $ mktemp -dt spdk_1733186170.XXXXXX 00:01:58.130 00:36:10 -- common/autobuild_common.sh@440 -- $ SPDK_WORKSPACE=/tmp/spdk_1733186170.TBJ4Af 00:01:58.130 00:36:10 -- common/autobuild_common.sh@442 -- $ [[ -n '' ]] 00:01:58.130 00:36:10 -- common/autobuild_common.sh@446 -- $ '[' -n v23.11 ']' 00:01:58.130 00:36:10 -- common/autobuild_common.sh@447 -- $ dirname /home/vagrant/spdk_repo/dpdk/build 00:01:58.130 00:36:10 -- common/autobuild_common.sh@447 -- $ scanbuild_exclude=' --exclude /home/vagrant/spdk_repo/dpdk' 00:01:58.130 00:36:10 -- common/autobuild_common.sh@453 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp' 00:01:58.130 00:36:10 -- common/autobuild_common.sh@455 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/dpdk --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs' 00:01:58.130 00:36:10 -- common/autobuild_common.sh@456 -- $ get_config_params 00:01:58.130 00:36:10 -- common/autotest_common.sh@397 -- $ xtrace_disable 00:01:58.130 00:36:10 -- common/autotest_common.sh@10 -- $ set +x 00:01:58.130 00:36:10 -- common/autobuild_common.sh@456 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-usdt --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-dpdk=/home/vagrant/spdk_repo/dpdk/build --with-avahi --with-golang' 00:01:58.130 00:36:10 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD= 00:01:58.130 00:36:10 -- spdk/autobuild.sh@12 -- $ umask 022 00:01:58.130 00:36:10 -- spdk/autobuild.sh@13 -- $ cd /home/vagrant/spdk_repo/spdk 00:01:58.130 00:36:10 -- spdk/autobuild.sh@16 -- $ date -u 00:01:58.130 Tue Dec 3 12:36:10 AM UTC 2024 00:01:58.130 00:36:10 -- spdk/autobuild.sh@17 -- $ git describe --tags 00:01:58.130 LTS-67-gc13c99a5e 00:01:58.130 00:36:10 -- spdk/autobuild.sh@19 -- $ '[' 0 -eq 1 ']' 00:01:58.130 00:36:10 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']' 00:01:58.130 00:36:10 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan' 00:01:58.130 00:36:10 -- common/autotest_common.sh@1087 -- $ '[' 3 -le 1 ']' 00:01:58.130 00:36:10 -- common/autotest_common.sh@1093 -- $ xtrace_disable 00:01:58.130 00:36:10 -- common/autotest_common.sh@10 -- $ set +x 00:01:58.130 ************************************ 00:01:58.130 START TEST ubsan 00:01:58.130 ************************************ 00:01:58.130 using ubsan 00:01:58.130 00:36:10 -- common/autotest_common.sh@1114 -- $ echo 'using ubsan' 00:01:58.130 00:01:58.130 real 0m0.000s 00:01:58.131 user 0m0.000s 00:01:58.131 sys 0m0.000s 00:01:58.131 00:36:10 -- common/autotest_common.sh@1115 -- $ xtrace_disable 00:01:58.131 ************************************ 00:01:58.131 END TEST ubsan 00:01:58.131 ************************************ 00:01:58.131 00:36:10 -- common/autotest_common.sh@10 -- $ set +x 00:01:58.131 
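[Editor's note] The START TEST / END TEST banners and the real/user/sys timings around the ubsan check above come from SPDK's run_test wrapper. As a rough illustration only (the actual helper in the SPDK tree does considerably more, e.g. xtrace management and per-test bookkeeping), a banner-and-time wrapper of that shape can be sketched as:

    #!/usr/bin/env bash
    # Simplified stand-in for the run_test wrapper seen in the log output.
    run_test() {
        local name=$1
        shift
        echo "************************************"
        echo "START TEST $name"
        echo "************************************"
        time "$@"          # run the test command, printing real/user/sys
        local rc=$?
        echo "************************************"
        echo "END TEST $name"
        echo "************************************"
        return "$rc"
    }

    # Usage mirroring the log: prints the banners, "using ubsan", and the timings.
    run_test ubsan echo 'using ubsan'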
00:36:10 -- spdk/autobuild.sh@27 -- $ '[' -n v23.11 ']' 00:01:58.131 00:36:10 -- spdk/autobuild.sh@28 -- $ build_native_dpdk 00:01:58.131 00:36:10 -- common/autobuild_common.sh@432 -- $ run_test build_native_dpdk _build_native_dpdk 00:01:58.131 00:36:10 -- common/autotest_common.sh@1087 -- $ '[' 2 -le 1 ']' 00:01:58.131 00:36:10 -- common/autotest_common.sh@1093 -- $ xtrace_disable 00:01:58.131 00:36:10 -- common/autotest_common.sh@10 -- $ set +x 00:01:58.131 ************************************ 00:01:58.131 START TEST build_native_dpdk 00:01:58.131 ************************************ 00:01:58.131 00:36:10 -- common/autotest_common.sh@1114 -- $ _build_native_dpdk 00:01:58.131 00:36:10 -- common/autobuild_common.sh@48 -- $ local external_dpdk_dir 00:01:58.131 00:36:10 -- common/autobuild_common.sh@49 -- $ local external_dpdk_base_dir 00:01:58.131 00:36:10 -- common/autobuild_common.sh@50 -- $ local compiler_version 00:01:58.131 00:36:10 -- common/autobuild_common.sh@51 -- $ local compiler 00:01:58.131 00:36:10 -- common/autobuild_common.sh@52 -- $ local dpdk_kmods 00:01:58.131 00:36:10 -- common/autobuild_common.sh@53 -- $ local repo=dpdk 00:01:58.131 00:36:10 -- common/autobuild_common.sh@55 -- $ compiler=gcc 00:01:58.131 00:36:10 -- common/autobuild_common.sh@61 -- $ export CC=gcc 00:01:58.131 00:36:10 -- common/autobuild_common.sh@61 -- $ CC=gcc 00:01:58.131 00:36:10 -- common/autobuild_common.sh@63 -- $ [[ gcc != *clang* ]] 00:01:58.131 00:36:10 -- common/autobuild_common.sh@63 -- $ [[ gcc != *gcc* ]] 00:01:58.131 00:36:10 -- common/autobuild_common.sh@68 -- $ gcc -dumpversion 00:01:58.131 00:36:10 -- common/autobuild_common.sh@68 -- $ compiler_version=13 00:01:58.131 00:36:10 -- common/autobuild_common.sh@69 -- $ compiler_version=13 00:01:58.131 00:36:10 -- common/autobuild_common.sh@70 -- $ external_dpdk_dir=/home/vagrant/spdk_repo/dpdk/build 00:01:58.131 00:36:10 -- common/autobuild_common.sh@71 -- $ dirname /home/vagrant/spdk_repo/dpdk/build 00:01:58.131 00:36:10 -- common/autobuild_common.sh@71 -- $ external_dpdk_base_dir=/home/vagrant/spdk_repo/dpdk 00:01:58.131 00:36:10 -- common/autobuild_common.sh@73 -- $ [[ ! 
-d /home/vagrant/spdk_repo/dpdk ]] 00:01:58.131 00:36:10 -- common/autobuild_common.sh@82 -- $ orgdir=/home/vagrant/spdk_repo/spdk 00:01:58.131 00:36:10 -- common/autobuild_common.sh@83 -- $ git -C /home/vagrant/spdk_repo/dpdk log --oneline -n 5 00:01:58.131 eeb0605f11 version: 23.11.0 00:01:58.131 238778122a doc: update release notes for 23.11 00:01:58.131 46aa6b3cfc doc: fix description of RSS features 00:01:58.131 dd88f51a57 devtools: forbid DPDK API in cnxk base driver 00:01:58.131 7e421ae345 devtools: support skipping forbid rule check 00:01:58.131 00:36:10 -- common/autobuild_common.sh@85 -- $ dpdk_cflags='-fPIC -g -fcommon' 00:01:58.131 00:36:10 -- common/autobuild_common.sh@86 -- $ dpdk_ldflags= 00:01:58.131 00:36:10 -- common/autobuild_common.sh@87 -- $ dpdk_ver=23.11.0 00:01:58.131 00:36:10 -- common/autobuild_common.sh@89 -- $ [[ gcc == *gcc* ]] 00:01:58.131 00:36:10 -- common/autobuild_common.sh@89 -- $ [[ 13 -ge 5 ]] 00:01:58.131 00:36:10 -- common/autobuild_common.sh@90 -- $ dpdk_cflags+=' -Werror' 00:01:58.131 00:36:10 -- common/autobuild_common.sh@93 -- $ [[ gcc == *gcc* ]] 00:01:58.131 00:36:10 -- common/autobuild_common.sh@93 -- $ [[ 13 -ge 10 ]] 00:01:58.131 00:36:10 -- common/autobuild_common.sh@94 -- $ dpdk_cflags+=' -Wno-stringop-overflow' 00:01:58.131 00:36:10 -- common/autobuild_common.sh@100 -- $ DPDK_DRIVERS=("bus" "bus/pci" "bus/vdev" "mempool/ring" "net/i40e" "net/i40e/base") 00:01:58.131 00:36:10 -- common/autobuild_common.sh@102 -- $ local mlx5_libs_added=n 00:01:58.131 00:36:10 -- common/autobuild_common.sh@103 -- $ [[ 0 -eq 1 ]] 00:01:58.131 00:36:10 -- common/autobuild_common.sh@103 -- $ [[ 0 -eq 1 ]] 00:01:58.131 00:36:10 -- common/autobuild_common.sh@139 -- $ [[ 0 -eq 1 ]] 00:01:58.131 00:36:10 -- common/autobuild_common.sh@167 -- $ cd /home/vagrant/spdk_repo/dpdk 00:01:58.131 00:36:10 -- common/autobuild_common.sh@168 -- $ uname -s 00:01:58.131 00:36:10 -- common/autobuild_common.sh@168 -- $ '[' Linux = Linux ']' 00:01:58.131 00:36:10 -- common/autobuild_common.sh@169 -- $ lt 23.11.0 21.11.0 00:01:58.131 00:36:10 -- scripts/common.sh@372 -- $ cmp_versions 23.11.0 '<' 21.11.0 00:01:58.131 00:36:10 -- scripts/common.sh@332 -- $ local ver1 ver1_l 00:01:58.131 00:36:10 -- scripts/common.sh@333 -- $ local ver2 ver2_l 00:01:58.131 00:36:10 -- scripts/common.sh@335 -- $ IFS=.-: 00:01:58.131 00:36:10 -- scripts/common.sh@335 -- $ read -ra ver1 00:01:58.131 00:36:10 -- scripts/common.sh@336 -- $ IFS=.-: 00:01:58.131 00:36:10 -- scripts/common.sh@336 -- $ read -ra ver2 00:01:58.131 00:36:10 -- scripts/common.sh@337 -- $ local 'op=<' 00:01:58.131 00:36:10 -- scripts/common.sh@339 -- $ ver1_l=3 00:01:58.131 00:36:10 -- scripts/common.sh@340 -- $ ver2_l=3 00:01:58.131 00:36:10 -- scripts/common.sh@342 -- $ local lt=0 gt=0 eq=0 v 00:01:58.131 00:36:10 -- scripts/common.sh@343 -- $ case "$op" in 00:01:58.131 00:36:10 -- scripts/common.sh@344 -- $ : 1 00:01:58.131 00:36:10 -- scripts/common.sh@363 -- $ (( v = 0 )) 00:01:58.131 00:36:10 -- scripts/common.sh@363 -- $ (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:01:58.131 00:36:10 -- scripts/common.sh@364 -- $ decimal 23 00:01:58.131 00:36:10 -- scripts/common.sh@352 -- $ local d=23 00:01:58.131 00:36:10 -- scripts/common.sh@353 -- $ [[ 23 =~ ^[0-9]+$ ]] 00:01:58.131 00:36:10 -- scripts/common.sh@354 -- $ echo 23 00:01:58.131 00:36:10 -- scripts/common.sh@364 -- $ ver1[v]=23 00:01:58.131 00:36:10 -- scripts/common.sh@365 -- $ decimal 21 00:01:58.131 00:36:10 -- scripts/common.sh@352 -- $ local d=21 00:01:58.131 00:36:10 -- scripts/common.sh@353 -- $ [[ 21 =~ ^[0-9]+$ ]] 00:01:58.131 00:36:10 -- scripts/common.sh@354 -- $ echo 21 00:01:58.131 00:36:10 -- scripts/common.sh@365 -- $ ver2[v]=21 00:01:58.131 00:36:10 -- scripts/common.sh@366 -- $ (( ver1[v] > ver2[v] )) 00:01:58.131 00:36:10 -- scripts/common.sh@366 -- $ return 1 00:01:58.131 00:36:10 -- common/autobuild_common.sh@173 -- $ patch -p1 00:01:58.131 patching file config/rte_config.h 00:01:58.131 Hunk #1 succeeded at 60 (offset 1 line). 00:01:58.131 00:36:10 -- common/autobuild_common.sh@176 -- $ lt 23.11.0 24.07.0 00:01:58.131 00:36:10 -- scripts/common.sh@372 -- $ cmp_versions 23.11.0 '<' 24.07.0 00:01:58.131 00:36:10 -- scripts/common.sh@332 -- $ local ver1 ver1_l 00:01:58.131 00:36:10 -- scripts/common.sh@333 -- $ local ver2 ver2_l 00:01:58.131 00:36:10 -- scripts/common.sh@335 -- $ IFS=.-: 00:01:58.131 00:36:10 -- scripts/common.sh@335 -- $ read -ra ver1 00:01:58.131 00:36:10 -- scripts/common.sh@336 -- $ IFS=.-: 00:01:58.131 00:36:10 -- scripts/common.sh@336 -- $ read -ra ver2 00:01:58.131 00:36:10 -- scripts/common.sh@337 -- $ local 'op=<' 00:01:58.131 00:36:10 -- scripts/common.sh@339 -- $ ver1_l=3 00:01:58.131 00:36:10 -- scripts/common.sh@340 -- $ ver2_l=3 00:01:58.131 00:36:10 -- scripts/common.sh@342 -- $ local lt=0 gt=0 eq=0 v 00:01:58.131 00:36:10 -- scripts/common.sh@343 -- $ case "$op" in 00:01:58.131 00:36:10 -- scripts/common.sh@344 -- $ : 1 00:01:58.131 00:36:10 -- scripts/common.sh@363 -- $ (( v = 0 )) 00:01:58.131 00:36:10 -- scripts/common.sh@363 -- $ (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:01:58.131 00:36:10 -- scripts/common.sh@364 -- $ decimal 23 00:01:58.131 00:36:10 -- scripts/common.sh@352 -- $ local d=23 00:01:58.131 00:36:10 -- scripts/common.sh@353 -- $ [[ 23 =~ ^[0-9]+$ ]] 00:01:58.131 00:36:10 -- scripts/common.sh@354 -- $ echo 23 00:01:58.131 00:36:10 -- scripts/common.sh@364 -- $ ver1[v]=23 00:01:58.131 00:36:10 -- scripts/common.sh@365 -- $ decimal 24 00:01:58.131 00:36:10 -- scripts/common.sh@352 -- $ local d=24 00:01:58.131 00:36:10 -- scripts/common.sh@353 -- $ [[ 24 =~ ^[0-9]+$ ]] 00:01:58.131 00:36:10 -- scripts/common.sh@354 -- $ echo 24 00:01:58.131 00:36:10 -- scripts/common.sh@365 -- $ ver2[v]=24 00:01:58.131 00:36:10 -- scripts/common.sh@366 -- $ (( ver1[v] > ver2[v] )) 00:01:58.131 00:36:10 -- scripts/common.sh@367 -- $ (( ver1[v] < ver2[v] )) 00:01:58.131 00:36:10 -- scripts/common.sh@367 -- $ return 0 00:01:58.131 00:36:10 -- common/autobuild_common.sh@177 -- $ patch -p1 00:01:58.131 patching file lib/pcapng/rte_pcapng.c 00:01:58.131 00:36:10 -- common/autobuild_common.sh@180 -- $ dpdk_kmods=false 00:01:58.131 00:36:10 -- common/autobuild_common.sh@181 -- $ uname -s 00:01:58.131 00:36:10 -- common/autobuild_common.sh@181 -- $ '[' Linux = FreeBSD ']' 00:01:58.131 00:36:10 -- common/autobuild_common.sh@185 -- $ printf %s, bus bus/pci bus/vdev mempool/ring net/i40e net/i40e/base 00:01:58.131 00:36:10 -- common/autobuild_common.sh@185 -- $ meson build-tmp --prefix=/home/vagrant/spdk_repo/dpdk/build --libdir lib -Denable_docs=false -Denable_kmods=false -Dtests=false -Dc_link_args= '-Dc_args=-fPIC -g -fcommon -Werror -Wno-stringop-overflow' -Dmachine=native -Denable_drivers=bus,bus/pci,bus/vdev,mempool/ring,net/i40e,net/i40e/base, 00:02:04.699 The Meson build system 00:02:04.699 Version: 1.5.0 00:02:04.699 Source dir: /home/vagrant/spdk_repo/dpdk 00:02:04.699 Build dir: /home/vagrant/spdk_repo/dpdk/build-tmp 00:02:04.699 Build type: native build 00:02:04.699 Program cat found: YES (/usr/bin/cat) 00:02:04.699 Project name: DPDK 00:02:04.699 Project version: 23.11.0 00:02:04.699 C compiler for the host machine: gcc (gcc 13.3.1 "gcc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)") 00:02:04.699 C linker for the host machine: gcc ld.bfd 2.40-14 00:02:04.699 Host machine cpu family: x86_64 00:02:04.699 Host machine cpu: x86_64 00:02:04.699 Message: ## Building in Developer Mode ## 00:02:04.699 Program pkg-config found: YES (/usr/bin/pkg-config) 00:02:04.700 Program check-symbols.sh found: YES (/home/vagrant/spdk_repo/dpdk/buildtools/check-symbols.sh) 00:02:04.700 Program options-ibverbs-static.sh found: YES (/home/vagrant/spdk_repo/dpdk/buildtools/options-ibverbs-static.sh) 00:02:04.700 Program python3 found: YES (/usr/bin/python3) 00:02:04.700 Program cat found: YES (/usr/bin/cat) 00:02:04.700 config/meson.build:113: WARNING: The "machine" option is deprecated. Please use "cpu_instruction_set" instead. 
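[Editor's note] The `lt 23.11.0 21.11.0` and `lt 23.11.0 24.07.0` traces a few lines up are SPDK's dotted-version comparison (cmp_versions in scripts/common.sh) deciding which DPDK compatibility patches to apply. Stripped of the xtrace instrumentation, the underlying field-by-field comparison is roughly the following sketch; the function name and details are illustrative, not the exact scripts/common.sh code, and it assumes purely numeric fields.

    #!/usr/bin/env bash
    # Sketch: return 0 if the first dotted version is strictly older than the second.
    version_lt() {
        local IFS=.-:
        local -a a=($1) b=($2)
        local i n=$(( ${#a[@]} > ${#b[@]} ? ${#a[@]} : ${#b[@]} ))
        for (( i = 0; i < n; i++ )); do
            local x=${a[i]:-0} y=${b[i]:-0}
            (( x > y )) && return 1   # field already bigger: not less-than
            (( x < y )) && return 0   # field smaller: less-than
        done
        return 1                      # all fields equal: not strictly less-than
    }

    version_lt 23.11.0 21.11.0 && echo older || echo "not older"   # -> not older
    version_lt 23.11.0 24.07.0 && echo older || echo "not older"   # -> older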
00:02:04.700 Compiler for C supports arguments -march=native: YES 00:02:04.700 Checking for size of "void *" : 8 00:02:04.700 Checking for size of "void *" : 8 (cached) 00:02:04.700 Library m found: YES 00:02:04.700 Library numa found: YES 00:02:04.700 Has header "numaif.h" : YES 00:02:04.700 Library fdt found: NO 00:02:04.700 Library execinfo found: NO 00:02:04.700 Has header "execinfo.h" : YES 00:02:04.700 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5 00:02:04.700 Run-time dependency libarchive found: NO (tried pkgconfig) 00:02:04.700 Run-time dependency libbsd found: NO (tried pkgconfig) 00:02:04.700 Run-time dependency jansson found: NO (tried pkgconfig) 00:02:04.700 Run-time dependency openssl found: YES 3.1.1 00:02:04.700 Run-time dependency libpcap found: YES 1.10.4 00:02:04.700 Has header "pcap.h" with dependency libpcap: YES 00:02:04.700 Compiler for C supports arguments -Wcast-qual: YES 00:02:04.700 Compiler for C supports arguments -Wdeprecated: YES 00:02:04.700 Compiler for C supports arguments -Wformat: YES 00:02:04.700 Compiler for C supports arguments -Wformat-nonliteral: NO 00:02:04.700 Compiler for C supports arguments -Wformat-security: NO 00:02:04.700 Compiler for C supports arguments -Wmissing-declarations: YES 00:02:04.700 Compiler for C supports arguments -Wmissing-prototypes: YES 00:02:04.700 Compiler for C supports arguments -Wnested-externs: YES 00:02:04.700 Compiler for C supports arguments -Wold-style-definition: YES 00:02:04.700 Compiler for C supports arguments -Wpointer-arith: YES 00:02:04.700 Compiler for C supports arguments -Wsign-compare: YES 00:02:04.700 Compiler for C supports arguments -Wstrict-prototypes: YES 00:02:04.700 Compiler for C supports arguments -Wundef: YES 00:02:04.700 Compiler for C supports arguments -Wwrite-strings: YES 00:02:04.700 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:02:04.700 Compiler for C supports arguments -Wno-packed-not-aligned: YES 00:02:04.700 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:02:04.700 Compiler for C supports arguments -Wno-zero-length-bounds: YES 00:02:04.700 Program objdump found: YES (/usr/bin/objdump) 00:02:04.700 Compiler for C supports arguments -mavx512f: YES 00:02:04.700 Checking if "AVX512 checking" compiles: YES 00:02:04.700 Fetching value of define "__SSE4_2__" : 1 00:02:04.700 Fetching value of define "__AES__" : 1 00:02:04.700 Fetching value of define "__AVX__" : 1 00:02:04.700 Fetching value of define "__AVX2__" : 1 00:02:04.700 Fetching value of define "__AVX512BW__" : (undefined) 00:02:04.700 Fetching value of define "__AVX512CD__" : (undefined) 00:02:04.700 Fetching value of define "__AVX512DQ__" : (undefined) 00:02:04.700 Fetching value of define "__AVX512F__" : (undefined) 00:02:04.700 Fetching value of define "__AVX512VL__" : (undefined) 00:02:04.700 Fetching value of define "__PCLMUL__" : 1 00:02:04.700 Fetching value of define "__RDRND__" : 1 00:02:04.700 Fetching value of define "__RDSEED__" : 1 00:02:04.700 Fetching value of define "__VPCLMULQDQ__" : (undefined) 00:02:04.700 Fetching value of define "__znver1__" : (undefined) 00:02:04.700 Fetching value of define "__znver2__" : (undefined) 00:02:04.700 Fetching value of define "__znver3__" : (undefined) 00:02:04.700 Fetching value of define "__znver4__" : (undefined) 00:02:04.700 Compiler for C supports arguments -Wno-format-truncation: YES 00:02:04.700 Message: lib/log: Defining dependency "log" 00:02:04.700 Message: lib/kvargs: Defining dependency "kvargs" 00:02:04.700 
Message: lib/telemetry: Defining dependency "telemetry" 00:02:04.700 Checking for function "getentropy" : NO 00:02:04.700 Message: lib/eal: Defining dependency "eal" 00:02:04.700 Message: lib/ring: Defining dependency "ring" 00:02:04.700 Message: lib/rcu: Defining dependency "rcu" 00:02:04.700 Message: lib/mempool: Defining dependency "mempool" 00:02:04.700 Message: lib/mbuf: Defining dependency "mbuf" 00:02:04.700 Fetching value of define "__PCLMUL__" : 1 (cached) 00:02:04.700 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:02:04.700 Compiler for C supports arguments -mpclmul: YES 00:02:04.700 Compiler for C supports arguments -maes: YES 00:02:04.700 Compiler for C supports arguments -mavx512f: YES (cached) 00:02:04.700 Compiler for C supports arguments -mavx512bw: YES 00:02:04.700 Compiler for C supports arguments -mavx512dq: YES 00:02:04.700 Compiler for C supports arguments -mavx512vl: YES 00:02:04.700 Compiler for C supports arguments -mvpclmulqdq: YES 00:02:04.700 Compiler for C supports arguments -mavx2: YES 00:02:04.700 Compiler for C supports arguments -mavx: YES 00:02:04.700 Message: lib/net: Defining dependency "net" 00:02:04.700 Message: lib/meter: Defining dependency "meter" 00:02:04.700 Message: lib/ethdev: Defining dependency "ethdev" 00:02:04.700 Message: lib/pci: Defining dependency "pci" 00:02:04.700 Message: lib/cmdline: Defining dependency "cmdline" 00:02:04.700 Message: lib/metrics: Defining dependency "metrics" 00:02:04.700 Message: lib/hash: Defining dependency "hash" 00:02:04.700 Message: lib/timer: Defining dependency "timer" 00:02:04.700 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:02:04.700 Fetching value of define "__AVX512VL__" : (undefined) (cached) 00:02:04.700 Fetching value of define "__AVX512CD__" : (undefined) (cached) 00:02:04.700 Fetching value of define "__AVX512BW__" : (undefined) (cached) 00:02:04.700 Compiler for C supports arguments -mavx512f -mavx512vl -mavx512cd -mavx512bw: YES 00:02:04.700 Message: lib/acl: Defining dependency "acl" 00:02:04.700 Message: lib/bbdev: Defining dependency "bbdev" 00:02:04.700 Message: lib/bitratestats: Defining dependency "bitratestats" 00:02:04.700 Run-time dependency libelf found: YES 0.191 00:02:04.700 Message: lib/bpf: Defining dependency "bpf" 00:02:04.700 Message: lib/cfgfile: Defining dependency "cfgfile" 00:02:04.700 Message: lib/compressdev: Defining dependency "compressdev" 00:02:04.700 Message: lib/cryptodev: Defining dependency "cryptodev" 00:02:04.700 Message: lib/distributor: Defining dependency "distributor" 00:02:04.700 Message: lib/dmadev: Defining dependency "dmadev" 00:02:04.700 Message: lib/efd: Defining dependency "efd" 00:02:04.700 Message: lib/eventdev: Defining dependency "eventdev" 00:02:04.700 Message: lib/dispatcher: Defining dependency "dispatcher" 00:02:04.700 Message: lib/gpudev: Defining dependency "gpudev" 00:02:04.700 Message: lib/gro: Defining dependency "gro" 00:02:04.700 Message: lib/gso: Defining dependency "gso" 00:02:04.700 Message: lib/ip_frag: Defining dependency "ip_frag" 00:02:04.700 Message: lib/jobstats: Defining dependency "jobstats" 00:02:04.700 Message: lib/latencystats: Defining dependency "latencystats" 00:02:04.700 Message: lib/lpm: Defining dependency "lpm" 00:02:04.700 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:02:04.700 Fetching value of define "__AVX512DQ__" : (undefined) (cached) 00:02:04.700 Fetching value of define "__AVX512IFMA__" : (undefined) 00:02:04.700 Compiler for C supports arguments -mavx512f 
-mavx512dq -mavx512ifma: YES 00:02:04.700 Message: lib/member: Defining dependency "member" 00:02:04.700 Message: lib/pcapng: Defining dependency "pcapng" 00:02:04.700 Compiler for C supports arguments -Wno-cast-qual: YES 00:02:04.700 Message: lib/power: Defining dependency "power" 00:02:04.700 Message: lib/rawdev: Defining dependency "rawdev" 00:02:04.700 Message: lib/regexdev: Defining dependency "regexdev" 00:02:04.700 Message: lib/mldev: Defining dependency "mldev" 00:02:04.700 Message: lib/rib: Defining dependency "rib" 00:02:04.700 Message: lib/reorder: Defining dependency "reorder" 00:02:04.700 Message: lib/sched: Defining dependency "sched" 00:02:04.700 Message: lib/security: Defining dependency "security" 00:02:04.700 Message: lib/stack: Defining dependency "stack" 00:02:04.700 Has header "linux/userfaultfd.h" : YES 00:02:04.700 Has header "linux/vduse.h" : YES 00:02:04.700 Message: lib/vhost: Defining dependency "vhost" 00:02:04.700 Message: lib/ipsec: Defining dependency "ipsec" 00:02:04.700 Message: lib/pdcp: Defining dependency "pdcp" 00:02:04.700 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:02:04.700 Fetching value of define "__AVX512DQ__" : (undefined) (cached) 00:02:04.700 Compiler for C supports arguments -mavx512f -mavx512dq: YES 00:02:04.700 Compiler for C supports arguments -mavx512bw: YES (cached) 00:02:04.700 Message: lib/fib: Defining dependency "fib" 00:02:04.700 Message: lib/port: Defining dependency "port" 00:02:04.700 Message: lib/pdump: Defining dependency "pdump" 00:02:04.700 Message: lib/table: Defining dependency "table" 00:02:04.700 Message: lib/pipeline: Defining dependency "pipeline" 00:02:04.700 Message: lib/graph: Defining dependency "graph" 00:02:04.700 Message: lib/node: Defining dependency "node" 00:02:04.700 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:02:05.268 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:02:05.268 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:02:05.268 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:02:05.268 Compiler for C supports arguments -Wno-sign-compare: YES 00:02:05.268 Compiler for C supports arguments -Wno-unused-value: YES 00:02:05.268 Compiler for C supports arguments -Wno-format: YES 00:02:05.268 Compiler for C supports arguments -Wno-format-security: YES 00:02:05.268 Compiler for C supports arguments -Wno-format-nonliteral: YES 00:02:05.268 Compiler for C supports arguments -Wno-strict-aliasing: YES 00:02:05.268 Compiler for C supports arguments -Wno-unused-but-set-variable: YES 00:02:05.268 Compiler for C supports arguments -Wno-unused-parameter: YES 00:02:05.268 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:02:05.268 Compiler for C supports arguments -mavx512f: YES (cached) 00:02:05.268 Compiler for C supports arguments -mavx512bw: YES (cached) 00:02:05.268 Compiler for C supports arguments -march=skylake-avx512: YES 00:02:05.268 Message: drivers/net/i40e: Defining dependency "net_i40e" 00:02:05.268 Has header "sys/epoll.h" : YES 00:02:05.268 Program doxygen found: YES (/usr/local/bin/doxygen) 00:02:05.268 Configuring doxy-api-html.conf using configuration 00:02:05.268 Configuring doxy-api-man.conf using configuration 00:02:05.268 Program mandb found: YES (/usr/bin/mandb) 00:02:05.268 Program sphinx-build found: NO 00:02:05.268 Configuring rte_build_config.h using configuration 00:02:05.268 Message: 00:02:05.268 ================= 00:02:05.268 Applications Enabled 00:02:05.268 ================= 
00:02:05.268 00:02:05.268 apps: 00:02:05.268 dumpcap, graph, pdump, proc-info, test-acl, test-bbdev, test-cmdline, test-compress-perf, 00:02:05.268 test-crypto-perf, test-dma-perf, test-eventdev, test-fib, test-flow-perf, test-gpudev, test-mldev, test-pipeline, 00:02:05.268 test-pmd, test-regex, test-sad, test-security-perf, 00:02:05.268 00:02:05.268 Message: 00:02:05.268 ================= 00:02:05.268 Libraries Enabled 00:02:05.268 ================= 00:02:05.268 00:02:05.268 libs: 00:02:05.268 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 00:02:05.268 net, meter, ethdev, pci, cmdline, metrics, hash, timer, 00:02:05.268 acl, bbdev, bitratestats, bpf, cfgfile, compressdev, cryptodev, distributor, 00:02:05.268 dmadev, efd, eventdev, dispatcher, gpudev, gro, gso, ip_frag, 00:02:05.268 jobstats, latencystats, lpm, member, pcapng, power, rawdev, regexdev, 00:02:05.268 mldev, rib, reorder, sched, security, stack, vhost, ipsec, 00:02:05.268 pdcp, fib, port, pdump, table, pipeline, graph, node, 00:02:05.268 00:02:05.268 00:02:05.268 Message: 00:02:05.268 =============== 00:02:05.268 Drivers Enabled 00:02:05.268 =============== 00:02:05.268 00:02:05.268 common: 00:02:05.268 00:02:05.268 bus: 00:02:05.268 pci, vdev, 00:02:05.268 mempool: 00:02:05.268 ring, 00:02:05.268 dma: 00:02:05.268 00:02:05.268 net: 00:02:05.268 i40e, 00:02:05.268 raw: 00:02:05.268 00:02:05.268 crypto: 00:02:05.268 00:02:05.268 compress: 00:02:05.268 00:02:05.268 regex: 00:02:05.268 00:02:05.268 ml: 00:02:05.268 00:02:05.268 vdpa: 00:02:05.268 00:02:05.268 event: 00:02:05.268 00:02:05.269 baseband: 00:02:05.269 00:02:05.269 gpu: 00:02:05.269 00:02:05.269 00:02:05.269 Message: 00:02:05.269 ================= 00:02:05.269 Content Skipped 00:02:05.269 ================= 00:02:05.269 00:02:05.269 apps: 00:02:05.269 00:02:05.269 libs: 00:02:05.269 00:02:05.269 drivers: 00:02:05.269 common/cpt: not in enabled drivers build config 00:02:05.269 common/dpaax: not in enabled drivers build config 00:02:05.269 common/iavf: not in enabled drivers build config 00:02:05.269 common/idpf: not in enabled drivers build config 00:02:05.269 common/mvep: not in enabled drivers build config 00:02:05.269 common/octeontx: not in enabled drivers build config 00:02:05.269 bus/auxiliary: not in enabled drivers build config 00:02:05.269 bus/cdx: not in enabled drivers build config 00:02:05.269 bus/dpaa: not in enabled drivers build config 00:02:05.269 bus/fslmc: not in enabled drivers build config 00:02:05.269 bus/ifpga: not in enabled drivers build config 00:02:05.269 bus/platform: not in enabled drivers build config 00:02:05.269 bus/vmbus: not in enabled drivers build config 00:02:05.269 common/cnxk: not in enabled drivers build config 00:02:05.269 common/mlx5: not in enabled drivers build config 00:02:05.269 common/nfp: not in enabled drivers build config 00:02:05.269 common/qat: not in enabled drivers build config 00:02:05.269 common/sfc_efx: not in enabled drivers build config 00:02:05.269 mempool/bucket: not in enabled drivers build config 00:02:05.269 mempool/cnxk: not in enabled drivers build config 00:02:05.269 mempool/dpaa: not in enabled drivers build config 00:02:05.269 mempool/dpaa2: not in enabled drivers build config 00:02:05.269 mempool/octeontx: not in enabled drivers build config 00:02:05.269 mempool/stack: not in enabled drivers build config 00:02:05.269 dma/cnxk: not in enabled drivers build config 00:02:05.269 dma/dpaa: not in enabled drivers build config 00:02:05.269 dma/dpaa2: not in enabled drivers build config 00:02:05.269 
dma/hisilicon: not in enabled drivers build config 00:02:05.269 dma/idxd: not in enabled drivers build config 00:02:05.269 dma/ioat: not in enabled drivers build config 00:02:05.269 dma/skeleton: not in enabled drivers build config 00:02:05.269 net/af_packet: not in enabled drivers build config 00:02:05.269 net/af_xdp: not in enabled drivers build config 00:02:05.269 net/ark: not in enabled drivers build config 00:02:05.269 net/atlantic: not in enabled drivers build config 00:02:05.269 net/avp: not in enabled drivers build config 00:02:05.269 net/axgbe: not in enabled drivers build config 00:02:05.269 net/bnx2x: not in enabled drivers build config 00:02:05.269 net/bnxt: not in enabled drivers build config 00:02:05.269 net/bonding: not in enabled drivers build config 00:02:05.269 net/cnxk: not in enabled drivers build config 00:02:05.269 net/cpfl: not in enabled drivers build config 00:02:05.269 net/cxgbe: not in enabled drivers build config 00:02:05.269 net/dpaa: not in enabled drivers build config 00:02:05.269 net/dpaa2: not in enabled drivers build config 00:02:05.269 net/e1000: not in enabled drivers build config 00:02:05.269 net/ena: not in enabled drivers build config 00:02:05.269 net/enetc: not in enabled drivers build config 00:02:05.269 net/enetfec: not in enabled drivers build config 00:02:05.269 net/enic: not in enabled drivers build config 00:02:05.269 net/failsafe: not in enabled drivers build config 00:02:05.269 net/fm10k: not in enabled drivers build config 00:02:05.269 net/gve: not in enabled drivers build config 00:02:05.269 net/hinic: not in enabled drivers build config 00:02:05.269 net/hns3: not in enabled drivers build config 00:02:05.269 net/iavf: not in enabled drivers build config 00:02:05.269 net/ice: not in enabled drivers build config 00:02:05.269 net/idpf: not in enabled drivers build config 00:02:05.269 net/igc: not in enabled drivers build config 00:02:05.269 net/ionic: not in enabled drivers build config 00:02:05.269 net/ipn3ke: not in enabled drivers build config 00:02:05.269 net/ixgbe: not in enabled drivers build config 00:02:05.269 net/mana: not in enabled drivers build config 00:02:05.269 net/memif: not in enabled drivers build config 00:02:05.269 net/mlx4: not in enabled drivers build config 00:02:05.269 net/mlx5: not in enabled drivers build config 00:02:05.269 net/mvneta: not in enabled drivers build config 00:02:05.269 net/mvpp2: not in enabled drivers build config 00:02:05.269 net/netvsc: not in enabled drivers build config 00:02:05.269 net/nfb: not in enabled drivers build config 00:02:05.269 net/nfp: not in enabled drivers build config 00:02:05.269 net/ngbe: not in enabled drivers build config 00:02:05.269 net/null: not in enabled drivers build config 00:02:05.269 net/octeontx: not in enabled drivers build config 00:02:05.269 net/octeon_ep: not in enabled drivers build config 00:02:05.269 net/pcap: not in enabled drivers build config 00:02:05.269 net/pfe: not in enabled drivers build config 00:02:05.269 net/qede: not in enabled drivers build config 00:02:05.269 net/ring: not in enabled drivers build config 00:02:05.269 net/sfc: not in enabled drivers build config 00:02:05.269 net/softnic: not in enabled drivers build config 00:02:05.269 net/tap: not in enabled drivers build config 00:02:05.269 net/thunderx: not in enabled drivers build config 00:02:05.269 net/txgbe: not in enabled drivers build config 00:02:05.269 net/vdev_netvsc: not in enabled drivers build config 00:02:05.269 net/vhost: not in enabled drivers build config 00:02:05.269 net/virtio: 
not in enabled drivers build config 00:02:05.269 net/vmxnet3: not in enabled drivers build config 00:02:05.269 raw/cnxk_bphy: not in enabled drivers build config 00:02:05.269 raw/cnxk_gpio: not in enabled drivers build config 00:02:05.269 raw/dpaa2_cmdif: not in enabled drivers build config 00:02:05.269 raw/ifpga: not in enabled drivers build config 00:02:05.269 raw/ntb: not in enabled drivers build config 00:02:05.269 raw/skeleton: not in enabled drivers build config 00:02:05.269 crypto/armv8: not in enabled drivers build config 00:02:05.269 crypto/bcmfs: not in enabled drivers build config 00:02:05.269 crypto/caam_jr: not in enabled drivers build config 00:02:05.269 crypto/ccp: not in enabled drivers build config 00:02:05.269 crypto/cnxk: not in enabled drivers build config 00:02:05.269 crypto/dpaa_sec: not in enabled drivers build config 00:02:05.269 crypto/dpaa2_sec: not in enabled drivers build config 00:02:05.269 crypto/ipsec_mb: not in enabled drivers build config 00:02:05.269 crypto/mlx5: not in enabled drivers build config 00:02:05.269 crypto/mvsam: not in enabled drivers build config 00:02:05.269 crypto/nitrox: not in enabled drivers build config 00:02:05.269 crypto/null: not in enabled drivers build config 00:02:05.269 crypto/octeontx: not in enabled drivers build config 00:02:05.269 crypto/openssl: not in enabled drivers build config 00:02:05.269 crypto/scheduler: not in enabled drivers build config 00:02:05.269 crypto/uadk: not in enabled drivers build config 00:02:05.269 crypto/virtio: not in enabled drivers build config 00:02:05.269 compress/isal: not in enabled drivers build config 00:02:05.269 compress/mlx5: not in enabled drivers build config 00:02:05.269 compress/octeontx: not in enabled drivers build config 00:02:05.269 compress/zlib: not in enabled drivers build config 00:02:05.269 regex/mlx5: not in enabled drivers build config 00:02:05.269 regex/cn9k: not in enabled drivers build config 00:02:05.269 ml/cnxk: not in enabled drivers build config 00:02:05.269 vdpa/ifc: not in enabled drivers build config 00:02:05.269 vdpa/mlx5: not in enabled drivers build config 00:02:05.269 vdpa/nfp: not in enabled drivers build config 00:02:05.269 vdpa/sfc: not in enabled drivers build config 00:02:05.269 event/cnxk: not in enabled drivers build config 00:02:05.269 event/dlb2: not in enabled drivers build config 00:02:05.269 event/dpaa: not in enabled drivers build config 00:02:05.269 event/dpaa2: not in enabled drivers build config 00:02:05.269 event/dsw: not in enabled drivers build config 00:02:05.269 event/opdl: not in enabled drivers build config 00:02:05.269 event/skeleton: not in enabled drivers build config 00:02:05.269 event/sw: not in enabled drivers build config 00:02:05.269 event/octeontx: not in enabled drivers build config 00:02:05.269 baseband/acc: not in enabled drivers build config 00:02:05.269 baseband/fpga_5gnr_fec: not in enabled drivers build config 00:02:05.269 baseband/fpga_lte_fec: not in enabled drivers build config 00:02:05.269 baseband/la12xx: not in enabled drivers build config 00:02:05.269 baseband/null: not in enabled drivers build config 00:02:05.269 baseband/turbo_sw: not in enabled drivers build config 00:02:05.269 gpu/cuda: not in enabled drivers build config 00:02:05.269 00:02:05.269 00:02:05.269 Build targets in project: 220 00:02:05.269 00:02:05.269 DPDK 23.11.0 00:02:05.269 00:02:05.269 User defined options 00:02:05.269 libdir : lib 00:02:05.269 prefix : /home/vagrant/spdk_repo/dpdk/build 00:02:05.269 c_args : -fPIC -g -fcommon -Werror 
-Wno-stringop-overflow 00:02:05.269 c_link_args : 00:02:05.269 enable_docs : false 00:02:05.269 enable_drivers: bus,bus/pci,bus/vdev,mempool/ring,net/i40e,net/i40e/base, 00:02:05.269 enable_kmods : false 00:02:05.269 machine : native 00:02:05.269 tests : false 00:02:05.269 00:02:05.269 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:02:05.269 WARNING: Running the setup command as `meson [options]` instead of `meson setup [options]` is ambiguous and deprecated. 00:02:05.528 00:36:17 -- common/autobuild_common.sh@189 -- $ ninja -C /home/vagrant/spdk_repo/dpdk/build-tmp -j10 00:02:05.528 ninja: Entering directory `/home/vagrant/spdk_repo/dpdk/build-tmp' 00:02:05.528 [1/710] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 00:02:05.528 [2/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:02:05.528 [3/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:02:05.528 [4/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:02:05.786 [5/710] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:02:05.786 [6/710] Linking static target lib/librte_kvargs.a 00:02:05.786 [7/710] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:02:05.786 [8/710] Compiling C object lib/librte_log.a.p/log_log.c.o 00:02:05.786 [9/710] Linking static target lib/librte_log.a 00:02:05.786 [10/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:02:05.786 [11/710] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:02:06.045 [12/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:02:06.045 [13/710] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:02:06.303 [14/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:02:06.303 [15/710] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:02:06.303 [16/710] Linking target lib/librte_log.so.24.0 00:02:06.303 [17/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:02:06.303 [18/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:02:06.562 [19/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:02:06.562 [20/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:02:06.562 [21/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:02:06.562 [22/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:02:06.562 [23/710] Generating symbol file lib/librte_log.so.24.0.p/librte_log.so.24.0.symbols 00:02:06.562 [24/710] Linking target lib/librte_kvargs.so.24.0 00:02:06.821 [25/710] Generating symbol file lib/librte_kvargs.so.24.0.p/librte_kvargs.so.24.0.symbols 00:02:06.821 [26/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:02:06.821 [27/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:02:06.821 [28/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:02:06.821 [29/710] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:02:06.821 [30/710] Linking static target lib/librte_telemetry.a 00:02:06.821 [31/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:02:07.079 [32/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:02:07.079 [33/710] Compiling C object 
lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:02:07.079 [34/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:02:07.079 [35/710] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:02:07.338 [36/710] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:02:07.338 [37/710] Linking target lib/librte_telemetry.so.24.0 00:02:07.338 [38/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:02:07.338 [39/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:02:07.338 [40/710] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:02:07.338 [41/710] Generating symbol file lib/librte_telemetry.so.24.0.p/librte_telemetry.so.24.0.symbols 00:02:07.338 [42/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:02:07.338 [43/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:02:07.597 [44/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:02:07.597 [45/710] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:02:07.855 [46/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:02:07.855 [47/710] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:02:07.855 [48/710] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:02:07.855 [49/710] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:02:07.855 [50/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:02:07.855 [51/710] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:02:08.114 [52/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:02:08.114 [53/710] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:02:08.114 [54/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:02:08.114 [55/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:02:08.114 [56/710] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:02:08.114 [57/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:02:08.372 [58/710] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:02:08.372 [59/710] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:02:08.372 [60/710] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:02:08.372 [61/710] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:02:08.372 [62/710] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:02:08.372 [63/710] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:02:08.372 [64/710] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:02:08.630 [65/710] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:02:08.630 [66/710] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:02:08.630 [67/710] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:02:08.630 [68/710] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:02:08.888 [69/710] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:02:08.888 [70/710] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:02:08.888 [71/710] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:02:08.889 [72/710] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 
00:02:08.889 [73/710] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:02:08.889 [74/710] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:02:08.889 [75/710] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:02:09.147 [76/710] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:02:09.147 [77/710] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:02:09.147 [78/710] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:02:09.147 [79/710] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:02:09.407 [80/710] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:02:09.666 [81/710] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:02:09.666 [82/710] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:02:09.666 [83/710] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:02:09.666 [84/710] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:02:09.666 [85/710] Linking static target lib/librte_ring.a 00:02:09.666 [86/710] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:02:09.666 [87/710] Linking static target lib/librte_eal.a 00:02:09.925 [88/710] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:02:09.925 [89/710] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:02:09.925 [90/710] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:02:09.925 [91/710] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:02:09.925 [92/710] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:02:09.925 [93/710] Linking static target lib/librte_mempool.a 00:02:10.184 [94/710] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:02:10.184 [95/710] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:02:10.184 [96/710] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:02:10.184 [97/710] Linking static target lib/librte_rcu.a 00:02:10.443 [98/710] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 00:02:10.443 [99/710] Linking static target lib/net/libnet_crc_avx512_lib.a 00:02:10.443 [100/710] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:02:10.443 [101/710] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:02:10.701 [102/710] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:02:10.701 [103/710] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:02:10.701 [104/710] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:02:10.701 [105/710] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:02:10.701 [106/710] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:02:10.959 [107/710] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:02:10.959 [108/710] Linking static target lib/librte_net.a 00:02:10.959 [109/710] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:02:10.959 [110/710] Linking static target lib/librte_mbuf.a 00:02:11.218 [111/710] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:02:11.218 [112/710] Linking static target lib/librte_meter.a 00:02:11.218 [113/710] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:02:11.218 [114/710] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:02:11.218 [115/710] Compiling C 
object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:02:11.218 [116/710] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:02:11.218 [117/710] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:02:11.218 [118/710] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:02:11.476 [119/710] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:02:12.042 [120/710] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:02:12.042 [121/710] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:02:12.042 [122/710] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:02:12.301 [123/710] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:02:12.301 [124/710] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:02:12.301 [125/710] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:02:12.301 [126/710] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:02:12.301 [127/710] Linking static target lib/librte_pci.a 00:02:12.301 [128/710] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:02:12.559 [129/710] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:02:12.559 [130/710] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:02:12.559 [131/710] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:02:12.559 [132/710] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:02:12.559 [133/710] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:02:12.559 [134/710] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:02:12.559 [135/710] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:02:12.826 [136/710] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:02:12.826 [137/710] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:02:12.826 [138/710] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:02:12.826 [139/710] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:02:12.826 [140/710] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:02:12.826 [141/710] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:02:12.826 [142/710] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:02:13.085 [143/710] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:02:13.085 [144/710] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:02:13.085 [145/710] Linking static target lib/librte_cmdline.a 00:02:13.342 [146/710] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:02:13.342 [147/710] Compiling C object lib/librte_metrics.a.p/metrics_rte_metrics.c.o 00:02:13.342 [148/710] Compiling C object lib/librte_metrics.a.p/metrics_rte_metrics_telemetry.c.o 00:02:13.342 [149/710] Linking static target lib/librte_metrics.a 00:02:13.342 [150/710] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:02:13.601 [151/710] Generating lib/metrics.sym_chk with a custom command (wrapped by meson to capture output) 00:02:13.859 [152/710] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:02:13.859 [153/710] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:02:13.859 [154/710] Compiling C object 
lib/librte_timer.a.p/timer_rte_timer.c.o 00:02:13.859 [155/710] Linking static target lib/librte_timer.a 00:02:14.117 [156/710] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:02:14.387 [157/710] Compiling C object lib/librte_acl.a.p/acl_rte_acl.c.o 00:02:14.387 [158/710] Compiling C object lib/librte_acl.a.p/acl_acl_gen.c.o 00:02:14.690 [159/710] Compiling C object lib/librte_acl.a.p/acl_acl_run_scalar.c.o 00:02:14.690 [160/710] Compiling C object lib/librte_acl.a.p/acl_tb_mem.c.o 00:02:14.960 [161/710] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:02:15.219 [162/710] Linking static target lib/librte_ethdev.a 00:02:15.219 [163/710] Compiling C object lib/librte_acl.a.p/acl_acl_bld.c.o 00:02:15.219 [164/710] Compiling C object lib/librte_bitratestats.a.p/bitratestats_rte_bitrate.c.o 00:02:15.219 [165/710] Linking static target lib/librte_bitratestats.a 00:02:15.219 [166/710] Compiling C object lib/librte_bpf.a.p/bpf_bpf.c.o 00:02:15.478 [167/710] Compiling C object lib/librte_bbdev.a.p/bbdev_rte_bbdev.c.o 00:02:15.478 [168/710] Linking static target lib/librte_bbdev.a 00:02:15.478 [169/710] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:02:15.478 [170/710] Generating lib/bitratestats.sym_chk with a custom command (wrapped by meson to capture output) 00:02:15.478 [171/710] Linking target lib/librte_eal.so.24.0 00:02:15.737 [172/710] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:02:15.737 [173/710] Generating symbol file lib/librte_eal.so.24.0.p/librte_eal.so.24.0.symbols 00:02:15.737 [174/710] Linking static target lib/librte_hash.a 00:02:15.737 [175/710] Linking target lib/librte_ring.so.24.0 00:02:15.737 [176/710] Linking target lib/librte_meter.so.24.0 00:02:15.737 [177/710] Compiling C object lib/librte_bpf.a.p/bpf_bpf_dump.c.o 00:02:15.737 [178/710] Compiling C object lib/acl/libavx2_tmp.a.p/acl_run_avx2.c.o 00:02:15.737 [179/710] Linking target lib/librte_pci.so.24.0 00:02:15.737 [180/710] Generating symbol file lib/librte_meter.so.24.0.p/librte_meter.so.24.0.symbols 00:02:15.737 [181/710] Generating symbol file lib/librte_ring.so.24.0.p/librte_ring.so.24.0.symbols 00:02:15.737 [182/710] Linking target lib/librte_rcu.so.24.0 00:02:15.737 [183/710] Linking target lib/librte_timer.so.24.0 00:02:15.737 [184/710] Linking target lib/librte_mempool.so.24.0 00:02:15.995 [185/710] Generating symbol file lib/librte_pci.so.24.0.p/librte_pci.so.24.0.symbols 00:02:15.995 [186/710] Generating symbol file lib/librte_rcu.so.24.0.p/librte_rcu.so.24.0.symbols 00:02:15.995 [187/710] Linking static target lib/acl/libavx2_tmp.a 00:02:15.995 [188/710] Generating lib/bbdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:15.995 [189/710] Generating symbol file lib/librte_mempool.so.24.0.p/librte_mempool.so.24.0.symbols 00:02:15.995 [190/710] Compiling C object lib/acl/libavx512_tmp.a.p/acl_run_avx512.c.o 00:02:15.995 [191/710] Generating symbol file lib/librte_timer.so.24.0.p/librte_timer.so.24.0.symbols 00:02:15.995 [192/710] Linking static target lib/acl/libavx512_tmp.a 00:02:15.995 [193/710] Linking target lib/librte_mbuf.so.24.0 00:02:15.995 [194/710] Compiling C object lib/librte_bpf.a.p/bpf_bpf_exec.c.o 00:02:15.995 [195/710] Compiling C object lib/librte_bpf.a.p/bpf_bpf_load.c.o 00:02:16.254 [196/710] Generating symbol file lib/librte_mbuf.so.24.0.p/librte_mbuf.so.24.0.symbols 00:02:16.254 [197/710] Linking target lib/librte_net.so.24.0 00:02:16.254 
[198/710] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:02:16.254 [199/710] Compiling C object lib/librte_acl.a.p/acl_acl_run_sse.c.o 00:02:16.254 [200/710] Linking target lib/librte_bbdev.so.24.0 00:02:16.254 [201/710] Generating symbol file lib/librte_net.so.24.0.p/librte_net.so.24.0.symbols 00:02:16.254 [202/710] Linking static target lib/librte_acl.a 00:02:16.254 [203/710] Linking target lib/librte_hash.so.24.0 00:02:16.254 [204/710] Linking target lib/librte_cmdline.so.24.0 00:02:16.513 [205/710] Generating symbol file lib/librte_hash.so.24.0.p/librte_hash.so.24.0.symbols 00:02:16.513 [206/710] Compiling C object lib/librte_bpf.a.p/bpf_bpf_stub.c.o 00:02:16.513 [207/710] Compiling C object lib/librte_cfgfile.a.p/cfgfile_rte_cfgfile.c.o 00:02:16.513 [208/710] Linking static target lib/librte_cfgfile.a 00:02:16.513 [209/710] Generating lib/acl.sym_chk with a custom command (wrapped by meson to capture output) 00:02:16.772 [210/710] Linking target lib/librte_acl.so.24.0 00:02:16.772 [211/710] Compiling C object lib/librte_bpf.a.p/bpf_bpf_load_elf.c.o 00:02:16.772 [212/710] Compiling C object lib/librte_bpf.a.p/bpf_bpf_convert.c.o 00:02:16.772 [213/710] Generating symbol file lib/librte_acl.so.24.0.p/librte_acl.so.24.0.symbols 00:02:16.772 [214/710] Compiling C object lib/librte_bpf.a.p/bpf_bpf_pkt.c.o 00:02:16.772 [215/710] Generating lib/cfgfile.sym_chk with a custom command (wrapped by meson to capture output) 00:02:17.031 [216/710] Linking target lib/librte_cfgfile.so.24.0 00:02:17.031 [217/710] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:02:17.031 [218/710] Compiling C object lib/librte_bpf.a.p/bpf_bpf_validate.c.o 00:02:17.290 [219/710] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:02:17.290 [220/710] Compiling C object lib/librte_bpf.a.p/bpf_bpf_jit_x86.c.o 00:02:17.290 [221/710] Linking static target lib/librte_bpf.a 00:02:17.290 [222/710] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:02:17.548 [223/710] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:02:17.548 [224/710] Linking static target lib/librte_compressdev.a 00:02:17.548 [225/710] Generating lib/bpf.sym_chk with a custom command (wrapped by meson to capture output) 00:02:17.548 [226/710] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:02:17.548 [227/710] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor_match_sse.c.o 00:02:17.807 [228/710] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor_single.c.o 00:02:17.807 [229/710] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor.c.o 00:02:17.807 [230/710] Linking static target lib/librte_distributor.a 00:02:18.067 [231/710] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:02:18.067 [232/710] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:18.067 [233/710] Linking target lib/librte_compressdev.so.24.0 00:02:18.067 [234/710] Generating lib/distributor.sym_chk with a custom command (wrapped by meson to capture output) 00:02:18.067 [235/710] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:02:18.067 [236/710] Linking static target lib/librte_dmadev.a 00:02:18.067 [237/710] Linking target lib/librte_distributor.so.24.0 00:02:18.067 [238/710] Compiling C object 
lib/librte_eventdev.a.p/eventdev_eventdev_private.c.o 00:02:18.325 [239/710] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:18.325 [240/710] Linking target lib/librte_dmadev.so.24.0 00:02:18.584 [241/710] Compiling C object lib/librte_eventdev.a.p/eventdev_eventdev_trace_points.c.o 00:02:18.584 [242/710] Generating symbol file lib/librte_dmadev.so.24.0.p/librte_dmadev.so.24.0.symbols 00:02:18.843 [243/710] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_ring.c.o 00:02:18.844 [244/710] Compiling C object lib/librte_efd.a.p/efd_rte_efd.c.o 00:02:18.844 [245/710] Linking static target lib/librte_efd.a 00:02:19.102 [246/710] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_dma_adapter.c.o 00:02:19.102 [247/710] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:02:19.102 [248/710] Linking static target lib/librte_cryptodev.a 00:02:19.102 [249/710] Generating lib/efd.sym_chk with a custom command (wrapped by meson to capture output) 00:02:19.102 [250/710] Linking target lib/librte_efd.so.24.0 00:02:19.361 [251/710] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_crypto_adapter.c.o 00:02:19.361 [252/710] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:19.361 [253/710] Compiling C object lib/librte_dispatcher.a.p/dispatcher_rte_dispatcher.c.o 00:02:19.620 [254/710] Linking static target lib/librte_dispatcher.a 00:02:19.620 [255/710] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_eth_tx_adapter.c.o 00:02:19.620 [256/710] Linking target lib/librte_ethdev.so.24.0 00:02:19.620 [257/710] Generating symbol file lib/librte_ethdev.so.24.0.p/librte_ethdev.so.24.0.symbols 00:02:19.878 [258/710] Linking target lib/librte_metrics.so.24.0 00:02:19.878 [259/710] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_timer_adapter.c.o 00:02:19.878 [260/710] Compiling C object lib/librte_gpudev.a.p/gpudev_gpudev.c.o 00:02:19.878 [261/710] Generating symbol file lib/librte_metrics.so.24.0.p/librte_metrics.so.24.0.symbols 00:02:19.878 [262/710] Linking static target lib/librte_gpudev.a 00:02:19.878 [263/710] Linking target lib/librte_bpf.so.24.0 00:02:19.878 [264/710] Generating lib/dispatcher.sym_chk with a custom command (wrapped by meson to capture output) 00:02:19.878 [265/710] Linking target lib/librte_bitratestats.so.24.0 00:02:19.878 [266/710] Compiling C object lib/librte_gro.a.p/gro_gro_tcp4.c.o 00:02:19.878 [267/710] Compiling C object lib/librte_gro.a.p/gro_rte_gro.c.o 00:02:20.137 [268/710] Generating symbol file lib/librte_bpf.so.24.0.p/librte_bpf.so.24.0.symbols 00:02:20.137 [269/710] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_eventdev.c.o 00:02:20.137 [270/710] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:20.397 [271/710] Compiling C object lib/librte_gro.a.p/gro_gro_tcp6.c.o 00:02:20.397 [272/710] Linking target lib/librte_cryptodev.so.24.0 00:02:20.397 [273/710] Generating symbol file lib/librte_cryptodev.so.24.0.p/librte_cryptodev.so.24.0.symbols 00:02:20.655 [274/710] Generating lib/gpudev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:20.655 [275/710] Compiling C object lib/librte_gso.a.p/gso_gso_tcp4.c.o 00:02:20.655 [276/710] Linking target lib/librte_gpudev.so.24.0 00:02:20.655 [277/710] Compiling C object lib/librte_gro.a.p/gro_gro_udp4.c.o 00:02:20.655 [278/710] Compiling C object 
lib/librte_gro.a.p/gro_gro_vxlan_tcp4.c.o 00:02:20.655 [279/710] Compiling C object lib/librte_gso.a.p/gso_gso_udp4.c.o 00:02:20.655 [280/710] Compiling C object lib/librte_gro.a.p/gro_gro_vxlan_udp4.c.o 00:02:20.655 [281/710] Linking static target lib/librte_gro.a 00:02:20.913 [282/710] Compiling C object lib/librte_gso.a.p/gso_gso_common.c.o 00:02:20.913 [283/710] Compiling C object lib/librte_gso.a.p/gso_gso_tunnel_tcp4.c.o 00:02:20.913 [284/710] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_eth_rx_adapter.c.o 00:02:20.913 [285/710] Linking static target lib/librte_eventdev.a 00:02:20.913 [286/710] Generating lib/gro.sym_chk with a custom command (wrapped by meson to capture output) 00:02:20.913 [287/710] Compiling C object lib/librte_gso.a.p/gso_gso_tunnel_udp4.c.o 00:02:20.913 [288/710] Linking target lib/librte_gro.so.24.0 00:02:21.170 [289/710] Compiling C object lib/librte_gso.a.p/gso_rte_gso.c.o 00:02:21.170 [290/710] Linking static target lib/librte_gso.a 00:02:21.170 [291/710] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv4_reassembly.c.o 00:02:21.428 [292/710] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv6_reassembly.c.o 00:02:21.428 [293/710] Generating lib/gso.sym_chk with a custom command (wrapped by meson to capture output) 00:02:21.428 [294/710] Linking target lib/librte_gso.so.24.0 00:02:21.428 [295/710] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv6_fragmentation.c.o 00:02:21.428 [296/710] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv4_fragmentation.c.o 00:02:21.686 [297/710] Compiling C object lib/librte_jobstats.a.p/jobstats_rte_jobstats.c.o 00:02:21.686 [298/710] Linking static target lib/librte_jobstats.a 00:02:21.686 [299/710] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ip_frag_common.c.o 00:02:21.686 [300/710] Compiling C object lib/librte_ip_frag.a.p/ip_frag_ip_frag_internal.c.o 00:02:21.686 [301/710] Linking static target lib/librte_ip_frag.a 00:02:21.686 [302/710] Compiling C object lib/librte_latencystats.a.p/latencystats_rte_latencystats.c.o 00:02:21.686 [303/710] Linking static target lib/librte_latencystats.a 00:02:21.944 [304/710] Generating lib/jobstats.sym_chk with a custom command (wrapped by meson to capture output) 00:02:21.944 [305/710] Linking target lib/librte_jobstats.so.24.0 00:02:21.944 [306/710] Generating lib/ip_frag.sym_chk with a custom command (wrapped by meson to capture output) 00:02:21.944 [307/710] Generating lib/latencystats.sym_chk with a custom command (wrapped by meson to capture output) 00:02:21.944 [308/710] Linking target lib/librte_ip_frag.so.24.0 00:02:21.944 [309/710] Linking target lib/librte_latencystats.so.24.0 00:02:21.944 [310/710] Compiling C object lib/librte_member.a.p/member_rte_member.c.o 00:02:22.203 [311/710] Compiling C object lib/librte_lpm.a.p/lpm_rte_lpm.c.o 00:02:22.203 [312/710] Compiling C object lib/member/libsketch_avx512_tmp.a.p/rte_member_sketch_avx512.c.o 00:02:22.203 [313/710] Linking static target lib/member/libsketch_avx512_tmp.a 00:02:22.203 [314/710] Generating symbol file lib/librte_ip_frag.so.24.0.p/librte_ip_frag.so.24.0.symbols 00:02:22.203 [315/710] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:02:22.203 [316/710] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:02:22.203 [317/710] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:02:22.769 [318/710] Compiling C object lib/librte_member.a.p/member_rte_member_ht.c.o 00:02:22.769 [319/710] Compiling C object 
lib/librte_lpm.a.p/lpm_rte_lpm6.c.o 00:02:22.769 [320/710] Linking static target lib/librte_lpm.a 00:02:22.769 [321/710] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:02:22.769 [322/710] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:02:22.769 [323/710] Generating lib/eventdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:22.769 [324/710] Compiling C object lib/librte_pcapng.a.p/pcapng_rte_pcapng.c.o 00:02:22.769 [325/710] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:02:23.028 [326/710] Linking target lib/librte_eventdev.so.24.0 00:02:23.028 [327/710] Linking static target lib/librte_pcapng.a 00:02:23.028 [328/710] Compiling C object lib/librte_member.a.p/member_rte_member_vbf.c.o 00:02:23.028 [329/710] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:02:23.028 [330/710] Generating lib/lpm.sym_chk with a custom command (wrapped by meson to capture output) 00:02:23.028 [331/710] Generating symbol file lib/librte_eventdev.so.24.0.p/librte_eventdev.so.24.0.symbols 00:02:23.028 [332/710] Linking target lib/librte_lpm.so.24.0 00:02:23.028 [333/710] Linking target lib/librte_dispatcher.so.24.0 00:02:23.028 [334/710] Generating lib/pcapng.sym_chk with a custom command (wrapped by meson to capture output) 00:02:23.028 [335/710] Generating symbol file lib/librte_lpm.so.24.0.p/librte_lpm.so.24.0.symbols 00:02:23.028 [336/710] Linking target lib/librte_pcapng.so.24.0 00:02:23.286 [337/710] Generating symbol file lib/librte_pcapng.so.24.0.p/librte_pcapng.so.24.0.symbols 00:02:23.286 [338/710] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:02:23.286 [339/710] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:02:23.544 [340/710] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:02:23.544 [341/710] Compiling C object lib/librte_mldev.a.p/mldev_rte_mldev_pmd.c.o 00:02:23.544 [342/710] Compiling C object lib/librte_member.a.p/member_rte_member_sketch.c.o 00:02:23.544 [343/710] Compiling C object lib/librte_regexdev.a.p/regexdev_rte_regexdev.c.o 00:02:23.544 [344/710] Linking static target lib/librte_regexdev.a 00:02:23.544 [345/710] Linking static target lib/librte_member.a 00:02:23.544 [346/710] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:02:23.544 [347/710] Linking static target lib/librte_power.a 00:02:23.801 [348/710] Compiling C object lib/librte_rawdev.a.p/rawdev_rte_rawdev.c.o 00:02:23.801 [349/710] Linking static target lib/librte_rawdev.a 00:02:23.801 [350/710] Compiling C object lib/librte_mldev.a.p/mldev_mldev_utils.c.o 00:02:23.801 [351/710] Compiling C object lib/librte_mldev.a.p/mldev_rte_mldev.c.o 00:02:23.801 [352/710] Compiling C object lib/librte_mldev.a.p/mldev_mldev_utils_scalar_bfloat16.c.o 00:02:23.801 [353/710] Generating lib/member.sym_chk with a custom command (wrapped by meson to capture output) 00:02:24.058 [354/710] Linking target lib/librte_member.so.24.0 00:02:24.058 [355/710] Compiling C object lib/librte_mldev.a.p/mldev_mldev_utils_scalar.c.o 00:02:24.058 [356/710] Linking static target lib/librte_mldev.a 00:02:24.058 [357/710] Compiling C object lib/librte_sched.a.p/sched_rte_approx.c.o 00:02:24.058 [358/710] Generating lib/rawdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:24.316 [359/710] Linking target lib/librte_rawdev.so.24.0 00:02:24.316 [360/710] Generating lib/power.sym_chk with a custom command (wrapped by meson to 
capture output) 00:02:24.316 [361/710] Linking target lib/librte_power.so.24.0 00:02:24.316 [362/710] Generating lib/regexdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:24.316 [363/710] Linking target lib/librte_regexdev.so.24.0 00:02:24.316 [364/710] Compiling C object lib/librte_rib.a.p/rib_rte_rib.c.o 00:02:24.576 [365/710] Compiling C object lib/librte_sched.a.p/sched_rte_red.c.o 00:02:24.576 [366/710] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:02:24.576 [367/710] Linking static target lib/librte_reorder.a 00:02:24.576 [368/710] Compiling C object lib/librte_sched.a.p/sched_rte_pie.c.o 00:02:24.576 [369/710] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:02:24.576 [370/710] Compiling C object lib/librte_rib.a.p/rib_rte_rib6.c.o 00:02:24.835 [371/710] Linking static target lib/librte_rib.a 00:02:24.835 [372/710] Compiling C object lib/librte_stack.a.p/stack_rte_stack.c.o 00:02:24.835 [373/710] Compiling C object lib/librte_stack.a.p/stack_rte_stack_std.c.o 00:02:24.835 [374/710] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:02:24.835 [375/710] Linking target lib/librte_reorder.so.24.0 00:02:25.094 [376/710] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:02:25.094 [377/710] Compiling C object lib/librte_stack.a.p/stack_rte_stack_lf.c.o 00:02:25.094 [378/710] Linking static target lib/librte_security.a 00:02:25.094 [379/710] Linking static target lib/librte_stack.a 00:02:25.094 [380/710] Generating symbol file lib/librte_reorder.so.24.0.p/librte_reorder.so.24.0.symbols 00:02:25.094 [381/710] Generating lib/rib.sym_chk with a custom command (wrapped by meson to capture output) 00:02:25.094 [382/710] Generating lib/mldev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:25.094 [383/710] Linking target lib/librte_mldev.so.24.0 00:02:25.094 [384/710] Linking target lib/librte_rib.so.24.0 00:02:25.094 [385/710] Generating lib/stack.sym_chk with a custom command (wrapped by meson to capture output) 00:02:25.094 [386/710] Linking target lib/librte_stack.so.24.0 00:02:25.353 [387/710] Generating symbol file lib/librte_rib.so.24.0.p/librte_rib.so.24.0.symbols 00:02:25.353 [388/710] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:02:25.353 [389/710] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:02:25.353 [390/710] Linking target lib/librte_security.so.24.0 00:02:25.611 [391/710] Generating symbol file lib/librte_security.so.24.0.p/librte_security.so.24.0.symbols 00:02:25.611 [392/710] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:02:25.611 [393/710] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:02:25.611 [394/710] Compiling C object lib/librte_sched.a.p/sched_rte_sched.c.o 00:02:25.611 [395/710] Linking static target lib/librte_sched.a 00:02:25.871 [396/710] Generating lib/sched.sym_chk with a custom command (wrapped by meson to capture output) 00:02:26.130 [397/710] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:02:26.130 [398/710] Linking target lib/librte_sched.so.24.0 00:02:26.130 [399/710] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:02:26.130 [400/710] Generating symbol file lib/librte_sched.so.24.0.p/librte_sched.so.24.0.symbols 00:02:26.388 [401/710] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:02:26.388 [402/710] Compiling C object lib/librte_ipsec.a.p/ipsec_sa.c.o 00:02:26.647 [403/710] 
Compiling C object lib/librte_ipsec.a.p/ipsec_ses.c.o 00:02:26.647 [404/710] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:02:26.647 [405/710] Compiling C object lib/librte_ipsec.a.p/ipsec_ipsec_telemetry.c.o 00:02:26.907 [406/710] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_crypto.c.o 00:02:26.907 [407/710] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_cnt.c.o 00:02:27.166 [408/710] Compiling C object lib/librte_ipsec.a.p/ipsec_esp_outb.c.o 00:02:27.166 [409/710] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_ctrl_pdu.c.o 00:02:27.166 [410/710] Compiling C object lib/librte_ipsec.a.p/ipsec_esp_inb.c.o 00:02:27.166 [411/710] Compiling C object lib/librte_ipsec.a.p/ipsec_ipsec_sad.c.o 00:02:27.166 [412/710] Linking static target lib/librte_ipsec.a 00:02:27.166 [413/710] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_reorder.c.o 00:02:27.425 [414/710] Compiling C object lib/fib/libdir24_8_avx512_tmp.a.p/dir24_8_avx512.c.o 00:02:27.425 [415/710] Linking static target lib/fib/libdir24_8_avx512_tmp.a 00:02:27.425 [416/710] Generating lib/ipsec.sym_chk with a custom command (wrapped by meson to capture output) 00:02:27.684 [417/710] Linking target lib/librte_ipsec.so.24.0 00:02:27.684 [418/710] Compiling C object lib/librte_pdcp.a.p/pdcp_rte_pdcp.c.o 00:02:27.684 [419/710] Compiling C object lib/fib/libtrie_avx512_tmp.a.p/trie_avx512.c.o 00:02:27.684 [420/710] Linking static target lib/fib/libtrie_avx512_tmp.a 00:02:27.684 [421/710] Generating symbol file lib/librte_ipsec.so.24.0.p/librte_ipsec.so.24.0.symbols 00:02:27.684 [422/710] Compiling C object lib/librte_fib.a.p/fib_rte_fib.c.o 00:02:27.684 [423/710] Compiling C object lib/librte_fib.a.p/fib_rte_fib6.c.o 00:02:28.620 [424/710] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_process.c.o 00:02:28.620 [425/710] Compiling C object lib/librte_fib.a.p/fib_dir24_8.c.o 00:02:28.620 [426/710] Linking static target lib/librte_pdcp.a 00:02:28.620 [427/710] Compiling C object lib/librte_port.a.p/port_rte_port_frag.c.o 00:02:28.620 [428/710] Compiling C object lib/librte_fib.a.p/fib_trie.c.o 00:02:28.620 [429/710] Compiling C object lib/librte_port.a.p/port_rte_port_ethdev.c.o 00:02:28.620 [430/710] Linking static target lib/librte_fib.a 00:02:28.620 [431/710] Compiling C object lib/librte_port.a.p/port_rte_port_ras.c.o 00:02:28.620 [432/710] Compiling C object lib/librte_port.a.p/port_rte_port_fd.c.o 00:02:28.878 [433/710] Generating lib/pdcp.sym_chk with a custom command (wrapped by meson to capture output) 00:02:28.878 [434/710] Linking target lib/librte_pdcp.so.24.0 00:02:28.878 [435/710] Generating lib/fib.sym_chk with a custom command (wrapped by meson to capture output) 00:02:28.878 [436/710] Linking target lib/librte_fib.so.24.0 00:02:29.138 [437/710] Compiling C object lib/librte_port.a.p/port_rte_port_sched.c.o 00:02:29.397 [438/710] Compiling C object lib/librte_port.a.p/port_rte_swx_port_ethdev.c.o 00:02:29.397 [439/710] Compiling C object lib/librte_port.a.p/port_rte_port_sym_crypto.c.o 00:02:29.397 [440/710] Compiling C object lib/librte_port.a.p/port_rte_port_source_sink.c.o 00:02:29.656 [441/710] Compiling C object lib/librte_port.a.p/port_rte_port_eventdev.c.o 00:02:29.656 [442/710] Compiling C object lib/librte_table.a.p/table_rte_swx_keycmp.c.o 00:02:29.656 [443/710] Compiling C object lib/librte_port.a.p/port_rte_swx_port_fd.c.o 00:02:29.656 [444/710] Compiling C object lib/librte_port.a.p/port_rte_port_ring.c.o 00:02:29.916 [445/710] Compiling C object 
lib/librte_port.a.p/port_rte_swx_port_source_sink.c.o 00:02:29.916 [446/710] Compiling C object lib/librte_port.a.p/port_rte_swx_port_ring.c.o 00:02:29.916 [447/710] Linking static target lib/librte_port.a 00:02:30.175 [448/710] Compiling C object lib/librte_table.a.p/table_rte_swx_table_learner.c.o 00:02:30.175 [449/710] Compiling C object lib/librte_table.a.p/table_rte_swx_table_em.c.o 00:02:30.175 [450/710] Compiling C object lib/librte_table.a.p/table_rte_swx_table_selector.c.o 00:02:30.175 [451/710] Compiling C object lib/librte_table.a.p/table_rte_swx_table_wm.c.o 00:02:30.434 [452/710] Generating lib/port.sym_chk with a custom command (wrapped by meson to capture output) 00:02:30.434 [453/710] Linking target lib/librte_port.so.24.0 00:02:30.434 [454/710] Compiling C object lib/librte_table.a.p/table_rte_table_acl.c.o 00:02:30.434 [455/710] Compiling C object lib/librte_table.a.p/table_rte_table_array.c.o 00:02:30.694 [456/710] Compiling C object lib/librte_pdump.a.p/pdump_rte_pdump.c.o 00:02:30.694 [457/710] Linking static target lib/librte_pdump.a 00:02:30.694 [458/710] Generating symbol file lib/librte_port.so.24.0.p/librte_port.so.24.0.symbols 00:02:30.694 [459/710] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:02:30.694 [460/710] Generating lib/pdump.sym_chk with a custom command (wrapped by meson to capture output) 00:02:30.694 [461/710] Linking target lib/librte_pdump.so.24.0 00:02:30.952 [462/710] Compiling C object lib/librte_table.a.p/table_rte_table_hash_cuckoo.c.o 00:02:31.211 [463/710] Compiling C object lib/librte_table.a.p/table_rte_table_lpm_ipv6.c.o 00:02:31.211 [464/710] Compiling C object lib/librte_table.a.p/table_rte_table_lpm.c.o 00:02:31.469 [465/710] Compiling C object lib/librte_table.a.p/table_rte_table_hash_ext.c.o 00:02:31.470 [466/710] Compiling C object lib/librte_table.a.p/table_rte_table_stub.c.o 00:02:31.470 [467/710] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key16.c.o 00:02:31.470 [468/710] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key8.c.o 00:02:31.728 [469/710] Compiling C object lib/librte_table.a.p/table_rte_table_hash_lru.c.o 00:02:31.728 [470/710] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key32.c.o 00:02:31.728 [471/710] Linking static target lib/librte_table.a 00:02:31.728 [472/710] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_port_in_action.c.o 00:02:31.728 [473/710] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_pipeline.c.o 00:02:32.296 [474/710] Compiling C object lib/librte_graph.a.p/graph_node.c.o 00:02:32.296 [475/710] Generating lib/table.sym_chk with a custom command (wrapped by meson to capture output) 00:02:32.296 [476/710] Linking target lib/librte_table.so.24.0 00:02:32.555 [477/710] Compiling C object lib/librte_graph.a.p/graph_graph_ops.c.o 00:02:32.555 [478/710] Generating symbol file lib/librte_table.so.24.0.p/librte_table.so.24.0.symbols 00:02:32.555 [479/710] Compiling C object lib/librte_graph.a.p/graph_graph.c.o 00:02:32.813 [480/710] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_ipsec.c.o 00:02:32.813 [481/710] Compiling C object lib/librte_graph.a.p/graph_graph_debug.c.o 00:02:33.071 [482/710] Compiling C object lib/librte_graph.a.p/graph_graph_populate.c.o 00:02:33.071 [483/710] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_ctl.c.o 00:02:33.071 [484/710] Compiling C object lib/librte_graph.a.p/graph_graph_stats.c.o 00:02:33.330 [485/710] Compiling C object 
lib/librte_graph.a.p/graph_rte_graph_worker.c.o 00:02:33.330 [486/710] Compiling C object lib/librte_graph.a.p/graph_graph_pcap.c.o 00:02:33.588 [487/710] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_pipeline_spec.c.o 00:02:33.588 [488/710] Compiling C object lib/librte_node.a.p/node_ethdev_ctrl.c.o 00:02:33.846 [489/710] Compiling C object lib/librte_graph.a.p/graph_rte_graph_model_mcore_dispatch.c.o 00:02:33.846 [490/710] Linking static target lib/librte_graph.a 00:02:33.846 [491/710] Compiling C object lib/librte_node.a.p/node_ethdev_tx.c.o 00:02:33.846 [492/710] Compiling C object lib/librte_node.a.p/node_ethdev_rx.c.o 00:02:33.846 [493/710] Compiling C object lib/librte_node.a.p/node_ip4_local.c.o 00:02:34.414 [494/710] Compiling C object lib/librte_node.a.p/node_ip4_reassembly.c.o 00:02:34.414 [495/710] Generating lib/graph.sym_chk with a custom command (wrapped by meson to capture output) 00:02:34.414 [496/710] Compiling C object lib/librte_node.a.p/node_ip4_lookup.c.o 00:02:34.414 [497/710] Linking target lib/librte_graph.so.24.0 00:02:34.414 [498/710] Generating symbol file lib/librte_graph.so.24.0.p/librte_graph.so.24.0.symbols 00:02:34.673 [499/710] Compiling C object lib/librte_node.a.p/node_null.c.o 00:02:34.932 [500/710] Compiling C object lib/librte_node.a.p/node_ip6_lookup.c.o 00:02:34.932 [501/710] Compiling C object lib/librte_node.a.p/node_ip4_rewrite.c.o 00:02:34.932 [502/710] Compiling C object lib/librte_node.a.p/node_kernel_rx.c.o 00:02:34.932 [503/710] Compiling C object lib/librte_node.a.p/node_log.c.o 00:02:34.932 [504/710] Compiling C object lib/librte_node.a.p/node_kernel_tx.c.o 00:02:34.932 [505/710] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:02:35.191 [506/710] Compiling C object lib/librte_node.a.p/node_ip6_rewrite.c.o 00:02:35.451 [507/710] Compiling C object lib/librte_node.a.p/node_pkt_drop.c.o 00:02:35.451 [508/710] Compiling C object lib/librte_node.a.p/node_pkt_cls.c.o 00:02:35.710 [509/710] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:02:35.710 [510/710] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:02:35.710 [511/710] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:02:35.710 [512/710] Compiling C object lib/librte_node.a.p/node_udp4_input.c.o 00:02:35.710 [513/710] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:02:35.710 [514/710] Linking static target lib/librte_node.a 00:02:35.969 [515/710] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:02:35.969 [516/710] Generating lib/node.sym_chk with a custom command (wrapped by meson to capture output) 00:02:35.969 [517/710] Linking target lib/librte_node.so.24.0 00:02:36.228 [518/710] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:02:36.229 [519/710] Linking static target drivers/libtmp_rte_bus_pci.a 00:02:36.229 [520/710] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:02:36.229 [521/710] Linking static target drivers/libtmp_rte_bus_vdev.a 00:02:36.488 [522/710] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:02:36.488 [523/710] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:02:36.488 [524/710] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:02:36.488 [525/710] Linking static target drivers/librte_bus_vdev.a 00:02:36.488 [526/710] Compiling C object 
drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:02:36.488 [527/710] Compiling C object drivers/librte_bus_pci.so.24.0.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:02:36.488 [528/710] Linking static target drivers/librte_bus_pci.a 00:02:36.747 [529/710] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_adminq.c.o 00:02:36.747 [530/710] Compiling C object drivers/librte_bus_vdev.so.24.0.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:02:36.747 [531/710] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_diag.c.o 00:02:36.747 [532/710] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_dcb.c.o 00:02:36.747 [533/710] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:36.747 [534/710] Linking target drivers/librte_bus_vdev.so.24.0 00:02:36.747 [535/710] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:02:36.747 [536/710] Linking static target drivers/libtmp_rte_mempool_ring.a 00:02:36.747 [537/710] Generating symbol file drivers/librte_bus_vdev.so.24.0.p/librte_bus_vdev.so.24.0.symbols 00:02:37.006 [538/710] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:02:37.006 [539/710] Linking target drivers/librte_bus_pci.so.24.0 00:02:37.006 [540/710] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:02:37.006 [541/710] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:02:37.006 [542/710] Linking static target drivers/librte_mempool_ring.a 00:02:37.006 [543/710] Compiling C object drivers/librte_mempool_ring.so.24.0.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:02:37.006 [544/710] Linking target drivers/librte_mempool_ring.so.24.0 00:02:37.006 [545/710] Generating symbol file drivers/librte_bus_pci.so.24.0.p/librte_bus_pci.so.24.0.symbols 00:02:37.266 [546/710] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_hmc.c.o 00:02:37.525 [547/710] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_lan_hmc.c.o 00:02:37.783 [548/710] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_pipeline.c.o 00:02:37.783 [549/710] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_nvm.c.o 00:02:38.042 [550/710] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_common.c.o 00:02:38.042 [551/710] Linking static target drivers/net/i40e/base/libi40e_base.a 00:02:38.610 [552/710] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_pf.c.o 00:02:38.869 [553/710] Compiling C object drivers/net/i40e/libi40e_avx2_lib.a.p/i40e_rxtx_vec_avx2.c.o 00:02:38.869 [554/710] Compiling C object drivers/net/i40e/libi40e_avx512_lib.a.p/i40e_rxtx_vec_avx512.c.o 00:02:38.869 [555/710] Linking static target drivers/net/i40e/libi40e_avx2_lib.a 00:02:38.869 [556/710] Linking static target drivers/net/i40e/libi40e_avx512_lib.a 00:02:38.869 [557/710] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_tm.c.o 00:02:39.128 [558/710] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_fdir.c.o 00:02:39.128 [559/710] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_flow.c.o 00:02:39.387 [560/710] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_vf_representor.c.o 00:02:39.646 [561/710] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_hash.c.o 00:02:39.646 [562/710] Compiling C object 
drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_recycle_mbufs_vec_common.c.o 00:02:39.906 [563/710] Compiling C object app/dpdk-dumpcap.p/dumpcap_main.c.o 00:02:40.166 [564/710] Compiling C object app/dpdk-graph.p/graph_cli.c.o 00:02:40.166 [565/710] Compiling C object app/dpdk-graph.p/graph_conn.c.o 00:02:40.425 [566/710] Compiling C object app/dpdk-graph.p/graph_ethdev_rx.c.o 00:02:40.684 [567/710] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_rte_pmd_i40e.c.o 00:02:40.684 [568/710] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:02:40.684 [569/710] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_rxtx_vec_sse.c.o 00:02:40.684 [570/710] Linking static target lib/librte_vhost.a 00:02:40.684 [571/710] Compiling C object app/dpdk-graph.p/graph_ip4_route.c.o 00:02:40.684 [572/710] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_rxtx.c.o 00:02:40.684 [573/710] Compiling C object app/dpdk-graph.p/graph_graph.c.o 00:02:40.684 [574/710] Compiling C object app/dpdk-graph.p/graph_ethdev.c.o 00:02:40.969 [575/710] Compiling C object app/dpdk-graph.p/graph_ip6_route.c.o 00:02:41.228 [576/710] Compiling C object app/dpdk-graph.p/graph_l3fwd.c.o 00:02:41.228 [577/710] Compiling C object app/dpdk-graph.p/graph_main.c.o 00:02:41.228 [578/710] Compiling C object app/dpdk-graph.p/graph_mempool.c.o 00:02:41.228 [579/710] Compiling C object app/dpdk-graph.p/graph_utils.c.o 00:02:41.487 [580/710] Compiling C object app/dpdk-graph.p/graph_neigh.c.o 00:02:41.487 [581/710] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_ethdev.c.o 00:02:41.487 [582/710] Linking static target drivers/libtmp_rte_net_i40e.a 00:02:41.747 [583/710] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:02:41.747 [584/710] Linking target lib/librte_vhost.so.24.0 00:02:41.747 [585/710] Generating drivers/rte_net_i40e.pmd.c with a custom command 00:02:41.747 [586/710] Compiling C object app/dpdk-test-cmdline.p/test-cmdline_commands.c.o 00:02:41.747 [587/710] Compiling C object drivers/librte_net_i40e.a.p/meson-generated_.._rte_net_i40e.pmd.c.o 00:02:41.747 [588/710] Compiling C object drivers/librte_net_i40e.so.24.0.p/meson-generated_.._rte_net_i40e.pmd.c.o 00:02:41.747 [589/710] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_main.c.o 00:02:41.747 [590/710] Linking static target drivers/librte_net_i40e.a 00:02:41.747 [591/710] Compiling C object app/dpdk-test-cmdline.p/test-cmdline_cmdline_test.c.o 00:02:42.006 [592/710] Compiling C object app/dpdk-pdump.p/pdump_main.c.o 00:02:42.006 [593/710] Compiling C object app/dpdk-test-acl.p/test-acl_main.c.o 00:02:42.281 [594/710] Compiling C object app/dpdk-proc-info.p/proc-info_main.c.o 00:02:42.281 [595/710] Generating drivers/rte_net_i40e.sym_chk with a custom command (wrapped by meson to capture output) 00:02:42.577 [596/710] Linking target drivers/librte_net_i40e.so.24.0 00:02:42.577 [597/710] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev.c.o 00:02:42.577 [598/710] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_options_parse.c.o 00:02:42.577 [599/710] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev_vector.c.o 00:02:42.845 [600/710] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_throughput.c.o 00:02:43.103 [601/710] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_common.c.o 00:02:43.103 [602/710] Compiling C object 
app/dpdk-test-compress-perf.p/test-compress-perf_main.c.o 00:02:43.103 [603/710] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_cyclecount.c.o 00:02:43.362 [604/710] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_common.c.o 00:02:43.362 [605/710] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_ops.c.o 00:02:43.362 [606/710] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_verify.c.o 00:02:43.362 [607/710] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_options_parsing.c.o 00:02:43.930 [608/710] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_vectors.c.o 00:02:43.930 [609/710] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_vector_parsing.c.o 00:02:43.930 [610/710] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_latency.c.o 00:02:43.930 [611/710] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_pmd_cyclecount.c.o 00:02:43.930 [612/710] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_throughput.c.o 00:02:43.930 [613/710] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_test.c.o 00:02:44.189 [614/710] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_parser.c.o 00:02:44.189 [615/710] Compiling C object app/dpdk-test-dma-perf.p/test-dma-perf_main.c.o 00:02:44.189 [616/710] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_main.c.o 00:02:44.189 [617/710] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_verify.c.o 00:02:44.448 [618/710] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_main.c.o 00:02:44.707 [619/710] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_options.c.o 00:02:44.707 [620/710] Compiling C object app/dpdk-test-dma-perf.p/test-dma-perf_benchmark.c.o 00:02:44.967 [621/710] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_table_action.c.o 00:02:44.967 [622/710] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_atq.c.o 00:02:44.967 [623/710] Linking static target lib/librte_pipeline.a 00:02:44.967 [624/710] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_common.c.o 00:02:45.227 [625/710] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_queue.c.o 00:02:45.486 [626/710] Linking target app/dpdk-dumpcap 00:02:45.486 [627/710] Linking target app/dpdk-graph 00:02:45.745 [628/710] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_atq.c.o 00:02:45.745 [629/710] Linking target app/dpdk-pdump 00:02:45.745 [630/710] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_atq.c.o 00:02:46.004 [631/710] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_common.c.o 00:02:46.004 [632/710] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_queue.c.o 00:02:46.004 [633/710] Linking target app/dpdk-proc-info 00:02:46.004 [634/710] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_queue.c.o 00:02:46.263 [635/710] Linking target app/dpdk-test-acl 00:02:46.263 [636/710] Linking target app/dpdk-test-cmdline 00:02:46.263 [637/710] Linking target app/dpdk-test-compress-perf 00:02:46.263 [638/710] Linking target app/dpdk-test-crypto-perf 00:02:46.263 [639/710] Linking target app/dpdk-test-dma-perf 00:02:46.831 [640/710] Compiling C object 
app/dpdk-test-flow-perf.p/test-flow-perf_flow_gen.c.o 00:02:46.831 [641/710] Compiling C object app/dpdk-test-mldev.p/test-mldev_ml_main.c.o 00:02:46.831 [642/710] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_items_gen.c.o 00:02:46.831 [643/710] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_actions_gen.c.o 00:02:46.831 [644/710] Compiling C object app/dpdk-test-mldev.p/test-mldev_ml_test.c.o 00:02:46.831 [645/710] Compiling C object app/dpdk-test-gpudev.p/test-gpudev_main.c.o 00:02:47.090 [646/710] Compiling C object app/dpdk-test-mldev.p/test-mldev_parser.c.o 00:02:47.349 [647/710] Compiling C object app/dpdk-test-fib.p/test-fib_main.c.o 00:02:47.349 [648/710] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_common.c.o 00:02:47.349 [649/710] Generating lib/pipeline.sym_chk with a custom command (wrapped by meson to capture output) 00:02:47.349 [650/710] Linking target app/dpdk-test-gpudev 00:02:47.349 [651/710] Linking target lib/librte_pipeline.so.24.0 00:02:47.349 [652/710] Compiling C object app/dpdk-test-mldev.p/test-mldev_ml_options.c.o 00:02:47.608 [653/710] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_device_ops.c.o 00:02:47.608 [654/710] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_common.c.o 00:02:47.608 [655/710] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_main.c.o 00:02:47.608 [656/710] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_model_common.c.o 00:02:47.608 [657/710] Linking target app/dpdk-test-fib 00:02:47.867 [658/710] Linking target app/dpdk-test-eventdev 00:02:47.867 [659/710] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_inference_ordered.c.o 00:02:47.867 [660/710] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_model_ops.c.o 00:02:48.126 [661/710] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev_perf.c.o 00:02:48.126 [662/710] Linking target app/dpdk-test-flow-perf 00:02:48.126 [663/710] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_inference_interleave.c.o 00:02:48.126 [664/710] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_stats.c.o 00:02:48.385 [665/710] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_config.c.o 00:02:48.385 [666/710] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_main.c.o 00:02:48.385 [667/710] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_init.c.o 00:02:48.385 [668/710] Linking target app/dpdk-test-bbdev 00:02:48.644 [669/710] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_acl.c.o 00:02:48.644 [670/710] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_lpm.c.o 00:02:48.644 [671/710] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_lpm_ipv6.c.o 00:02:48.644 [672/710] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_hash.c.o 00:02:48.644 [673/710] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_stub.c.o 00:02:49.213 [674/710] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_cman.c.o 00:02:49.213 [675/710] Compiling C object app/dpdk-testpmd.p/test-pmd_5tswap.c.o 00:02:49.213 [676/710] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_runtime.c.o 00:02:49.213 [677/710] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_inference_common.c.o 00:02:49.213 [678/710] Compiling C object app/dpdk-testpmd.p/test-pmd_cmd_flex_item.c.o 00:02:49.472 [679/710] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_mtr.c.o 00:02:49.472 [680/710] Compiling C object 
app/dpdk-testpmd.p/test-pmd_cmdline_tm.c.o 00:02:49.731 [681/710] Linking target app/dpdk-test-pipeline 00:02:49.731 [682/710] Linking target app/dpdk-test-mldev 00:02:49.990 [683/710] Compiling C object app/dpdk-testpmd.p/test-pmd_flowgen.c.o 00:02:50.249 [684/710] Compiling C object app/dpdk-testpmd.p/test-pmd_icmpecho.c.o 00:02:50.249 [685/710] Compiling C object app/dpdk-testpmd.p/test-pmd_iofwd.c.o 00:02:50.249 [686/710] Compiling C object app/dpdk-testpmd.p/test-pmd_macfwd.c.o 00:02:50.249 [687/710] Compiling C object app/dpdk-testpmd.p/test-pmd_ieee1588fwd.c.o 00:02:50.508 [688/710] Compiling C object app/dpdk-testpmd.p/test-pmd_macswap.c.o 00:02:50.508 [689/710] Compiling C object app/dpdk-testpmd.p/test-pmd_csumonly.c.o 00:02:50.767 [690/710] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline.c.o 00:02:50.767 [691/710] Compiling C object app/dpdk-testpmd.p/test-pmd_recycle_mbufs.c.o 00:02:51.025 [692/710] Compiling C object app/dpdk-testpmd.p/test-pmd_shared_rxq_fwd.c.o 00:02:51.025 [693/710] Compiling C object app/dpdk-testpmd.p/test-pmd_rxonly.c.o 00:02:51.284 [694/710] Compiling C object app/dpdk-testpmd.p/test-pmd_parameters.c.o 00:02:51.543 [695/710] Compiling C object app/dpdk-testpmd.p/test-pmd_bpf_cmd.c.o 00:02:51.543 [696/710] Compiling C object app/dpdk-testpmd.p/test-pmd_util.c.o 00:02:51.802 [697/710] Compiling C object app/dpdk-testpmd.p/test-pmd_config.c.o 00:02:51.802 [698/710] Compiling C object app/dpdk-testpmd.p/.._drivers_net_i40e_i40e_testpmd.c.o 00:02:51.802 [699/710] Compiling C object app/dpdk-test-regex.p/test-regex_main.c.o 00:02:52.061 [700/710] Compiling C object app/dpdk-test-sad.p/test-sad_main.c.o 00:02:52.061 [701/710] Compiling C object app/dpdk-testpmd.p/test-pmd_txonly.c.o 00:02:52.061 [702/710] Compiling C object app/dpdk-testpmd.p/test-pmd_testpmd.c.o 00:02:52.320 [703/710] Compiling C object app/dpdk-test-security-perf.p/test-security-perf_test_security_perf.c.o 00:02:52.320 [704/710] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_flow.c.o 00:02:52.320 [705/710] Compiling C object app/dpdk-testpmd.p/test-pmd_noisy_vnf.c.o 00:02:52.320 [706/710] Linking target app/dpdk-test-regex 00:02:52.320 [707/710] Linking target app/dpdk-test-sad 00:02:52.886 [708/710] Compiling C object app/dpdk-test-security-perf.p/test_test_cryptodev_security_ipsec.c.o 00:02:52.886 [709/710] Linking target app/dpdk-testpmd 00:02:53.144 [710/710] Linking target app/dpdk-test-security-perf 00:02:53.144 00:37:05 -- common/autobuild_common.sh@190 -- $ ninja -C /home/vagrant/spdk_repo/dpdk/build-tmp -j10 install 00:02:53.144 ninja: Entering directory `/home/vagrant/spdk_repo/dpdk/build-tmp' 00:02:53.144 [0/1] Installing files. 
00:02:53.405 Installing subdir /home/vagrant/spdk_repo/dpdk/examples to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples 00:02:53.405 Installing /home/vagrant/spdk_repo/dpdk/examples/bbdev_app/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bbdev_app 00:02:53.405 Installing /home/vagrant/spdk_repo/dpdk/examples/bbdev_app/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bbdev_app 00:02:53.405 Installing /home/vagrant/spdk_repo/dpdk/examples/bond/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bond 00:02:53.405 Installing /home/vagrant/spdk_repo/dpdk/examples/bond/commands.list to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bond 00:02:53.405 Installing /home/vagrant/spdk_repo/dpdk/examples/bond/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bond 00:02:53.405 Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/README to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf 00:02:53.405 Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/dummy.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf 00:02:53.405 Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/t1.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf 00:02:53.405 Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/t2.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf 00:02:53.405 Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/t3.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf 00:02:53.405 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:02:53.405 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/commands.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:02:53.405 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/commands.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:02:53.405 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:02:53.405 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/parse_obj_list.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:02:53.405 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/parse_obj_list.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:02:53.405 Installing /home/vagrant/spdk_repo/dpdk/examples/common/pkt_group.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/common 00:02:53.405 Installing /home/vagrant/spdk_repo/dpdk/examples/common/altivec/port_group.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/common/altivec 00:02:53.405 Installing /home/vagrant/spdk_repo/dpdk/examples/common/neon/port_group.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/common/neon 00:02:53.405 Installing /home/vagrant/spdk_repo/dpdk/examples/common/sse/port_group.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/common/sse 00:02:53.405 Installing /home/vagrant/spdk_repo/dpdk/examples/distributor/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/distributor 00:02:53.405 Installing /home/vagrant/spdk_repo/dpdk/examples/distributor/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/distributor 00:02:53.405 Installing /home/vagrant/spdk_repo/dpdk/examples/dma/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/dma 00:02:53.405 Installing /home/vagrant/spdk_repo/dpdk/examples/dma/dmafwd.c to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/dma 00:02:53.405 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool 00:02:53.405 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/ethtool-app/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:02:53.405 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/ethtool-app/ethapp.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:02:53.405 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/ethtool-app/ethapp.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:02:53.405 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/ethtool-app/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:02:53.405 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/lib/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/lib 00:02:53.405 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/lib/rte_ethtool.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/lib 00:02:53.405 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/lib/rte_ethtool.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/lib 00:02:53.405 Installing /home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:02:53.405 Installing /home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:02:53.405 Installing /home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/pipeline_common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:02:53.405 Installing /home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/pipeline_worker_generic.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:02:53.405 Installing /home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/pipeline_worker_tx.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:02:53.405 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:02:53.405 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_dev_self_test.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:02:53.405 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_dev_self_test.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:02:53.405 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:02:53.405 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:02:53.405 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_aes.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:02:53.405 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_ccm.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:02:53.405 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_cmac.c to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:02:53.405 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_ecdsa.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:02:53.405 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_gcm.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:02:53.405 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_hmac.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:02:53.405 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_rsa.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:02:53.405 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_sha.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:02:53.405 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_tdes.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:02:53.405 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_xts.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:02:53.405 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:02:53.405 Installing /home/vagrant/spdk_repo/dpdk/examples/flow_filtering/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/flow_filtering 00:02:53.405 Installing /home/vagrant/spdk_repo/dpdk/examples/flow_filtering/flow_blocks.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/flow_filtering 00:02:53.405 Installing /home/vagrant/spdk_repo/dpdk/examples/flow_filtering/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/flow_filtering 00:02:53.405 Installing /home/vagrant/spdk_repo/dpdk/examples/helloworld/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/helloworld 00:02:53.405 Installing /home/vagrant/spdk_repo/dpdk/examples/helloworld/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/helloworld 00:02:53.405 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_fragmentation/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_fragmentation 00:02:53.405 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_fragmentation/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_fragmentation 00:02:53.405 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:53.405 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/action.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:53.405 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/action.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:53.405 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/cli.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:53.405 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/cli.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:53.405 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:53.405 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/conn.c to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:53.405 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/conn.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:53.405 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/cryptodev.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:53.405 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/cryptodev.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:53.405 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/link.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:53.405 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/link.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:53.405 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:53.405 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/mempool.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:53.405 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/mempool.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:53.405 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/parser.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:53.405 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/parser.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:53.405 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/pipeline.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:53.405 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/pipeline.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:53.405 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/swq.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:53.405 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/swq.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:53.405 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/tap.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:53.405 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/tap.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:53.405 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/thread.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:53.405 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/thread.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:53.405 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/tmgr.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:53.405 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/tmgr.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:53.405 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/firewall.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:02:53.405 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/flow.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:02:53.405 Installing 
/home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/flow_crypto.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:02:53.405 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/l2fwd.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:02:53.405 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/route.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:02:53.405 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/route_ecmp.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:02:53.405 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/rss.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:02:53.405 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/tap.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:02:53.405 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_reassembly/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_reassembly 00:02:53.405 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_reassembly/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_reassembly 00:02:53.405 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:53.405 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ep0.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:53.405 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ep1.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:53.405 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/esp.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:53.405 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/esp.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:53.405 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/event_helper.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:53.405 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/event_helper.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:53.405 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/flow.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:53.405 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/flow.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:53.405 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipip.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:53.405 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec-secgw.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:53.405 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec-secgw.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:53.405 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:53.405 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:53.405 Installing 
/home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_lpm_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:53.405 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:53.406 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_process.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:53.406 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_worker.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:53.406 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_worker.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:53.406 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/parser.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:53.406 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/parser.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:53.406 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/rt.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:53.406 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sa.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:53.406 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sad.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:53.406 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sad.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:53.406 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sp4.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:53.406 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sp6.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:53.406 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/bypass_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:53.406 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:53.406 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/common_defs_secgw.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:53.406 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/data_rxtx.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:53.406 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/linux_test.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:53.406 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/load_env.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:53.406 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/pkttest.py to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:53.406 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/pkttest.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:53.406 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/run_test.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:53.406 Installing 
/home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_3descbc_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:53.406 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_3descbc_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:53.406 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aescbc_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:53.406 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aescbc_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:53.406 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aesctr_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:53.406 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aesctr_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:53.406 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aesgcm_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:53.406 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aesgcm_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:53.406 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_ipv6opts.py to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:53.406 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_3descbc_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:53.406 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_3descbc_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:53.406 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aescbc_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:53.406 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aescbc_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:53.406 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aesctr_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:53.406 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aesctr_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:53.406 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aesgcm_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:53.406 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aesgcm_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:53.406 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_null_header_reconstruct.py to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:53.406 Installing /home/vagrant/spdk_repo/dpdk/examples/ipv4_multicast/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipv4_multicast 00:02:53.406 Installing /home/vagrant/spdk_repo/dpdk/examples/ipv4_multicast/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipv4_multicast 00:02:53.406 
Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-cat/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-cat 00:02:53.406 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-cat/cat.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-cat 00:02:53.406 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-cat/cat.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-cat 00:02:53.406 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-cat/l2fwd-cat.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-cat 00:02:53.406 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-crypto/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-crypto 00:02:53.406 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-crypto/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-crypto 00:02:53.406 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:53.406 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_common.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:53.406 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:53.406 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_event.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:53.406 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_event.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:53.406 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_event_generic.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:53.406 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_event_internal_port.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:53.406 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_poll.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:53.406 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_poll.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:53.406 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:53.406 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-jobstats/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-jobstats 00:02:53.406 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-jobstats/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-jobstats 00:02:53.406 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:02:53.406 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:02:53.406 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/shm.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:02:53.406 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/shm.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:02:53.406 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/ka-agent/Makefile to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive/ka-agent 00:02:53.406 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/ka-agent/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive/ka-agent 00:02:53.406 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-macsec/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-macsec 00:02:53.406 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-macsec/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-macsec 00:02:53.406 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd 00:02:53.406 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd 00:02:53.406 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-graph/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-graph 00:02:53.406 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-graph/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-graph 00:02:53.406 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power 00:02:53.406 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power 00:02:53.406 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/main.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power 00:02:53.406 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/perf_core.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power 00:02:53.406 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/perf_core.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power 00:02:53.406 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:53.406 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/em_default_v4.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:53.406 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/em_default_v6.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:53.406 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/em_route_parse.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:53.406 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:53.406 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_acl.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:53.406 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_acl.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:53.406 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_acl_scalar.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:53.406 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_altivec.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:53.406 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:53.406 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:53.406 Installing 
/home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:53.406 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em_hlm.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:53.406 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em_hlm_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:53.406 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em_hlm_sse.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:53.406 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em_sequential.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:53.406 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_event.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:53.406 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_event.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:53.406 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_event_generic.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:53.406 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_event_internal_port.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:53.406 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_fib.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:53.406 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:53.406 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:53.406 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm_altivec.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:53.406 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:53.406 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm_sse.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:53.406 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:53.406 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_route.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:53.406 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_sse.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:53.406 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/lpm_default_v4.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:53.406 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/lpm_default_v6.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:53.406 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/lpm_route_parse.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:53.406 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:53.406 Installing /home/vagrant/spdk_repo/dpdk/examples/link_status_interrupt/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/link_status_interrupt 00:02:53.406 Installing /home/vagrant/spdk_repo/dpdk/examples/link_status_interrupt/main.c to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/link_status_interrupt 00:02:53.406 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process 00:02:53.406 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp 00:02:53.406 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_client/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_client 00:02:53.406 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_client/client.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_client 00:02:53.406 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:02:53.406 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/args.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:02:53.406 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/args.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:02:53.406 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/init.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:02:53.406 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/init.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:02:53.406 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:02:53.406 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/shared/common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/shared 00:02:53.406 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/hotplug_mp/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:02:53.406 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/hotplug_mp/commands.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:02:53.406 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/hotplug_mp/commands.list to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:02:53.406 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/hotplug_mp/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:02:53.406 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/simple_mp/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:02:53.406 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/simple_mp/commands.list to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:02:53.406 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/simple_mp/main.c to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:02:53.406 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/simple_mp/mp_commands.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:02:53.406 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/simple_mp/mp_commands.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:02:53.406 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/symmetric_mp/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/symmetric_mp 00:02:53.406 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/symmetric_mp/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/symmetric_mp 00:02:53.406 Installing /home/vagrant/spdk_repo/dpdk/examples/ntb/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ntb 00:02:53.406 Installing /home/vagrant/spdk_repo/dpdk/examples/ntb/commands.list to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ntb 00:02:53.406 Installing /home/vagrant/spdk_repo/dpdk/examples/ntb/ntb_fwd.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ntb 00:02:53.406 Installing /home/vagrant/spdk_repo/dpdk/examples/packet_ordering/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/packet_ordering 00:02:53.406 Installing /home/vagrant/spdk_repo/dpdk/examples/packet_ordering/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/packet_ordering 00:02:53.406 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:02:53.406 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/cli.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:02:53.406 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/cli.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:02:53.406 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/conn.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:02:53.406 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/conn.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:02:53.406 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:02:53.406 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/obj.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:02:53.406 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/obj.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:02:53.406 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/thread.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:02:53.406 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/thread.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:02:53.406 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/ethdev.io to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:53.406 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:53.406 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:53.406 Installing 
/home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib_nexthop_group_table.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:53.406 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib_nexthop_table.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:53.406 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib_routing_table.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:53.406 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/hash_func.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:53.406 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/hash_func.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:53.406 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/ipsec.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:53.406 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/ipsec.io to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:53.406 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/ipsec.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:53.406 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/ipsec_sa.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:53.406 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:53.406 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:53.407 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd_macswp.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:53.407 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd_macswp.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:53.407 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd_macswp_pcap.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:53.407 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd_pcap.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:53.407 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/learner.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:53.407 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/learner.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:53.407 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/meter.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:53.407 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/meter.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:53.407 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/mirroring.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:53.407 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/mirroring.spec to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:53.407 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/packet.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:53.407 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/pcap.io to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:53.407 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/recirculation.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:53.407 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/recirculation.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:53.407 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/registers.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:53.407 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/registers.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:53.407 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/rss.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:53.407 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/rss.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:53.407 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/selector.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:53.407 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/selector.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:53.407 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/selector.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:53.407 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/varbit.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:53.407 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/varbit.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:53.407 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:53.407 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:53.407 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan_pcap.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:53.407 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan_table.py to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:53.407 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan_table.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:53.407 Installing /home/vagrant/spdk_repo/dpdk/examples/ptpclient/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ptpclient 00:02:53.407 Installing /home/vagrant/spdk_repo/dpdk/examples/ptpclient/ptpclient.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ptpclient 00:02:53.407 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_meter/Makefile to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter 00:02:53.407 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_meter/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter 00:02:53.407 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_meter/main.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter 00:02:53.407 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_meter/rte_policer.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter 00:02:53.407 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_meter/rte_policer.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter 00:02:53.407 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:02:53.407 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/app_thread.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:02:53.407 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/args.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:02:53.407 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/cfg_file.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:02:53.407 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/cfg_file.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:02:53.407 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/cmdline.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:02:53.407 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/init.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:02:53.407 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:02:53.407 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/main.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:02:53.407 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/profile.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:02:53.407 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/profile_ov.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:02:53.407 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/profile_pie.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:02:53.407 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/profile_red.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:02:53.407 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/stats.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:02:53.407 Installing /home/vagrant/spdk_repo/dpdk/examples/rxtx_callbacks/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/rxtx_callbacks 00:02:53.407 Installing /home/vagrant/spdk_repo/dpdk/examples/rxtx_callbacks/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/rxtx_callbacks 00:02:53.407 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd 00:02:53.407 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/efd_node/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/efd_node 00:02:53.407 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/efd_node/node.c to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/efd_node 00:02:53.407 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/efd_server/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:02:53.407 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/efd_server/args.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:02:53.407 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/efd_server/args.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:02:53.407 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/efd_server/init.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:02:53.407 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/efd_server/init.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:02:53.407 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/efd_server/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:02:53.407 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/shared/common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/shared 00:02:53.407 Installing /home/vagrant/spdk_repo/dpdk/examples/service_cores/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/service_cores 00:02:53.407 Installing /home/vagrant/spdk_repo/dpdk/examples/service_cores/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/service_cores 00:02:53.407 Installing /home/vagrant/spdk_repo/dpdk/examples/skeleton/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/skeleton 00:02:53.407 Installing /home/vagrant/spdk_repo/dpdk/examples/skeleton/basicfwd.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/skeleton 00:02:53.407 Installing /home/vagrant/spdk_repo/dpdk/examples/timer/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/timer 00:02:53.407 Installing /home/vagrant/spdk_repo/dpdk/examples/timer/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/timer 00:02:53.407 Installing /home/vagrant/spdk_repo/dpdk/examples/vdpa/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vdpa 00:02:53.407 Installing /home/vagrant/spdk_repo/dpdk/examples/vdpa/commands.list to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vdpa 00:02:53.407 Installing /home/vagrant/spdk_repo/dpdk/examples/vdpa/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vdpa 00:02:53.407 Installing /home/vagrant/spdk_repo/dpdk/examples/vdpa/vdpa_blk_compact.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vdpa 00:02:53.407 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost 00:02:53.407 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost 00:02:53.407 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost/main.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost 00:02:53.407 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost/virtio_net.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost 00:02:53.407 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:02:53.407 Installing 
/home/vagrant/spdk_repo/dpdk/examples/vhost_blk/blk.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:02:53.407 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/blk_spec.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:02:53.407 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/vhost_blk.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:02:53.407 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/vhost_blk.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:02:53.407 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/vhost_blk_compat.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:02:53.407 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_crypto/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_crypto 00:02:53.407 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_crypto/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_crypto 00:02:53.407 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:53.407 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/channel_manager.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:53.407 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/channel_manager.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:53.407 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/channel_monitor.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:53.407 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/channel_monitor.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:53.407 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:53.407 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/oob_monitor.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:53.407 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/oob_monitor_nop.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:53.407 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/oob_monitor_x86.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:53.407 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/parse.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:53.407 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/parse.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:53.407 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/power_manager.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:53.407 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/power_manager.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:53.407 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/vm_power_cli.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:53.407 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/vm_power_cli.h to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:53.407 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:02:53.407 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:02:53.407 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/parse.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:02:53.407 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/parse.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:02:53.407 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/vm_power_cli_guest.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:02:53.407 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/vm_power_cli_guest.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:02:53.407 Installing /home/vagrant/spdk_repo/dpdk/examples/vmdq/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vmdq 00:02:53.407 Installing /home/vagrant/spdk_repo/dpdk/examples/vmdq/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vmdq 00:02:53.407 Installing /home/vagrant/spdk_repo/dpdk/examples/vmdq_dcb/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vmdq_dcb 00:02:53.407 Installing /home/vagrant/spdk_repo/dpdk/examples/vmdq_dcb/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vmdq_dcb 00:02:53.407 Installing lib/librte_log.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:53.407 Installing lib/librte_log.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:53.666 Installing lib/librte_kvargs.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:53.666 Installing lib/librte_kvargs.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:53.666 Installing lib/librte_telemetry.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:53.666 Installing lib/librte_telemetry.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:53.666 Installing lib/librte_eal.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:53.666 Installing lib/librte_eal.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:53.666 Installing lib/librte_ring.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:53.666 Installing lib/librte_ring.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:53.666 Installing lib/librte_rcu.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:53.666 Installing lib/librte_rcu.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:53.666 Installing lib/librte_mempool.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:53.666 Installing lib/librte_mempool.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:53.666 Installing lib/librte_mbuf.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:53.666 Installing lib/librte_mbuf.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:53.666 Installing lib/librte_net.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:53.666 Installing lib/librte_net.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:53.666 Installing lib/librte_meter.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:53.666 Installing lib/librte_meter.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:53.666 Installing 
lib/librte_ethdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:53.666 Installing lib/librte_ethdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:53.666 Installing lib/librte_pci.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:53.666 Installing lib/librte_pci.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:53.666 Installing lib/librte_cmdline.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:53.666 Installing lib/librte_cmdline.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:53.666 Installing lib/librte_metrics.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:53.666 Installing lib/librte_metrics.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:53.666 Installing lib/librte_hash.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:53.666 Installing lib/librte_hash.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:53.666 Installing lib/librte_timer.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:53.666 Installing lib/librte_timer.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:53.666 Installing lib/librte_acl.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:53.666 Installing lib/librte_acl.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:53.666 Installing lib/librte_bbdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:53.666 Installing lib/librte_bbdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:53.666 Installing lib/librte_bitratestats.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:53.666 Installing lib/librte_bitratestats.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:53.666 Installing lib/librte_bpf.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:53.666 Installing lib/librte_bpf.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:53.666 Installing lib/librte_cfgfile.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:53.666 Installing lib/librte_cfgfile.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:53.666 Installing lib/librte_compressdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:53.666 Installing lib/librte_compressdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:53.666 Installing lib/librte_cryptodev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:53.666 Installing lib/librte_cryptodev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:53.666 Installing lib/librte_distributor.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:53.666 Installing lib/librte_distributor.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:53.666 Installing lib/librte_dmadev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:53.666 Installing lib/librte_dmadev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:53.666 Installing lib/librte_efd.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:53.666 Installing lib/librte_efd.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:53.666 Installing lib/librte_eventdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:53.666 Installing lib/librte_eventdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:53.666 Installing lib/librte_dispatcher.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:53.666 Installing lib/librte_dispatcher.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:53.666 Installing lib/librte_gpudev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:53.666 Installing lib/librte_gpudev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:53.666 Installing lib/librte_gro.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:53.666 Installing lib/librte_gro.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 
00:02:53.666 Installing lib/librte_gso.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:53.666 Installing lib/librte_gso.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:53.666 Installing lib/librte_ip_frag.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:53.666 Installing lib/librte_ip_frag.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:53.666 Installing lib/librte_jobstats.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:53.666 Installing lib/librte_jobstats.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:53.666 Installing lib/librte_latencystats.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:53.666 Installing lib/librte_latencystats.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:53.666 Installing lib/librte_lpm.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:53.666 Installing lib/librte_lpm.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:53.666 Installing lib/librte_member.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:53.666 Installing lib/librte_member.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:53.666 Installing lib/librte_pcapng.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:53.666 Installing lib/librte_pcapng.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:53.666 Installing lib/librte_power.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:53.666 Installing lib/librte_power.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:53.666 Installing lib/librte_rawdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:53.666 Installing lib/librte_rawdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:53.666 Installing lib/librte_regexdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:53.666 Installing lib/librte_regexdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:53.666 Installing lib/librte_mldev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:53.666 Installing lib/librte_mldev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:53.666 Installing lib/librte_rib.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:53.666 Installing lib/librte_rib.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:53.666 Installing lib/librte_reorder.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:53.666 Installing lib/librte_reorder.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:53.666 Installing lib/librte_sched.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:53.666 Installing lib/librte_sched.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:53.666 Installing lib/librte_security.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:53.666 Installing lib/librte_security.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:53.666 Installing lib/librte_stack.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:53.666 Installing lib/librte_stack.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:53.666 Installing lib/librte_vhost.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:53.666 Installing lib/librte_vhost.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:53.666 Installing lib/librte_ipsec.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:53.666 Installing lib/librte_ipsec.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:53.666 Installing lib/librte_pdcp.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:53.666 Installing lib/librte_pdcp.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:53.666 Installing lib/librte_fib.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:53.666 Installing lib/librte_fib.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 
00:02:53.666 Installing lib/librte_port.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:53.666 Installing lib/librte_port.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:53.666 Installing lib/librte_pdump.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:53.666 Installing lib/librte_pdump.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:53.666 Installing lib/librte_table.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:53.666 Installing lib/librte_table.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:53.666 Installing lib/librte_pipeline.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:53.666 Installing lib/librte_pipeline.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:53.666 Installing lib/librte_graph.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:53.666 Installing lib/librte_graph.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:53.925 Installing lib/librte_node.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:53.925 Installing lib/librte_node.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:53.925 Installing drivers/librte_bus_pci.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:53.925 Installing drivers/librte_bus_pci.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0 00:02:53.925 Installing drivers/librte_bus_vdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:53.925 Installing drivers/librte_bus_vdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0 00:02:53.925 Installing drivers/librte_mempool_ring.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:53.925 Installing drivers/librte_mempool_ring.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0 00:02:53.925 Installing drivers/librte_net_i40e.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:53.925 Installing drivers/librte_net_i40e.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0 00:02:53.925 Installing app/dpdk-dumpcap to /home/vagrant/spdk_repo/dpdk/build/bin 00:02:53.925 Installing app/dpdk-graph to /home/vagrant/spdk_repo/dpdk/build/bin 00:02:53.925 Installing app/dpdk-pdump to /home/vagrant/spdk_repo/dpdk/build/bin 00:02:53.925 Installing app/dpdk-proc-info to /home/vagrant/spdk_repo/dpdk/build/bin 00:02:53.925 Installing app/dpdk-test-acl to /home/vagrant/spdk_repo/dpdk/build/bin 00:02:53.925 Installing app/dpdk-test-bbdev to /home/vagrant/spdk_repo/dpdk/build/bin 00:02:53.925 Installing app/dpdk-test-cmdline to /home/vagrant/spdk_repo/dpdk/build/bin 00:02:53.925 Installing app/dpdk-test-compress-perf to /home/vagrant/spdk_repo/dpdk/build/bin 00:02:53.925 Installing app/dpdk-test-crypto-perf to /home/vagrant/spdk_repo/dpdk/build/bin 00:02:53.925 Installing app/dpdk-test-dma-perf to /home/vagrant/spdk_repo/dpdk/build/bin 00:02:53.925 Installing app/dpdk-test-eventdev to /home/vagrant/spdk_repo/dpdk/build/bin 00:02:53.925 Installing app/dpdk-test-fib to /home/vagrant/spdk_repo/dpdk/build/bin 00:02:53.925 Installing app/dpdk-test-flow-perf to /home/vagrant/spdk_repo/dpdk/build/bin 00:02:53.926 Installing app/dpdk-test-gpudev to /home/vagrant/spdk_repo/dpdk/build/bin 00:02:53.926 Installing app/dpdk-test-mldev to /home/vagrant/spdk_repo/dpdk/build/bin 00:02:53.926 Installing app/dpdk-test-pipeline to /home/vagrant/spdk_repo/dpdk/build/bin 00:02:53.926 Installing app/dpdk-testpmd to /home/vagrant/spdk_repo/dpdk/build/bin 00:02:53.926 Installing app/dpdk-test-regex to /home/vagrant/spdk_repo/dpdk/build/bin 00:02:53.926 Installing app/dpdk-test-sad to /home/vagrant/spdk_repo/dpdk/build/bin 00:02:53.926 Installing 
app/dpdk-test-security-perf to /home/vagrant/spdk_repo/dpdk/build/bin 00:02:53.926 Installing /home/vagrant/spdk_repo/dpdk/config/rte_config.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:53.926 Installing /home/vagrant/spdk_repo/dpdk/lib/log/rte_log.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:53.926 Installing /home/vagrant/spdk_repo/dpdk/lib/kvargs/rte_kvargs.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:53.926 Installing /home/vagrant/spdk_repo/dpdk/lib/telemetry/rte_telemetry.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:53.926 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_atomic.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:02:53.926 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_byteorder.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:02:53.926 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_cpuflags.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:02:53.926 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_cycles.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:02:53.926 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_io.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:02:53.926 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_memcpy.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:02:53.926 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_pause.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:02:53.926 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_power_intrinsics.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:02:53.926 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_prefetch.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:02:53.926 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_rwlock.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:02:53.926 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_spinlock.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:02:53.926 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_vect.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:02:53.926 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_atomic.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:53.926 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_byteorder.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:53.926 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_cpuflags.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:53.926 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_cycles.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:53.926 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_io.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:53.926 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_memcpy.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:53.926 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_pause.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:53.926 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_power_intrinsics.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:53.926 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_prefetch.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:53.926 Installing 
/home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_rtm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:53.926 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_rwlock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:53.926 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_spinlock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:53.926 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_vect.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:53.926 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_atomic_32.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:53.926 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_atomic_64.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:53.926 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_byteorder_32.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:53.926 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_byteorder_64.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:53.926 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_alarm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:53.926 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_bitmap.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:53.926 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_bitops.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:53.926 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_branch_prediction.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:53.926 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_bus.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:53.926 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_class.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:53.926 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_common.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:53.926 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_compat.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:53.926 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_debug.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:53.926 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_dev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:53.926 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_devargs.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:53.926 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_eal.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:53.926 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_eal_memconfig.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:53.926 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_eal_trace.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:53.926 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_errno.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:53.926 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_epoll.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:53.926 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_fbarray.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:53.926 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_hexdump.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:53.926 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_hypervisor.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:53.926 Installing 
/home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_interrupts.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:53.926 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_keepalive.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:53.926 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_launch.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:53.926 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_lcore.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:53.926 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_lock_annotations.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:53.926 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_malloc.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:53.926 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_mcslock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:53.926 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_memory.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:53.926 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_memzone.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:53.926 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_pci_dev_feature_defs.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:53.926 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_pci_dev_features.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:53.926 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_per_lcore.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:53.926 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_pflock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:53.926 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_random.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:53.926 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_reciprocal.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:53.926 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_seqcount.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:53.926 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_seqlock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:53.926 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_service.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:53.926 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_service_component.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:53.926 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_stdatomic.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:53.926 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_string_fns.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:53.926 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_tailq.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:53.926 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_thread.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:53.926 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_ticketlock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:53.926 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_time.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:53.926 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_trace.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:53.926 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_trace_point.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:53.926 Installing 
/home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_trace_point_register.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:53.926 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_uuid.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:53.926 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_version.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:53.926 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_vfio.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:53.926 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/linux/include/rte_os.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:53.926 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:53.926 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:53.926 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_elem.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:53.926 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_elem_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:53.926 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_c11_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:53.926 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_generic_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:53.926 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_hts.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:53.926 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_hts_elem_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:53.926 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_peek.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:53.926 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_peek_elem_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:53.926 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_peek_zc.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:53.926 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_rts.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:53.927 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_rts_elem_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:53.927 Installing /home/vagrant/spdk_repo/dpdk/lib/rcu/rte_rcu_qsbr.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:53.927 Installing /home/vagrant/spdk_repo/dpdk/lib/mempool/rte_mempool.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:53.927 Installing /home/vagrant/spdk_repo/dpdk/lib/mempool/rte_mempool_trace_fp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:53.927 Installing /home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:53.927 Installing /home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:53.927 Installing /home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf_ptype.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:53.927 Installing /home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf_pool_ops.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:53.927 Installing /home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf_dyn.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:53.927 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_ip.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:53.927 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_tcp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:53.927 Installing 
/home/vagrant/spdk_repo/dpdk/lib/net/rte_udp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:53.927 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_tls.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:53.927 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_dtls.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:53.927 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_esp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:53.927 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_sctp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:53.927 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_icmp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:53.927 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_arp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:53.927 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_ether.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:53.927 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_macsec.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:53.927 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_vxlan.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:53.927 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_gre.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:53.927 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_gtp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:53.927 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_net.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:53.927 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_net_crc.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:53.927 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_mpls.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:53.927 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_higig.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:53.927 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_ecpri.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:53.927 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_pdcp_hdr.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:53.927 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_geneve.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:53.927 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_l2tpv2.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:53.927 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_ppp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:53.927 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_ib.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:53.927 Installing /home/vagrant/spdk_repo/dpdk/lib/meter/rte_meter.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:53.927 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_cman.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:53.927 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_ethdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:53.927 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_ethdev_trace_fp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:53.927 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_dev_info.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:53.927 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_flow.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:53.927 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_flow_driver.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:53.927 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_mtr.h to 
/home/vagrant/spdk_repo/dpdk/build/include 00:02:53.927 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_mtr_driver.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:53.927 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_tm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:53.927 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_tm_driver.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:53.927 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_ethdev_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:53.927 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_eth_ctrl.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:53.927 Installing /home/vagrant/spdk_repo/dpdk/lib/pci/rte_pci.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:53.927 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:53.927 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:53.927 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_num.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:53.927 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_ipaddr.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:53.927 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_etheraddr.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:53.927 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_string.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:53.927 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_rdline.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:53.927 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_vt100.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:53.927 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_socket.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:53.927 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_cirbuf.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:53.927 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_portlist.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:53.927 Installing /home/vagrant/spdk_repo/dpdk/lib/metrics/rte_metrics.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:53.927 Installing /home/vagrant/spdk_repo/dpdk/lib/metrics/rte_metrics_telemetry.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:53.927 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_fbk_hash.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:53.927 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_hash_crc.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:53.927 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_hash.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:53.927 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_jhash.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:53.927 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_thash.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:53.927 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_thash_gfni.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:53.927 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_crc_arm64.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:53.927 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_crc_generic.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:53.927 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_crc_sw.h to 
/home/vagrant/spdk_repo/dpdk/build/include 00:02:53.927 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_crc_x86.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:53.927 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_thash_x86_gfni.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:53.927 Installing /home/vagrant/spdk_repo/dpdk/lib/timer/rte_timer.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:53.927 Installing /home/vagrant/spdk_repo/dpdk/lib/acl/rte_acl.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:53.927 Installing /home/vagrant/spdk_repo/dpdk/lib/acl/rte_acl_osdep.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:53.927 Installing /home/vagrant/spdk_repo/dpdk/lib/bbdev/rte_bbdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:53.927 Installing /home/vagrant/spdk_repo/dpdk/lib/bbdev/rte_bbdev_pmd.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:53.927 Installing /home/vagrant/spdk_repo/dpdk/lib/bbdev/rte_bbdev_op.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:53.927 Installing /home/vagrant/spdk_repo/dpdk/lib/bitratestats/rte_bitrate.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:53.927 Installing /home/vagrant/spdk_repo/dpdk/lib/bpf/bpf_def.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:53.927 Installing /home/vagrant/spdk_repo/dpdk/lib/bpf/rte_bpf.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:53.927 Installing /home/vagrant/spdk_repo/dpdk/lib/bpf/rte_bpf_ethdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:53.927 Installing /home/vagrant/spdk_repo/dpdk/lib/cfgfile/rte_cfgfile.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:53.927 Installing /home/vagrant/spdk_repo/dpdk/lib/compressdev/rte_compressdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:53.927 Installing /home/vagrant/spdk_repo/dpdk/lib/compressdev/rte_comp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:53.927 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_cryptodev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:53.927 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_cryptodev_trace_fp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:53.927 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_crypto.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:53.927 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_crypto_sym.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:53.927 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_crypto_asym.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:53.927 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_cryptodev_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:53.927 Installing /home/vagrant/spdk_repo/dpdk/lib/distributor/rte_distributor.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:53.927 Installing /home/vagrant/spdk_repo/dpdk/lib/dmadev/rte_dmadev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:53.927 Installing /home/vagrant/spdk_repo/dpdk/lib/dmadev/rte_dmadev_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:53.927 Installing /home/vagrant/spdk_repo/dpdk/lib/efd/rte_efd.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:53.927 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_crypto_adapter.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:53.927 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_dma_adapter.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:53.927 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_eth_rx_adapter.h 
to /home/vagrant/spdk_repo/dpdk/build/include 00:02:53.927 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_eth_tx_adapter.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:53.927 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_ring.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:53.927 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_timer_adapter.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:53.927 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_eventdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:53.927 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_eventdev_trace_fp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:53.927 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_eventdev_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:53.927 Installing /home/vagrant/spdk_repo/dpdk/lib/dispatcher/rte_dispatcher.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:53.927 Installing /home/vagrant/spdk_repo/dpdk/lib/gpudev/rte_gpudev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:53.927 Installing /home/vagrant/spdk_repo/dpdk/lib/gro/rte_gro.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:53.928 Installing /home/vagrant/spdk_repo/dpdk/lib/gso/rte_gso.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:53.928 Installing /home/vagrant/spdk_repo/dpdk/lib/ip_frag/rte_ip_frag.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:53.928 Installing /home/vagrant/spdk_repo/dpdk/lib/jobstats/rte_jobstats.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:53.928 Installing /home/vagrant/spdk_repo/dpdk/lib/latencystats/rte_latencystats.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:53.928 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:53.928 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm6.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:53.928 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_altivec.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:53.928 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_neon.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:53.928 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_scalar.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:53.928 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_sse.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:53.928 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_sve.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:53.928 Installing /home/vagrant/spdk_repo/dpdk/lib/member/rte_member.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:53.928 Installing /home/vagrant/spdk_repo/dpdk/lib/pcapng/rte_pcapng.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:53.928 Installing /home/vagrant/spdk_repo/dpdk/lib/power/rte_power.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:53.928 Installing /home/vagrant/spdk_repo/dpdk/lib/power/rte_power_guest_channel.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:53.928 Installing /home/vagrant/spdk_repo/dpdk/lib/power/rte_power_pmd_mgmt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:53.928 Installing /home/vagrant/spdk_repo/dpdk/lib/power/rte_power_uncore.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:53.928 Installing /home/vagrant/spdk_repo/dpdk/lib/rawdev/rte_rawdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:53.928 Installing /home/vagrant/spdk_repo/dpdk/lib/rawdev/rte_rawdev_pmd.h to 
/home/vagrant/spdk_repo/dpdk/build/include 00:02:53.928 Installing /home/vagrant/spdk_repo/dpdk/lib/regexdev/rte_regexdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:53.928 Installing /home/vagrant/spdk_repo/dpdk/lib/regexdev/rte_regexdev_driver.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:53.928 Installing /home/vagrant/spdk_repo/dpdk/lib/regexdev/rte_regexdev_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:53.928 Installing /home/vagrant/spdk_repo/dpdk/lib/mldev/rte_mldev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:53.928 Installing /home/vagrant/spdk_repo/dpdk/lib/mldev/rte_mldev_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:53.928 Installing /home/vagrant/spdk_repo/dpdk/lib/rib/rte_rib.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:53.928 Installing /home/vagrant/spdk_repo/dpdk/lib/rib/rte_rib6.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:53.928 Installing /home/vagrant/spdk_repo/dpdk/lib/reorder/rte_reorder.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:53.928 Installing /home/vagrant/spdk_repo/dpdk/lib/sched/rte_approx.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:53.928 Installing /home/vagrant/spdk_repo/dpdk/lib/sched/rte_red.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:53.928 Installing /home/vagrant/spdk_repo/dpdk/lib/sched/rte_sched.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:53.928 Installing /home/vagrant/spdk_repo/dpdk/lib/sched/rte_sched_common.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:53.928 Installing /home/vagrant/spdk_repo/dpdk/lib/sched/rte_pie.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:53.928 Installing /home/vagrant/spdk_repo/dpdk/lib/security/rte_security.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:53.928 Installing /home/vagrant/spdk_repo/dpdk/lib/security/rte_security_driver.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:53.928 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:53.928 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_std.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:53.928 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_lf.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:53.928 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_lf_generic.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:53.928 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_lf_c11.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:53.928 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_lf_stubs.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:53.928 Installing /home/vagrant/spdk_repo/dpdk/lib/vhost/rte_vdpa.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:53.928 Installing /home/vagrant/spdk_repo/dpdk/lib/vhost/rte_vhost.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:53.928 Installing /home/vagrant/spdk_repo/dpdk/lib/vhost/rte_vhost_async.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:53.928 Installing /home/vagrant/spdk_repo/dpdk/lib/vhost/rte_vhost_crypto.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:53.928 Installing /home/vagrant/spdk_repo/dpdk/lib/ipsec/rte_ipsec.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:53.928 Installing /home/vagrant/spdk_repo/dpdk/lib/ipsec/rte_ipsec_sa.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:53.928 Installing /home/vagrant/spdk_repo/dpdk/lib/ipsec/rte_ipsec_sad.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:53.928 
Installing /home/vagrant/spdk_repo/dpdk/lib/ipsec/rte_ipsec_group.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:53.928 Installing /home/vagrant/spdk_repo/dpdk/lib/pdcp/rte_pdcp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:53.928 Installing /home/vagrant/spdk_repo/dpdk/lib/pdcp/rte_pdcp_group.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:53.928 Installing /home/vagrant/spdk_repo/dpdk/lib/fib/rte_fib.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:53.928 Installing /home/vagrant/spdk_repo/dpdk/lib/fib/rte_fib6.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:53.928 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_ethdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:53.928 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_fd.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:53.928 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_frag.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:53.928 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_ras.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:53.928 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:53.928 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_ring.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:53.928 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_sched.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:53.928 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_source_sink.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:53.928 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_sym_crypto.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:53.928 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_eventdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:54.187 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:54.187 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port_ethdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:54.187 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port_fd.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:54.187 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port_ring.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:54.187 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port_source_sink.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:54.187 Installing /home/vagrant/spdk_repo/dpdk/lib/pdump/rte_pdump.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:54.187 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_lru.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:54.187 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_hash_func.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:54.187 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:54.187 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table_em.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:54.187 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table_learner.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:54.187 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table_selector.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:54.187 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table_wm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:54.187 Installing 
/home/vagrant/spdk_repo/dpdk/lib/table/rte_table.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:54.187 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_acl.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:54.187 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_array.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:54.187 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_hash.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:54.187 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_hash_cuckoo.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:54.187 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_hash_func.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:54.187 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_lpm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:54.187 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_lpm_ipv6.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:54.187 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_stub.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:54.187 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_lru_arm64.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:54.187 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_lru_x86.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:54.187 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_hash_func_arm64.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:54.187 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_pipeline.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:54.187 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_port_in_action.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:54.187 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_table_action.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:54.187 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_swx_ipsec.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:54.187 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_swx_pipeline.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:54.187 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_swx_extern.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:54.187 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_swx_ctl.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:54.187 Installing /home/vagrant/spdk_repo/dpdk/lib/graph/rte_graph.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:54.187 Installing /home/vagrant/spdk_repo/dpdk/lib/graph/rte_graph_worker.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:54.188 Installing /home/vagrant/spdk_repo/dpdk/lib/graph/rte_graph_model_mcore_dispatch.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:54.188 Installing /home/vagrant/spdk_repo/dpdk/lib/graph/rte_graph_model_rtc.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:54.188 Installing /home/vagrant/spdk_repo/dpdk/lib/graph/rte_graph_worker_common.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:54.188 Installing /home/vagrant/spdk_repo/dpdk/lib/node/rte_node_eth_api.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:54.188 Installing /home/vagrant/spdk_repo/dpdk/lib/node/rte_node_ip4_api.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:54.188 Installing /home/vagrant/spdk_repo/dpdk/lib/node/rte_node_ip6_api.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:54.188 Installing /home/vagrant/spdk_repo/dpdk/lib/node/rte_node_udp4_input_api.h to 
/home/vagrant/spdk_repo/dpdk/build/include 00:02:54.188 Installing /home/vagrant/spdk_repo/dpdk/drivers/bus/pci/rte_bus_pci.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:54.188 Installing /home/vagrant/spdk_repo/dpdk/drivers/bus/vdev/rte_bus_vdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:54.188 Installing /home/vagrant/spdk_repo/dpdk/drivers/net/i40e/rte_pmd_i40e.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:54.188 Installing /home/vagrant/spdk_repo/dpdk/buildtools/dpdk-cmdline-gen.py to /home/vagrant/spdk_repo/dpdk/build/bin 00:02:54.188 Installing /home/vagrant/spdk_repo/dpdk/usertools/dpdk-devbind.py to /home/vagrant/spdk_repo/dpdk/build/bin 00:02:54.188 Installing /home/vagrant/spdk_repo/dpdk/usertools/dpdk-pmdinfo.py to /home/vagrant/spdk_repo/dpdk/build/bin 00:02:54.188 Installing /home/vagrant/spdk_repo/dpdk/usertools/dpdk-telemetry.py to /home/vagrant/spdk_repo/dpdk/build/bin 00:02:54.188 Installing /home/vagrant/spdk_repo/dpdk/usertools/dpdk-hugepages.py to /home/vagrant/spdk_repo/dpdk/build/bin 00:02:54.188 Installing /home/vagrant/spdk_repo/dpdk/usertools/dpdk-rss-flows.py to /home/vagrant/spdk_repo/dpdk/build/bin 00:02:54.188 Installing /home/vagrant/spdk_repo/dpdk/build-tmp/rte_build_config.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:54.188 Installing /home/vagrant/spdk_repo/dpdk/build-tmp/meson-private/libdpdk-libs.pc to /home/vagrant/spdk_repo/dpdk/build/lib/pkgconfig 00:02:54.188 Installing /home/vagrant/spdk_repo/dpdk/build-tmp/meson-private/libdpdk.pc to /home/vagrant/spdk_repo/dpdk/build/lib/pkgconfig 00:02:54.188 Installing symlink pointing to librte_log.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_log.so.24 00:02:54.188 Installing symlink pointing to librte_log.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_log.so 00:02:54.188 Installing symlink pointing to librte_kvargs.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_kvargs.so.24 00:02:54.188 Installing symlink pointing to librte_kvargs.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_kvargs.so 00:02:54.188 Installing symlink pointing to librte_telemetry.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_telemetry.so.24 00:02:54.188 Installing symlink pointing to librte_telemetry.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_telemetry.so 00:02:54.188 Installing symlink pointing to librte_eal.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_eal.so.24 00:02:54.188 Installing symlink pointing to librte_eal.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_eal.so 00:02:54.188 Installing symlink pointing to librte_ring.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ring.so.24 00:02:54.188 Installing symlink pointing to librte_ring.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ring.so 00:02:54.188 Installing symlink pointing to librte_rcu.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rcu.so.24 00:02:54.188 Installing symlink pointing to librte_rcu.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rcu.so 00:02:54.188 Installing symlink pointing to librte_mempool.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_mempool.so.24 00:02:54.188 Installing symlink pointing to librte_mempool.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_mempool.so 00:02:54.188 Installing symlink pointing to librte_mbuf.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_mbuf.so.24 00:02:54.188 Installing symlink pointing to librte_mbuf.so.24 to 
/home/vagrant/spdk_repo/dpdk/build/lib/librte_mbuf.so 00:02:54.188 Installing symlink pointing to librte_net.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_net.so.24 00:02:54.188 Installing symlink pointing to librte_net.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_net.so 00:02:54.188 Installing symlink pointing to librte_meter.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_meter.so.24 00:02:54.188 Installing symlink pointing to librte_meter.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_meter.so 00:02:54.188 Installing symlink pointing to librte_ethdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ethdev.so.24 00:02:54.188 Installing symlink pointing to librte_ethdev.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ethdev.so 00:02:54.188 Installing symlink pointing to librte_pci.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pci.so.24 00:02:54.188 Installing symlink pointing to librte_pci.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pci.so 00:02:54.188 Installing symlink pointing to librte_cmdline.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cmdline.so.24 00:02:54.188 Installing symlink pointing to librte_cmdline.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cmdline.so 00:02:54.188 Installing symlink pointing to librte_metrics.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_metrics.so.24 00:02:54.188 Installing symlink pointing to librte_metrics.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_metrics.so 00:02:54.188 Installing symlink pointing to librte_hash.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_hash.so.24 00:02:54.188 Installing symlink pointing to librte_hash.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_hash.so 00:02:54.188 Installing symlink pointing to librte_timer.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_timer.so.24 00:02:54.188 Installing symlink pointing to librte_timer.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_timer.so 00:02:54.188 Installing symlink pointing to librte_acl.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_acl.so.24 00:02:54.188 Installing symlink pointing to librte_acl.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_acl.so 00:02:54.188 Installing symlink pointing to librte_bbdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bbdev.so.24 00:02:54.188 Installing symlink pointing to librte_bbdev.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bbdev.so 00:02:54.188 Installing symlink pointing to librte_bitratestats.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bitratestats.so.24 00:02:54.188 Installing symlink pointing to librte_bitratestats.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bitratestats.so 00:02:54.188 Installing symlink pointing to librte_bpf.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bpf.so.24 00:02:54.188 Installing symlink pointing to librte_bpf.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bpf.so 00:02:54.188 Installing symlink pointing to librte_cfgfile.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cfgfile.so.24 00:02:54.188 Installing symlink pointing to librte_cfgfile.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cfgfile.so 00:02:54.188 Installing symlink pointing to librte_compressdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_compressdev.so.24 00:02:54.188 Installing symlink pointing to librte_compressdev.so.24 to 
/home/vagrant/spdk_repo/dpdk/build/lib/librte_compressdev.so 00:02:54.188 Installing symlink pointing to librte_cryptodev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cryptodev.so.24 00:02:54.188 Installing symlink pointing to librte_cryptodev.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cryptodev.so 00:02:54.188 Installing symlink pointing to librte_distributor.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_distributor.so.24 00:02:54.188 Installing symlink pointing to librte_distributor.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_distributor.so 00:02:54.188 Installing symlink pointing to librte_dmadev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_dmadev.so.24 00:02:54.188 Installing symlink pointing to librte_dmadev.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_dmadev.so 00:02:54.188 Installing symlink pointing to librte_efd.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_efd.so.24 00:02:54.188 Installing symlink pointing to librte_efd.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_efd.so 00:02:54.188 Installing symlink pointing to librte_eventdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_eventdev.so.24 00:02:54.188 Installing symlink pointing to librte_eventdev.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_eventdev.so 00:02:54.188 Installing symlink pointing to librte_dispatcher.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_dispatcher.so.24 00:02:54.188 Installing symlink pointing to librte_dispatcher.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_dispatcher.so 00:02:54.188 Installing symlink pointing to librte_gpudev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gpudev.so.24 00:02:54.188 Installing symlink pointing to librte_gpudev.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gpudev.so 00:02:54.188 Installing symlink pointing to librte_gro.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gro.so.24 00:02:54.188 Installing symlink pointing to librte_gro.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gro.so 00:02:54.188 Installing symlink pointing to librte_gso.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gso.so.24 00:02:54.188 Installing symlink pointing to librte_gso.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gso.so 00:02:54.188 Installing symlink pointing to librte_ip_frag.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ip_frag.so.24 00:02:54.188 Installing symlink pointing to librte_ip_frag.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ip_frag.so 00:02:54.188 Installing symlink pointing to librte_jobstats.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_jobstats.so.24 00:02:54.188 Installing symlink pointing to librte_jobstats.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_jobstats.so 00:02:54.188 Installing symlink pointing to librte_latencystats.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_latencystats.so.24 00:02:54.188 Installing symlink pointing to librte_latencystats.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_latencystats.so 00:02:54.188 Installing symlink pointing to librte_lpm.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_lpm.so.24 00:02:54.188 Installing symlink pointing to librte_lpm.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_lpm.so 00:02:54.188 Installing symlink pointing to librte_member.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_member.so.24 00:02:54.188 Installing symlink pointing to 
librte_member.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_member.so 00:02:54.188 Installing symlink pointing to librte_pcapng.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pcapng.so.24 00:02:54.188 Installing symlink pointing to librte_pcapng.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pcapng.so 00:02:54.188 Installing symlink pointing to librte_power.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_power.so.24 00:02:54.188 Installing symlink pointing to librte_power.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_power.so 00:02:54.188 Installing symlink pointing to librte_rawdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rawdev.so.24 00:02:54.188 Installing symlink pointing to librte_rawdev.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rawdev.so 00:02:54.188 Installing symlink pointing to librte_regexdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_regexdev.so.24 00:02:54.188 Installing symlink pointing to librte_regexdev.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_regexdev.so 00:02:54.188 Installing symlink pointing to librte_mldev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_mldev.so.24 00:02:54.188 Installing symlink pointing to librte_mldev.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_mldev.so 00:02:54.188 Installing symlink pointing to librte_rib.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rib.so.24 00:02:54.188 Installing symlink pointing to librte_rib.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rib.so 00:02:54.188 Installing symlink pointing to librte_reorder.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_reorder.so.24 00:02:54.189 Installing symlink pointing to librte_reorder.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_reorder.so 00:02:54.189 Installing symlink pointing to librte_sched.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_sched.so.24 00:02:54.189 Installing symlink pointing to librte_sched.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_sched.so 00:02:54.189 Installing symlink pointing to librte_security.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_security.so.24 00:02:54.189 Installing symlink pointing to librte_security.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_security.so 00:02:54.189 './librte_bus_pci.so' -> 'dpdk/pmds-24.0/librte_bus_pci.so' 00:02:54.189 './librte_bus_pci.so.24' -> 'dpdk/pmds-24.0/librte_bus_pci.so.24' 00:02:54.189 './librte_bus_pci.so.24.0' -> 'dpdk/pmds-24.0/librte_bus_pci.so.24.0' 00:02:54.189 './librte_bus_vdev.so' -> 'dpdk/pmds-24.0/librte_bus_vdev.so' 00:02:54.189 './librte_bus_vdev.so.24' -> 'dpdk/pmds-24.0/librte_bus_vdev.so.24' 00:02:54.189 './librte_bus_vdev.so.24.0' -> 'dpdk/pmds-24.0/librte_bus_vdev.so.24.0' 00:02:54.189 './librte_mempool_ring.so' -> 'dpdk/pmds-24.0/librte_mempool_ring.so' 00:02:54.189 './librte_mempool_ring.so.24' -> 'dpdk/pmds-24.0/librte_mempool_ring.so.24' 00:02:54.189 './librte_mempool_ring.so.24.0' -> 'dpdk/pmds-24.0/librte_mempool_ring.so.24.0' 00:02:54.189 './librte_net_i40e.so' -> 'dpdk/pmds-24.0/librte_net_i40e.so' 00:02:54.189 './librte_net_i40e.so.24' -> 'dpdk/pmds-24.0/librte_net_i40e.so.24' 00:02:54.189 './librte_net_i40e.so.24.0' -> 'dpdk/pmds-24.0/librte_net_i40e.so.24.0' 00:02:54.189 Installing symlink pointing to librte_stack.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_stack.so.24 00:02:54.189 Installing symlink pointing to librte_stack.so.24 to 
/home/vagrant/spdk_repo/dpdk/build/lib/librte_stack.so 00:02:54.189 Installing symlink pointing to librte_vhost.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_vhost.so.24 00:02:54.189 Installing symlink pointing to librte_vhost.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_vhost.so 00:02:54.189 Installing symlink pointing to librte_ipsec.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ipsec.so.24 00:02:54.189 Installing symlink pointing to librte_ipsec.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ipsec.so 00:02:54.189 Installing symlink pointing to librte_pdcp.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pdcp.so.24 00:02:54.189 Installing symlink pointing to librte_pdcp.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pdcp.so 00:02:54.189 Installing symlink pointing to librte_fib.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_fib.so.24 00:02:54.189 Installing symlink pointing to librte_fib.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_fib.so 00:02:54.189 Installing symlink pointing to librte_port.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_port.so.24 00:02:54.189 Installing symlink pointing to librte_port.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_port.so 00:02:54.189 Installing symlink pointing to librte_pdump.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pdump.so.24 00:02:54.189 Installing symlink pointing to librte_pdump.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pdump.so 00:02:54.189 Installing symlink pointing to librte_table.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_table.so.24 00:02:54.189 Installing symlink pointing to librte_table.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_table.so 00:02:54.189 Installing symlink pointing to librte_pipeline.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pipeline.so.24 00:02:54.189 Installing symlink pointing to librte_pipeline.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pipeline.so 00:02:54.189 Installing symlink pointing to librte_graph.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_graph.so.24 00:02:54.189 Installing symlink pointing to librte_graph.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_graph.so 00:02:54.189 Installing symlink pointing to librte_node.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_node.so.24 00:02:54.189 Installing symlink pointing to librte_node.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_node.so 00:02:54.189 Installing symlink pointing to librte_bus_pci.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_pci.so.24 00:02:54.189 Installing symlink pointing to librte_bus_pci.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_pci.so 00:02:54.189 Installing symlink pointing to librte_bus_vdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_vdev.so.24 00:02:54.189 Installing symlink pointing to librte_bus_vdev.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_vdev.so 00:02:54.189 Installing symlink pointing to librte_mempool_ring.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_mempool_ring.so.24 00:02:54.189 Installing symlink pointing to librte_mempool_ring.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_mempool_ring.so 00:02:54.189 Installing symlink pointing to librte_net_i40e.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_net_i40e.so.24 
00:02:54.189 Installing symlink pointing to librte_net_i40e.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_net_i40e.so 00:02:54.189 Running custom install script '/bin/sh /home/vagrant/spdk_repo/dpdk/config/../buildtools/symlink-drivers-solibs.sh lib dpdk/pmds-24.0' 00:02:54.189 00:37:06 -- common/autobuild_common.sh@192 -- $ uname -s 00:02:54.189 00:37:06 -- common/autobuild_common.sh@192 -- $ [[ Linux == \F\r\e\e\B\S\D ]] 00:02:54.189 00:37:06 -- common/autobuild_common.sh@203 -- $ cat 00:02:54.189 00:37:06 -- common/autobuild_common.sh@208 -- $ cd /home/vagrant/spdk_repo/spdk 00:02:54.189 00:02:54.189 real 0m56.067s 00:02:54.189 user 6m38.123s 00:02:54.189 sys 1m7.721s 00:02:54.189 00:37:06 -- common/autotest_common.sh@1115 -- $ xtrace_disable 00:02:54.189 00:37:06 -- common/autotest_common.sh@10 -- $ set +x 00:02:54.189 ************************************ 00:02:54.189 END TEST build_native_dpdk 00:02:54.189 ************************************ 00:02:54.189 00:37:06 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 00:02:54.189 00:37:06 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 00:02:54.189 00:37:06 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]] 00:02:54.189 00:37:06 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]] 00:02:54.189 00:37:06 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]] 00:02:54.189 00:37:06 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]] 00:02:54.189 00:37:06 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]] 00:02:54.189 00:37:06 -- spdk/autobuild.sh@67 -- $ /home/vagrant/spdk_repo/spdk/configure --enable-debug --enable-werror --with-rdma --with-usdt --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-dpdk=/home/vagrant/spdk_repo/dpdk/build --with-avahi --with-golang --with-shared 00:02:54.447 Using /home/vagrant/spdk_repo/dpdk/build/lib/pkgconfig for additional libs... 00:02:54.447 DPDK libraries: /home/vagrant/spdk_repo/dpdk/build/lib 00:02:54.447 DPDK includes: //home/vagrant/spdk_repo/dpdk/build/include 00:02:54.447 Using default SPDK env in /home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:02:55.013 Using 'verbs' RDMA provider 00:03:10.457 Configuring ISA-L (logfile: /home/vagrant/spdk_repo/spdk/isa-l/spdk-isal.log)...done. 00:03:22.660 Configuring ISA-L-crypto (logfile: /home/vagrant/spdk_repo/spdk/isa-l-crypto/spdk-isal-crypto.log)...done. 00:03:22.918 go version go1.21.1 linux/amd64 00:03:23.483 Creating mk/config.mk...done. 00:03:23.483 Creating mk/cc.flags.mk...done. 00:03:23.483 Type 'make' to build. 00:03:23.483 00:37:35 -- spdk/autobuild.sh@69 -- $ run_test make make -j10 00:03:23.483 00:37:35 -- common/autotest_common.sh@1087 -- $ '[' 3 -le 1 ']' 00:03:23.483 00:37:35 -- common/autotest_common.sh@1093 -- $ xtrace_disable 00:03:23.483 00:37:35 -- common/autotest_common.sh@10 -- $ set +x 00:03:23.483 ************************************ 00:03:23.483 START TEST make 00:03:23.483 ************************************ 00:03:23.483 00:37:35 -- common/autotest_common.sh@1114 -- $ make -j10 00:03:23.740 make[1]: Nothing to be done for 'all'. 
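The configure invocation above builds SPDK against the DPDK tree that was just installed, picking up /home/vagrant/spdk_repo/dpdk/build/lib/pkgconfig for the additional libraries. A minimal way to reproduce that step by hand, using only flags that appear in the log (running it outside the autobuild wrapper is an assumption):

  cd /home/vagrant/spdk_repo/spdk
  # Point the build at the prebuilt DPDK and enable shared libraries, as in the log
  ./configure --with-dpdk=/home/vagrant/spdk_repo/dpdk/build --with-shared \
      --enable-debug --enable-werror
  make -j10    # same parallelism as the run_test make invocation above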
00:03:45.709 CC lib/ut/ut.o 00:03:45.709 CC lib/log/log.o 00:03:45.709 CC lib/log/log_flags.o 00:03:45.709 CC lib/log/log_deprecated.o 00:03:45.709 CC lib/ut_mock/mock.o 00:03:45.709 LIB libspdk_ut_mock.a 00:03:45.709 LIB libspdk_ut.a 00:03:45.709 LIB libspdk_log.a 00:03:45.709 SO libspdk_ut_mock.so.5.0 00:03:45.709 SO libspdk_ut.so.1.0 00:03:45.709 SO libspdk_log.so.6.1 00:03:45.709 SYMLINK libspdk_ut_mock.so 00:03:45.709 SYMLINK libspdk_ut.so 00:03:45.709 SYMLINK libspdk_log.so 00:03:45.709 CXX lib/trace_parser/trace.o 00:03:45.709 CC lib/ioat/ioat.o 00:03:45.709 CC lib/dma/dma.o 00:03:45.709 CC lib/util/base64.o 00:03:45.709 CC lib/util/bit_array.o 00:03:45.709 CC lib/util/cpuset.o 00:03:45.709 CC lib/util/crc16.o 00:03:45.709 CC lib/util/crc32.o 00:03:45.709 CC lib/util/crc32c.o 00:03:45.709 CC lib/vfio_user/host/vfio_user_pci.o 00:03:45.709 CC lib/util/crc32_ieee.o 00:03:45.709 CC lib/vfio_user/host/vfio_user.o 00:03:45.709 CC lib/util/crc64.o 00:03:45.709 CC lib/util/dif.o 00:03:45.709 CC lib/util/fd.o 00:03:45.709 LIB libspdk_dma.a 00:03:45.709 SO libspdk_dma.so.3.0 00:03:45.709 CC lib/util/file.o 00:03:45.709 LIB libspdk_ioat.a 00:03:45.709 CC lib/util/hexlify.o 00:03:45.709 SO libspdk_ioat.so.6.0 00:03:45.709 CC lib/util/iov.o 00:03:45.709 SYMLINK libspdk_dma.so 00:03:45.709 CC lib/util/math.o 00:03:45.709 CC lib/util/pipe.o 00:03:45.709 CC lib/util/strerror_tls.o 00:03:45.709 LIB libspdk_vfio_user.a 00:03:45.709 SYMLINK libspdk_ioat.so 00:03:45.709 CC lib/util/string.o 00:03:45.709 SO libspdk_vfio_user.so.4.0 00:03:45.709 CC lib/util/uuid.o 00:03:45.709 CC lib/util/fd_group.o 00:03:45.968 SYMLINK libspdk_vfio_user.so 00:03:45.969 CC lib/util/xor.o 00:03:45.969 CC lib/util/zipf.o 00:03:46.228 LIB libspdk_util.a 00:03:46.228 SO libspdk_util.so.8.0 00:03:46.228 LIB libspdk_trace_parser.a 00:03:46.228 SYMLINK libspdk_util.so 00:03:46.228 SO libspdk_trace_parser.so.4.0 00:03:46.487 CC lib/env_dpdk/env.o 00:03:46.487 SYMLINK libspdk_trace_parser.so 00:03:46.487 CC lib/env_dpdk/memory.o 00:03:46.487 CC lib/json/json_parse.o 00:03:46.487 CC lib/json/json_util.o 00:03:46.487 CC lib/idxd/idxd.o 00:03:46.487 CC lib/rdma/common.o 00:03:46.487 CC lib/conf/conf.o 00:03:46.487 CC lib/env_dpdk/pci.o 00:03:46.487 CC lib/rdma/rdma_verbs.o 00:03:46.487 CC lib/vmd/vmd.o 00:03:46.747 CC lib/vmd/led.o 00:03:46.747 LIB libspdk_conf.a 00:03:46.747 CC lib/env_dpdk/init.o 00:03:46.747 CC lib/json/json_write.o 00:03:46.747 SO libspdk_conf.so.5.0 00:03:46.747 LIB libspdk_rdma.a 00:03:46.747 SO libspdk_rdma.so.5.0 00:03:46.747 SYMLINK libspdk_conf.so 00:03:46.747 CC lib/idxd/idxd_user.o 00:03:46.747 SYMLINK libspdk_rdma.so 00:03:46.747 CC lib/idxd/idxd_kernel.o 00:03:46.747 CC lib/env_dpdk/threads.o 00:03:46.747 CC lib/env_dpdk/pci_ioat.o 00:03:47.006 LIB libspdk_json.a 00:03:47.006 CC lib/env_dpdk/pci_virtio.o 00:03:47.006 CC lib/env_dpdk/pci_vmd.o 00:03:47.006 CC lib/env_dpdk/pci_idxd.o 00:03:47.006 CC lib/env_dpdk/pci_event.o 00:03:47.006 SO libspdk_json.so.5.1 00:03:47.006 LIB libspdk_idxd.a 00:03:47.006 SYMLINK libspdk_json.so 00:03:47.006 CC lib/env_dpdk/sigbus_handler.o 00:03:47.006 CC lib/env_dpdk/pci_dpdk.o 00:03:47.006 SO libspdk_idxd.so.11.0 00:03:47.006 CC lib/env_dpdk/pci_dpdk_2207.o 00:03:47.006 CC lib/env_dpdk/pci_dpdk_2211.o 00:03:47.006 SYMLINK libspdk_idxd.so 00:03:47.006 LIB libspdk_vmd.a 00:03:47.006 SO libspdk_vmd.so.5.0 00:03:47.006 CC lib/jsonrpc/jsonrpc_server.o 00:03:47.006 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:03:47.006 CC lib/jsonrpc/jsonrpc_client.o 00:03:47.265 CC 
lib/jsonrpc/jsonrpc_client_tcp.o 00:03:47.265 SYMLINK libspdk_vmd.so 00:03:47.265 LIB libspdk_jsonrpc.a 00:03:47.524 SO libspdk_jsonrpc.so.5.1 00:03:47.524 SYMLINK libspdk_jsonrpc.so 00:03:47.524 LIB libspdk_env_dpdk.a 00:03:47.524 CC lib/rpc/rpc.o 00:03:47.782 SO libspdk_env_dpdk.so.13.0 00:03:47.782 LIB libspdk_rpc.a 00:03:47.782 SYMLINK libspdk_env_dpdk.so 00:03:47.782 SO libspdk_rpc.so.5.0 00:03:47.782 SYMLINK libspdk_rpc.so 00:03:48.040 CC lib/trace/trace.o 00:03:48.040 CC lib/trace/trace_flags.o 00:03:48.040 CC lib/trace/trace_rpc.o 00:03:48.040 CC lib/sock/sock.o 00:03:48.040 CC lib/notify/notify.o 00:03:48.040 CC lib/sock/sock_rpc.o 00:03:48.040 CC lib/notify/notify_rpc.o 00:03:48.299 LIB libspdk_notify.a 00:03:48.299 SO libspdk_notify.so.5.0 00:03:48.299 SYMLINK libspdk_notify.so 00:03:48.299 LIB libspdk_trace.a 00:03:48.299 SO libspdk_trace.so.9.0 00:03:48.299 LIB libspdk_sock.a 00:03:48.558 SYMLINK libspdk_trace.so 00:03:48.558 SO libspdk_sock.so.8.0 00:03:48.558 SYMLINK libspdk_sock.so 00:03:48.558 CC lib/thread/iobuf.o 00:03:48.558 CC lib/thread/thread.o 00:03:48.837 CC lib/nvme/nvme_ctrlr_cmd.o 00:03:48.837 CC lib/nvme/nvme_fabric.o 00:03:48.837 CC lib/nvme/nvme_ctrlr.o 00:03:48.837 CC lib/nvme/nvme_ns_cmd.o 00:03:48.837 CC lib/nvme/nvme_qpair.o 00:03:48.837 CC lib/nvme/nvme_pcie.o 00:03:48.837 CC lib/nvme/nvme_ns.o 00:03:48.837 CC lib/nvme/nvme_pcie_common.o 00:03:48.837 CC lib/nvme/nvme.o 00:03:49.406 CC lib/nvme/nvme_quirks.o 00:03:49.406 CC lib/nvme/nvme_transport.o 00:03:49.406 CC lib/nvme/nvme_discovery.o 00:03:49.665 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:03:49.665 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:03:49.665 CC lib/nvme/nvme_tcp.o 00:03:49.665 CC lib/nvme/nvme_opal.o 00:03:49.665 CC lib/nvme/nvme_io_msg.o 00:03:49.923 CC lib/nvme/nvme_poll_group.o 00:03:49.923 CC lib/nvme/nvme_zns.o 00:03:50.181 CC lib/nvme/nvme_cuse.o 00:03:50.181 CC lib/nvme/nvme_vfio_user.o 00:03:50.181 CC lib/nvme/nvme_rdma.o 00:03:50.181 LIB libspdk_thread.a 00:03:50.181 SO libspdk_thread.so.9.0 00:03:50.181 SYMLINK libspdk_thread.so 00:03:50.440 CC lib/init/json_config.o 00:03:50.440 CC lib/blob/blobstore.o 00:03:50.440 CC lib/accel/accel.o 00:03:50.440 CC lib/accel/accel_rpc.o 00:03:50.698 CC lib/init/subsystem.o 00:03:50.698 CC lib/virtio/virtio.o 00:03:50.698 CC lib/init/subsystem_rpc.o 00:03:50.698 CC lib/init/rpc.o 00:03:50.698 CC lib/blob/request.o 00:03:50.698 CC lib/blob/zeroes.o 00:03:50.957 CC lib/blob/blob_bs_dev.o 00:03:50.957 LIB libspdk_init.a 00:03:50.957 SO libspdk_init.so.4.0 00:03:50.957 CC lib/accel/accel_sw.o 00:03:50.957 CC lib/virtio/virtio_vhost_user.o 00:03:50.957 CC lib/virtio/virtio_vfio_user.o 00:03:50.957 SYMLINK libspdk_init.so 00:03:50.957 CC lib/virtio/virtio_pci.o 00:03:50.957 CC lib/event/app.o 00:03:50.957 CC lib/event/log_rpc.o 00:03:50.957 CC lib/event/reactor.o 00:03:51.215 CC lib/event/app_rpc.o 00:03:51.215 CC lib/event/scheduler_static.o 00:03:51.215 LIB libspdk_virtio.a 00:03:51.215 SO libspdk_virtio.so.6.0 00:03:51.215 SYMLINK libspdk_virtio.so 00:03:51.473 LIB libspdk_nvme.a 00:03:51.473 LIB libspdk_accel.a 00:03:51.473 SO libspdk_accel.so.14.0 00:03:51.473 LIB libspdk_event.a 00:03:51.474 SYMLINK libspdk_accel.so 00:03:51.474 SO libspdk_event.so.12.0 00:03:51.474 SO libspdk_nvme.so.12.0 00:03:51.474 SYMLINK libspdk_event.so 00:03:51.732 CC lib/bdev/bdev.o 00:03:51.732 CC lib/bdev/bdev_rpc.o 00:03:51.732 CC lib/bdev/bdev_zone.o 00:03:51.732 CC lib/bdev/scsi_nvme.o 00:03:51.733 CC lib/bdev/part.o 00:03:51.733 SYMLINK libspdk_nvme.so 00:03:53.108 LIB 
libspdk_blob.a 00:03:53.108 SO libspdk_blob.so.10.1 00:03:53.108 SYMLINK libspdk_blob.so 00:03:53.108 CC lib/blobfs/tree.o 00:03:53.108 CC lib/blobfs/blobfs.o 00:03:53.366 CC lib/lvol/lvol.o 00:03:53.934 LIB libspdk_blobfs.a 00:03:53.934 SO libspdk_blobfs.so.9.0 00:03:53.934 LIB libspdk_bdev.a 00:03:53.934 SO libspdk_bdev.so.14.0 00:03:53.934 SYMLINK libspdk_blobfs.so 00:03:54.194 SYMLINK libspdk_bdev.so 00:03:54.194 LIB libspdk_lvol.a 00:03:54.194 SO libspdk_lvol.so.9.1 00:03:54.194 SYMLINK libspdk_lvol.so 00:03:54.194 CC lib/nvmf/ctrlr_discovery.o 00:03:54.194 CC lib/nvmf/ctrlr.o 00:03:54.194 CC lib/nbd/nbd.o 00:03:54.194 CC lib/nbd/nbd_rpc.o 00:03:54.194 CC lib/nvmf/ctrlr_bdev.o 00:03:54.194 CC lib/nvmf/subsystem.o 00:03:54.194 CC lib/nvmf/nvmf.o 00:03:54.194 CC lib/scsi/dev.o 00:03:54.194 CC lib/ublk/ublk.o 00:03:54.194 CC lib/ftl/ftl_core.o 00:03:54.453 CC lib/scsi/lun.o 00:03:54.453 CC lib/scsi/port.o 00:03:54.453 LIB libspdk_nbd.a 00:03:54.713 SO libspdk_nbd.so.6.0 00:03:54.713 CC lib/scsi/scsi.o 00:03:54.713 SYMLINK libspdk_nbd.so 00:03:54.713 CC lib/ftl/ftl_init.o 00:03:54.713 CC lib/ftl/ftl_layout.o 00:03:54.713 CC lib/ftl/ftl_debug.o 00:03:54.713 CC lib/ftl/ftl_io.o 00:03:54.713 CC lib/scsi/scsi_bdev.o 00:03:54.713 CC lib/ftl/ftl_sb.o 00:03:54.973 CC lib/ublk/ublk_rpc.o 00:03:54.973 CC lib/nvmf/nvmf_rpc.o 00:03:54.973 CC lib/nvmf/transport.o 00:03:54.973 CC lib/nvmf/tcp.o 00:03:54.973 CC lib/ftl/ftl_l2p.o 00:03:54.973 CC lib/ftl/ftl_l2p_flat.o 00:03:54.973 CC lib/scsi/scsi_pr.o 00:03:54.973 LIB libspdk_ublk.a 00:03:54.973 SO libspdk_ublk.so.2.0 00:03:55.233 SYMLINK libspdk_ublk.so 00:03:55.233 CC lib/scsi/scsi_rpc.o 00:03:55.233 CC lib/ftl/ftl_nv_cache.o 00:03:55.233 CC lib/scsi/task.o 00:03:55.233 CC lib/ftl/ftl_band.o 00:03:55.233 CC lib/nvmf/rdma.o 00:03:55.233 CC lib/ftl/ftl_band_ops.o 00:03:55.233 CC lib/ftl/ftl_writer.o 00:03:55.492 LIB libspdk_scsi.a 00:03:55.492 SO libspdk_scsi.so.8.0 00:03:55.492 SYMLINK libspdk_scsi.so 00:03:55.492 CC lib/ftl/ftl_rq.o 00:03:55.492 CC lib/ftl/ftl_reloc.o 00:03:55.492 CC lib/ftl/ftl_l2p_cache.o 00:03:55.492 CC lib/ftl/ftl_p2l.o 00:03:55.492 CC lib/ftl/mngt/ftl_mngt.o 00:03:55.492 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:03:55.751 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:03:55.751 CC lib/ftl/mngt/ftl_mngt_startup.o 00:03:55.751 CC lib/ftl/mngt/ftl_mngt_md.o 00:03:55.751 CC lib/ftl/mngt/ftl_mngt_misc.o 00:03:55.751 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:03:55.751 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:03:56.010 CC lib/ftl/mngt/ftl_mngt_band.o 00:03:56.010 CC lib/iscsi/conn.o 00:03:56.010 CC lib/iscsi/init_grp.o 00:03:56.010 CC lib/iscsi/iscsi.o 00:03:56.010 CC lib/iscsi/md5.o 00:03:56.010 CC lib/iscsi/param.o 00:03:56.010 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:03:56.010 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:03:56.269 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:03:56.269 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:03:56.269 CC lib/iscsi/portal_grp.o 00:03:56.269 CC lib/iscsi/tgt_node.o 00:03:56.269 CC lib/iscsi/iscsi_subsystem.o 00:03:56.269 CC lib/iscsi/iscsi_rpc.o 00:03:56.269 CC lib/iscsi/task.o 00:03:56.528 CC lib/vhost/vhost.o 00:03:56.528 CC lib/ftl/utils/ftl_conf.o 00:03:56.528 CC lib/vhost/vhost_rpc.o 00:03:56.528 CC lib/vhost/vhost_scsi.o 00:03:56.528 CC lib/vhost/vhost_blk.o 00:03:56.528 CC lib/vhost/rte_vhost_user.o 00:03:56.788 CC lib/ftl/utils/ftl_md.o 00:03:56.788 CC lib/ftl/utils/ftl_mempool.o 00:03:56.788 CC lib/ftl/utils/ftl_bitmap.o 00:03:56.788 CC lib/ftl/utils/ftl_property.o 00:03:56.788 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:03:57.048 CC 
lib/ftl/upgrade/ftl_layout_upgrade.o 00:03:57.048 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:03:57.048 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:03:57.048 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:03:57.048 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:03:57.307 CC lib/ftl/upgrade/ftl_sb_v3.o 00:03:57.307 CC lib/ftl/upgrade/ftl_sb_v5.o 00:03:57.307 LIB libspdk_iscsi.a 00:03:57.307 CC lib/ftl/nvc/ftl_nvc_dev.o 00:03:57.307 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:03:57.307 LIB libspdk_nvmf.a 00:03:57.307 CC lib/ftl/base/ftl_base_dev.o 00:03:57.307 SO libspdk_iscsi.so.7.0 00:03:57.307 CC lib/ftl/base/ftl_base_bdev.o 00:03:57.307 CC lib/ftl/ftl_trace.o 00:03:57.567 SO libspdk_nvmf.so.17.0 00:03:57.567 SYMLINK libspdk_iscsi.so 00:03:57.567 LIB libspdk_vhost.a 00:03:57.567 SYMLINK libspdk_nvmf.so 00:03:57.567 LIB libspdk_ftl.a 00:03:57.567 SO libspdk_vhost.so.7.1 00:03:57.826 SYMLINK libspdk_vhost.so 00:03:57.826 SO libspdk_ftl.so.8.0 00:03:58.085 SYMLINK libspdk_ftl.so 00:03:58.344 CC module/env_dpdk/env_dpdk_rpc.o 00:03:58.344 CC module/blob/bdev/blob_bdev.o 00:03:58.344 CC module/scheduler/gscheduler/gscheduler.o 00:03:58.344 CC module/accel/ioat/accel_ioat.o 00:03:58.344 CC module/accel/dsa/accel_dsa.o 00:03:58.344 CC module/scheduler/dynamic/scheduler_dynamic.o 00:03:58.345 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:03:58.345 CC module/sock/posix/posix.o 00:03:58.345 CC module/accel/error/accel_error.o 00:03:58.345 CC module/accel/iaa/accel_iaa.o 00:03:58.604 LIB libspdk_env_dpdk_rpc.a 00:03:58.604 SO libspdk_env_dpdk_rpc.so.5.0 00:03:58.604 CC module/accel/ioat/accel_ioat_rpc.o 00:03:58.604 LIB libspdk_scheduler_gscheduler.a 00:03:58.604 LIB libspdk_scheduler_dynamic.a 00:03:58.604 LIB libspdk_scheduler_dpdk_governor.a 00:03:58.604 SYMLINK libspdk_env_dpdk_rpc.so 00:03:58.604 CC module/accel/iaa/accel_iaa_rpc.o 00:03:58.604 CC module/accel/error/accel_error_rpc.o 00:03:58.604 SO libspdk_scheduler_gscheduler.so.3.0 00:03:58.604 SO libspdk_scheduler_dynamic.so.3.0 00:03:58.604 SO libspdk_scheduler_dpdk_governor.so.3.0 00:03:58.604 CC module/accel/dsa/accel_dsa_rpc.o 00:03:58.604 LIB libspdk_blob_bdev.a 00:03:58.604 SYMLINK libspdk_scheduler_dynamic.so 00:03:58.604 SYMLINK libspdk_scheduler_gscheduler.so 00:03:58.604 SO libspdk_blob_bdev.so.10.1 00:03:58.604 SYMLINK libspdk_scheduler_dpdk_governor.so 00:03:58.604 LIB libspdk_accel_ioat.a 00:03:58.604 SYMLINK libspdk_blob_bdev.so 00:03:58.604 SO libspdk_accel_ioat.so.5.0 00:03:58.864 LIB libspdk_accel_iaa.a 00:03:58.864 LIB libspdk_accel_error.a 00:03:58.864 LIB libspdk_accel_dsa.a 00:03:58.864 SO libspdk_accel_iaa.so.2.0 00:03:58.864 SYMLINK libspdk_accel_ioat.so 00:03:58.864 SO libspdk_accel_error.so.1.0 00:03:58.864 SO libspdk_accel_dsa.so.4.0 00:03:58.864 SYMLINK libspdk_accel_iaa.so 00:03:58.864 SYMLINK libspdk_accel_error.so 00:03:58.864 SYMLINK libspdk_accel_dsa.so 00:03:58.864 CC module/blobfs/bdev/blobfs_bdev.o 00:03:58.864 CC module/bdev/delay/vbdev_delay.o 00:03:58.864 CC module/bdev/lvol/vbdev_lvol.o 00:03:58.864 CC module/bdev/error/vbdev_error.o 00:03:58.864 CC module/bdev/gpt/gpt.o 00:03:58.864 CC module/bdev/malloc/bdev_malloc.o 00:03:58.864 CC module/bdev/null/bdev_null.o 00:03:58.864 CC module/bdev/passthru/vbdev_passthru.o 00:03:58.864 CC module/bdev/nvme/bdev_nvme.o 00:03:59.123 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:03:59.123 LIB libspdk_sock_posix.a 00:03:59.123 CC module/bdev/gpt/vbdev_gpt.o 00:03:59.123 SO libspdk_sock_posix.so.5.0 00:03:59.123 SYMLINK libspdk_sock_posix.so 00:03:59.123 CC 
module/bdev/passthru/vbdev_passthru_rpc.o 00:03:59.123 CC module/bdev/error/vbdev_error_rpc.o 00:03:59.123 CC module/bdev/null/bdev_null_rpc.o 00:03:59.123 CC module/bdev/delay/vbdev_delay_rpc.o 00:03:59.123 LIB libspdk_blobfs_bdev.a 00:03:59.123 CC module/bdev/malloc/bdev_malloc_rpc.o 00:03:59.382 SO libspdk_blobfs_bdev.so.5.0 00:03:59.382 CC module/bdev/raid/bdev_raid.o 00:03:59.382 SYMLINK libspdk_blobfs_bdev.so 00:03:59.382 CC module/bdev/raid/bdev_raid_rpc.o 00:03:59.382 LIB libspdk_bdev_gpt.a 00:03:59.382 LIB libspdk_bdev_passthru.a 00:03:59.382 LIB libspdk_bdev_error.a 00:03:59.382 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:03:59.382 LIB libspdk_bdev_null.a 00:03:59.382 SO libspdk_bdev_gpt.so.5.0 00:03:59.382 SO libspdk_bdev_passthru.so.5.0 00:03:59.382 LIB libspdk_bdev_delay.a 00:03:59.382 SO libspdk_bdev_error.so.5.0 00:03:59.382 SO libspdk_bdev_null.so.5.0 00:03:59.382 LIB libspdk_bdev_malloc.a 00:03:59.382 SO libspdk_bdev_delay.so.5.0 00:03:59.382 SYMLINK libspdk_bdev_gpt.so 00:03:59.382 SYMLINK libspdk_bdev_error.so 00:03:59.382 SYMLINK libspdk_bdev_passthru.so 00:03:59.382 CC module/bdev/nvme/bdev_nvme_rpc.o 00:03:59.382 SYMLINK libspdk_bdev_null.so 00:03:59.382 CC module/bdev/nvme/nvme_rpc.o 00:03:59.382 CC module/bdev/nvme/bdev_mdns_client.o 00:03:59.382 SO libspdk_bdev_malloc.so.5.0 00:03:59.382 SYMLINK libspdk_bdev_delay.so 00:03:59.641 CC module/bdev/nvme/vbdev_opal.o 00:03:59.641 SYMLINK libspdk_bdev_malloc.so 00:03:59.641 CC module/bdev/split/vbdev_split.o 00:03:59.641 LIB libspdk_bdev_lvol.a 00:03:59.641 CC module/bdev/split/vbdev_split_rpc.o 00:03:59.641 SO libspdk_bdev_lvol.so.5.0 00:03:59.641 CC module/bdev/zone_block/vbdev_zone_block.o 00:03:59.641 CC module/bdev/aio/bdev_aio.o 00:03:59.641 CC module/bdev/aio/bdev_aio_rpc.o 00:03:59.900 SYMLINK libspdk_bdev_lvol.so 00:03:59.900 CC module/bdev/nvme/vbdev_opal_rpc.o 00:03:59.900 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:03:59.900 LIB libspdk_bdev_split.a 00:03:59.900 CC module/bdev/ftl/bdev_ftl.o 00:03:59.900 SO libspdk_bdev_split.so.5.0 00:03:59.900 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:03:59.900 SYMLINK libspdk_bdev_split.so 00:03:59.900 CC module/bdev/ftl/bdev_ftl_rpc.o 00:03:59.900 CC module/bdev/raid/bdev_raid_sb.o 00:04:00.159 LIB libspdk_bdev_aio.a 00:04:00.159 CC module/bdev/raid/raid0.o 00:04:00.159 CC module/bdev/iscsi/bdev_iscsi.o 00:04:00.159 LIB libspdk_bdev_zone_block.a 00:04:00.159 SO libspdk_bdev_aio.so.5.0 00:04:00.159 CC module/bdev/virtio/bdev_virtio_scsi.o 00:04:00.159 SO libspdk_bdev_zone_block.so.5.0 00:04:00.159 CC module/bdev/raid/raid1.o 00:04:00.159 SYMLINK libspdk_bdev_aio.so 00:04:00.159 CC module/bdev/raid/concat.o 00:04:00.159 SYMLINK libspdk_bdev_zone_block.so 00:04:00.159 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:04:00.159 CC module/bdev/virtio/bdev_virtio_blk.o 00:04:00.159 LIB libspdk_bdev_ftl.a 00:04:00.159 SO libspdk_bdev_ftl.so.5.0 00:04:00.159 CC module/bdev/virtio/bdev_virtio_rpc.o 00:04:00.159 SYMLINK libspdk_bdev_ftl.so 00:04:00.418 LIB libspdk_bdev_raid.a 00:04:00.418 SO libspdk_bdev_raid.so.5.0 00:04:00.418 LIB libspdk_bdev_iscsi.a 00:04:00.418 SO libspdk_bdev_iscsi.so.5.0 00:04:00.418 SYMLINK libspdk_bdev_raid.so 00:04:00.418 SYMLINK libspdk_bdev_iscsi.so 00:04:00.679 LIB libspdk_bdev_virtio.a 00:04:00.679 SO libspdk_bdev_virtio.so.5.0 00:04:00.679 SYMLINK libspdk_bdev_virtio.so 00:04:00.940 LIB libspdk_bdev_nvme.a 00:04:00.940 SO libspdk_bdev_nvme.so.6.0 00:04:01.199 SYMLINK libspdk_bdev_nvme.so 00:04:01.459 CC module/event/subsystems/iobuf/iobuf.o 00:04:01.459 
CC module/event/subsystems/iobuf/iobuf_rpc.o 00:04:01.459 CC module/event/subsystems/scheduler/scheduler.o 00:04:01.459 CC module/event/subsystems/vmd/vmd.o 00:04:01.459 CC module/event/subsystems/sock/sock.o 00:04:01.459 CC module/event/subsystems/vmd/vmd_rpc.o 00:04:01.459 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:04:01.459 LIB libspdk_event_vmd.a 00:04:01.459 LIB libspdk_event_vhost_blk.a 00:04:01.459 LIB libspdk_event_scheduler.a 00:04:01.459 LIB libspdk_event_iobuf.a 00:04:01.459 LIB libspdk_event_sock.a 00:04:01.719 SO libspdk_event_scheduler.so.3.0 00:04:01.719 SO libspdk_event_vhost_blk.so.2.0 00:04:01.719 SO libspdk_event_vmd.so.5.0 00:04:01.719 SO libspdk_event_iobuf.so.2.0 00:04:01.719 SO libspdk_event_sock.so.4.0 00:04:01.719 SYMLINK libspdk_event_scheduler.so 00:04:01.719 SYMLINK libspdk_event_vhost_blk.so 00:04:01.719 SYMLINK libspdk_event_vmd.so 00:04:01.719 SYMLINK libspdk_event_sock.so 00:04:01.719 SYMLINK libspdk_event_iobuf.so 00:04:01.719 CC module/event/subsystems/accel/accel.o 00:04:01.978 LIB libspdk_event_accel.a 00:04:01.978 SO libspdk_event_accel.so.5.0 00:04:01.978 SYMLINK libspdk_event_accel.so 00:04:02.236 CC module/event/subsystems/bdev/bdev.o 00:04:02.494 LIB libspdk_event_bdev.a 00:04:02.494 SO libspdk_event_bdev.so.5.0 00:04:02.494 SYMLINK libspdk_event_bdev.so 00:04:02.753 CC module/event/subsystems/scsi/scsi.o 00:04:02.753 CC module/event/subsystems/nbd/nbd.o 00:04:02.753 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:04:02.753 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:04:02.753 CC module/event/subsystems/ublk/ublk.o 00:04:02.753 LIB libspdk_event_nbd.a 00:04:03.011 LIB libspdk_event_ublk.a 00:04:03.011 LIB libspdk_event_scsi.a 00:04:03.011 SO libspdk_event_nbd.so.5.0 00:04:03.011 SO libspdk_event_ublk.so.2.0 00:04:03.011 SO libspdk_event_scsi.so.5.0 00:04:03.011 SYMLINK libspdk_event_nbd.so 00:04:03.011 SYMLINK libspdk_event_ublk.so 00:04:03.011 LIB libspdk_event_nvmf.a 00:04:03.011 SYMLINK libspdk_event_scsi.so 00:04:03.011 SO libspdk_event_nvmf.so.5.0 00:04:03.011 SYMLINK libspdk_event_nvmf.so 00:04:03.269 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:04:03.269 CC module/event/subsystems/iscsi/iscsi.o 00:04:03.269 LIB libspdk_event_vhost_scsi.a 00:04:03.269 LIB libspdk_event_iscsi.a 00:04:03.269 SO libspdk_event_vhost_scsi.so.2.0 00:04:03.269 SO libspdk_event_iscsi.so.5.0 00:04:03.528 SYMLINK libspdk_event_vhost_scsi.so 00:04:03.528 SYMLINK libspdk_event_iscsi.so 00:04:03.528 SO libspdk.so.5.0 00:04:03.528 SYMLINK libspdk.so 00:04:03.787 CC app/trace_record/trace_record.o 00:04:03.787 CXX app/trace/trace.o 00:04:03.787 TEST_HEADER include/spdk/accel.h 00:04:03.787 TEST_HEADER include/spdk/accel_module.h 00:04:03.787 TEST_HEADER include/spdk/assert.h 00:04:03.787 TEST_HEADER include/spdk/barrier.h 00:04:03.787 TEST_HEADER include/spdk/base64.h 00:04:03.787 TEST_HEADER include/spdk/bdev.h 00:04:03.787 TEST_HEADER include/spdk/bdev_module.h 00:04:03.787 TEST_HEADER include/spdk/bdev_zone.h 00:04:03.787 TEST_HEADER include/spdk/bit_array.h 00:04:03.787 TEST_HEADER include/spdk/bit_pool.h 00:04:03.787 TEST_HEADER include/spdk/blob_bdev.h 00:04:03.787 TEST_HEADER include/spdk/blobfs_bdev.h 00:04:03.787 TEST_HEADER include/spdk/blobfs.h 00:04:03.787 TEST_HEADER include/spdk/blob.h 00:04:03.787 TEST_HEADER include/spdk/conf.h 00:04:03.787 TEST_HEADER include/spdk/config.h 00:04:03.787 TEST_HEADER include/spdk/cpuset.h 00:04:03.787 TEST_HEADER include/spdk/crc16.h 00:04:03.787 TEST_HEADER include/spdk/crc32.h 00:04:03.787 TEST_HEADER 
include/spdk/crc64.h 00:04:03.787 TEST_HEADER include/spdk/dif.h 00:04:03.787 TEST_HEADER include/spdk/dma.h 00:04:03.787 TEST_HEADER include/spdk/endian.h 00:04:03.787 TEST_HEADER include/spdk/env_dpdk.h 00:04:03.787 TEST_HEADER include/spdk/env.h 00:04:03.787 TEST_HEADER include/spdk/event.h 00:04:03.787 TEST_HEADER include/spdk/fd_group.h 00:04:03.787 TEST_HEADER include/spdk/fd.h 00:04:03.787 CC examples/accel/perf/accel_perf.o 00:04:03.787 TEST_HEADER include/spdk/file.h 00:04:03.787 TEST_HEADER include/spdk/ftl.h 00:04:03.787 TEST_HEADER include/spdk/gpt_spec.h 00:04:03.787 TEST_HEADER include/spdk/hexlify.h 00:04:03.787 TEST_HEADER include/spdk/histogram_data.h 00:04:03.787 TEST_HEADER include/spdk/idxd.h 00:04:03.787 TEST_HEADER include/spdk/idxd_spec.h 00:04:03.787 TEST_HEADER include/spdk/init.h 00:04:03.787 TEST_HEADER include/spdk/ioat.h 00:04:03.787 TEST_HEADER include/spdk/ioat_spec.h 00:04:03.787 TEST_HEADER include/spdk/iscsi_spec.h 00:04:03.787 CC test/accel/dif/dif.o 00:04:03.787 TEST_HEADER include/spdk/json.h 00:04:03.787 CC test/bdev/bdevio/bdevio.o 00:04:03.787 CC test/blobfs/mkfs/mkfs.o 00:04:03.787 TEST_HEADER include/spdk/jsonrpc.h 00:04:03.787 CC examples/bdev/hello_world/hello_bdev.o 00:04:03.787 CC examples/blob/hello_world/hello_blob.o 00:04:03.787 TEST_HEADER include/spdk/likely.h 00:04:03.787 TEST_HEADER include/spdk/log.h 00:04:03.787 TEST_HEADER include/spdk/lvol.h 00:04:03.787 TEST_HEADER include/spdk/memory.h 00:04:03.787 TEST_HEADER include/spdk/mmio.h 00:04:03.787 TEST_HEADER include/spdk/nbd.h 00:04:03.787 TEST_HEADER include/spdk/notify.h 00:04:03.787 TEST_HEADER include/spdk/nvme.h 00:04:03.787 CC test/app/bdev_svc/bdev_svc.o 00:04:03.787 TEST_HEADER include/spdk/nvme_intel.h 00:04:03.787 TEST_HEADER include/spdk/nvme_ocssd.h 00:04:03.787 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:04:04.046 TEST_HEADER include/spdk/nvme_spec.h 00:04:04.046 TEST_HEADER include/spdk/nvme_zns.h 00:04:04.046 TEST_HEADER include/spdk/nvmf_cmd.h 00:04:04.046 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:04:04.046 TEST_HEADER include/spdk/nvmf.h 00:04:04.046 TEST_HEADER include/spdk/nvmf_spec.h 00:04:04.046 TEST_HEADER include/spdk/nvmf_transport.h 00:04:04.046 TEST_HEADER include/spdk/opal.h 00:04:04.046 TEST_HEADER include/spdk/opal_spec.h 00:04:04.046 TEST_HEADER include/spdk/pci_ids.h 00:04:04.046 TEST_HEADER include/spdk/pipe.h 00:04:04.046 TEST_HEADER include/spdk/queue.h 00:04:04.046 TEST_HEADER include/spdk/reduce.h 00:04:04.046 TEST_HEADER include/spdk/rpc.h 00:04:04.046 TEST_HEADER include/spdk/scheduler.h 00:04:04.046 TEST_HEADER include/spdk/scsi.h 00:04:04.046 TEST_HEADER include/spdk/scsi_spec.h 00:04:04.046 TEST_HEADER include/spdk/sock.h 00:04:04.046 TEST_HEADER include/spdk/stdinc.h 00:04:04.047 TEST_HEADER include/spdk/string.h 00:04:04.047 TEST_HEADER include/spdk/thread.h 00:04:04.047 TEST_HEADER include/spdk/trace.h 00:04:04.047 TEST_HEADER include/spdk/trace_parser.h 00:04:04.047 TEST_HEADER include/spdk/tree.h 00:04:04.047 TEST_HEADER include/spdk/ublk.h 00:04:04.047 TEST_HEADER include/spdk/util.h 00:04:04.047 TEST_HEADER include/spdk/uuid.h 00:04:04.047 TEST_HEADER include/spdk/version.h 00:04:04.047 TEST_HEADER include/spdk/vfio_user_pci.h 00:04:04.047 TEST_HEADER include/spdk/vfio_user_spec.h 00:04:04.047 TEST_HEADER include/spdk/vhost.h 00:04:04.047 TEST_HEADER include/spdk/vmd.h 00:04:04.047 TEST_HEADER include/spdk/xor.h 00:04:04.047 TEST_HEADER include/spdk/zipf.h 00:04:04.047 CXX test/cpp_headers/accel.o 00:04:04.047 LINK spdk_trace_record 
00:04:04.047 LINK mkfs 00:04:04.047 LINK bdev_svc 00:04:04.047 LINK hello_blob 00:04:04.047 LINK hello_bdev 00:04:04.047 CXX test/cpp_headers/accel_module.o 00:04:04.306 CXX test/cpp_headers/assert.o 00:04:04.306 LINK spdk_trace 00:04:04.306 LINK dif 00:04:04.306 LINK accel_perf 00:04:04.306 CXX test/cpp_headers/barrier.o 00:04:04.306 LINK bdevio 00:04:04.306 CC app/nvmf_tgt/nvmf_main.o 00:04:04.306 CC examples/blob/cli/blobcli.o 00:04:04.306 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:04:04.306 CC examples/bdev/bdevperf/bdevperf.o 00:04:04.306 CC app/iscsi_tgt/iscsi_tgt.o 00:04:04.565 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:04:04.565 CXX test/cpp_headers/base64.o 00:04:04.565 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:04:04.565 CXX test/cpp_headers/bdev.o 00:04:04.565 LINK nvmf_tgt 00:04:04.565 CC app/spdk_tgt/spdk_tgt.o 00:04:04.565 LINK iscsi_tgt 00:04:04.565 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:04:04.565 CXX test/cpp_headers/bdev_module.o 00:04:04.824 CXX test/cpp_headers/bdev_zone.o 00:04:04.824 LINK spdk_tgt 00:04:04.824 CC test/dma/test_dma/test_dma.o 00:04:04.824 CXX test/cpp_headers/bit_array.o 00:04:04.824 LINK nvme_fuzz 00:04:04.824 LINK blobcli 00:04:05.083 CXX test/cpp_headers/bit_pool.o 00:04:05.083 CC examples/ioat/perf/perf.o 00:04:05.083 CC app/spdk_lspci/spdk_lspci.o 00:04:05.083 CC examples/nvme/hello_world/hello_world.o 00:04:05.083 LINK vhost_fuzz 00:04:05.083 CC examples/sock/hello_world/hello_sock.o 00:04:05.083 CC app/spdk_nvme_perf/perf.o 00:04:05.083 LINK bdevperf 00:04:05.083 CXX test/cpp_headers/blob_bdev.o 00:04:05.083 LINK spdk_lspci 00:04:05.083 LINK test_dma 00:04:05.083 LINK ioat_perf 00:04:05.083 CC app/spdk_nvme_identify/identify.o 00:04:05.342 LINK hello_world 00:04:05.342 CXX test/cpp_headers/blobfs_bdev.o 00:04:05.342 LINK hello_sock 00:04:05.342 CXX test/cpp_headers/blobfs.o 00:04:05.342 CC examples/ioat/verify/verify.o 00:04:05.342 CC examples/vmd/lsvmd/lsvmd.o 00:04:05.342 CC examples/nvme/reconnect/reconnect.o 00:04:05.342 CXX test/cpp_headers/blob.o 00:04:05.601 CC examples/nvme/nvme_manage/nvme_manage.o 00:04:05.601 CC test/env/mem_callbacks/mem_callbacks.o 00:04:05.601 CC examples/nvme/arbitration/arbitration.o 00:04:05.601 LINK lsvmd 00:04:05.601 LINK verify 00:04:05.601 CXX test/cpp_headers/conf.o 00:04:05.860 CC examples/nvme/hotplug/hotplug.o 00:04:05.860 CC examples/vmd/led/led.o 00:04:05.860 CXX test/cpp_headers/config.o 00:04:05.860 LINK reconnect 00:04:05.860 CXX test/cpp_headers/cpuset.o 00:04:05.860 LINK arbitration 00:04:05.860 LINK spdk_nvme_perf 00:04:05.860 LINK led 00:04:05.860 CXX test/cpp_headers/crc16.o 00:04:05.860 LINK spdk_nvme_identify 00:04:05.860 LINK nvme_manage 00:04:05.860 LINK iscsi_fuzz 00:04:05.860 LINK hotplug 00:04:05.860 CXX test/cpp_headers/crc32.o 00:04:06.119 CC examples/nvme/cmb_copy/cmb_copy.o 00:04:06.119 CC examples/nvme/abort/abort.o 00:04:06.119 LINK mem_callbacks 00:04:06.119 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:04:06.119 CXX test/cpp_headers/crc64.o 00:04:06.119 CC app/spdk_nvme_discover/discovery_aer.o 00:04:06.119 CC test/nvme/aer/aer.o 00:04:06.119 CC test/app/histogram_perf/histogram_perf.o 00:04:06.119 CC test/event/event_perf/event_perf.o 00:04:06.119 CC test/env/vtophys/vtophys.o 00:04:06.119 LINK cmb_copy 00:04:06.378 CC test/lvol/esnap/esnap.o 00:04:06.378 LINK pmr_persistence 00:04:06.378 CXX test/cpp_headers/dif.o 00:04:06.378 LINK histogram_perf 00:04:06.378 LINK spdk_nvme_discover 00:04:06.378 LINK event_perf 00:04:06.378 CXX test/cpp_headers/dma.o 
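The CXX test/cpp_headers/*.o entries interleaved here compile one small translation unit per public SPDK header (the TEST_HEADER list earlier in the log), which amounts to a check that every include/spdk/*.h file is self-contained. A rough stand-alone equivalent of that check, assuming each generated source simply includes one header (the loop is an illustration, not the actual test harness):

  # Hypothetical self-containment check for the public headers
  for h in include/spdk/*.h; do
      printf '#include <spdk/%s>\n' "$(basename "$h")" |
          g++ -I include -x c++ -c -o /dev/null -
  done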
00:04:06.378 LINK vtophys 00:04:06.378 LINK abort 00:04:06.378 CXX test/cpp_headers/endian.o 00:04:06.378 LINK aer 00:04:06.378 CC test/app/jsoncat/jsoncat.o 00:04:06.637 CC app/spdk_top/spdk_top.o 00:04:06.637 CC test/rpc_client/rpc_client_test.o 00:04:06.637 CC test/event/reactor/reactor.o 00:04:06.637 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:04:06.637 CXX test/cpp_headers/env_dpdk.o 00:04:06.637 LINK jsoncat 00:04:06.637 CC test/thread/poller_perf/poller_perf.o 00:04:06.637 CC test/nvme/reset/reset.o 00:04:06.637 CC examples/nvmf/nvmf/nvmf.o 00:04:06.637 LINK reactor 00:04:06.637 LINK rpc_client_test 00:04:06.637 CXX test/cpp_headers/env.o 00:04:06.637 LINK env_dpdk_post_init 00:04:06.896 LINK poller_perf 00:04:06.896 CC test/app/stub/stub.o 00:04:06.896 CXX test/cpp_headers/event.o 00:04:06.896 CC test/event/reactor_perf/reactor_perf.o 00:04:06.896 CC test/env/memory/memory_ut.o 00:04:06.896 LINK reset 00:04:06.896 CC test/nvme/sgl/sgl.o 00:04:06.896 LINK nvmf 00:04:06.896 CC test/nvme/e2edp/nvme_dp.o 00:04:06.896 CXX test/cpp_headers/fd_group.o 00:04:06.896 LINK stub 00:04:06.896 LINK reactor_perf 00:04:07.155 CXX test/cpp_headers/fd.o 00:04:07.155 CXX test/cpp_headers/file.o 00:04:07.155 CC test/nvme/overhead/overhead.o 00:04:07.155 LINK sgl 00:04:07.155 CC test/event/app_repeat/app_repeat.o 00:04:07.155 CXX test/cpp_headers/ftl.o 00:04:07.155 LINK nvme_dp 00:04:07.155 CC examples/util/zipf/zipf.o 00:04:07.155 CXX test/cpp_headers/gpt_spec.o 00:04:07.415 LINK spdk_top 00:04:07.415 LINK app_repeat 00:04:07.415 LINK zipf 00:04:07.415 CC test/nvme/err_injection/err_injection.o 00:04:07.415 CXX test/cpp_headers/hexlify.o 00:04:07.415 LINK overhead 00:04:07.415 CC examples/thread/thread/thread_ex.o 00:04:07.415 CC examples/idxd/perf/perf.o 00:04:07.674 CC app/vhost/vhost.o 00:04:07.674 CXX test/cpp_headers/histogram_data.o 00:04:07.674 LINK err_injection 00:04:07.674 CC test/event/scheduler/scheduler.o 00:04:07.674 CXX test/cpp_headers/idxd.o 00:04:07.674 LINK vhost 00:04:07.674 CXX test/cpp_headers/idxd_spec.o 00:04:07.674 LINK memory_ut 00:04:07.674 LINK thread 00:04:07.674 CC test/nvme/startup/startup.o 00:04:07.674 CC test/nvme/reserve/reserve.o 00:04:07.674 LINK scheduler 00:04:07.933 LINK idxd_perf 00:04:07.933 CXX test/cpp_headers/init.o 00:04:07.933 LINK startup 00:04:07.933 CXX test/cpp_headers/ioat.o 00:04:07.933 CC test/env/pci/pci_ut.o 00:04:07.933 LINK reserve 00:04:07.933 CC app/spdk_dd/spdk_dd.o 00:04:07.933 CC examples/interrupt_tgt/interrupt_tgt.o 00:04:08.192 CC app/fio/nvme/fio_plugin.o 00:04:08.192 CXX test/cpp_headers/ioat_spec.o 00:04:08.192 CC test/nvme/simple_copy/simple_copy.o 00:04:08.192 CXX test/cpp_headers/iscsi_spec.o 00:04:08.192 CC app/fio/bdev/fio_plugin.o 00:04:08.192 LINK interrupt_tgt 00:04:08.192 CXX test/cpp_headers/json.o 00:04:08.192 CC test/nvme/connect_stress/connect_stress.o 00:04:08.451 LINK pci_ut 00:04:08.451 LINK spdk_dd 00:04:08.451 CXX test/cpp_headers/jsonrpc.o 00:04:08.451 LINK simple_copy 00:04:08.451 LINK connect_stress 00:04:08.451 CXX test/cpp_headers/likely.o 00:04:08.709 CC test/nvme/boot_partition/boot_partition.o 00:04:08.709 CXX test/cpp_headers/log.o 00:04:08.709 LINK spdk_nvme 00:04:08.709 CC test/nvme/fused_ordering/fused_ordering.o 00:04:08.710 CC test/nvme/compliance/nvme_compliance.o 00:04:08.710 LINK spdk_bdev 00:04:08.710 CXX test/cpp_headers/lvol.o 00:04:08.710 CXX test/cpp_headers/memory.o 00:04:08.710 CC test/nvme/doorbell_aers/doorbell_aers.o 00:04:08.967 CXX test/cpp_headers/mmio.o 00:04:08.967 LINK 
boot_partition 00:04:08.967 LINK fused_ordering 00:04:08.967 CC test/nvme/fdp/fdp.o 00:04:08.967 CXX test/cpp_headers/nbd.o 00:04:08.967 CXX test/cpp_headers/notify.o 00:04:08.967 CXX test/cpp_headers/nvme.o 00:04:08.967 LINK nvme_compliance 00:04:08.967 CXX test/cpp_headers/nvme_intel.o 00:04:08.967 LINK doorbell_aers 00:04:08.967 CC test/nvme/cuse/cuse.o 00:04:08.967 CXX test/cpp_headers/nvme_ocssd.o 00:04:09.227 CXX test/cpp_headers/nvme_ocssd_spec.o 00:04:09.227 CXX test/cpp_headers/nvme_spec.o 00:04:09.227 CXX test/cpp_headers/nvme_zns.o 00:04:09.227 CXX test/cpp_headers/nvmf_cmd.o 00:04:09.227 CXX test/cpp_headers/nvmf_fc_spec.o 00:04:09.227 LINK fdp 00:04:09.227 CXX test/cpp_headers/nvmf.o 00:04:09.227 CXX test/cpp_headers/nvmf_spec.o 00:04:09.227 CXX test/cpp_headers/nvmf_transport.o 00:04:09.227 CXX test/cpp_headers/opal.o 00:04:09.227 CXX test/cpp_headers/opal_spec.o 00:04:09.227 CXX test/cpp_headers/pci_ids.o 00:04:09.485 CXX test/cpp_headers/pipe.o 00:04:09.485 CXX test/cpp_headers/queue.o 00:04:09.486 CXX test/cpp_headers/reduce.o 00:04:09.486 CXX test/cpp_headers/rpc.o 00:04:09.486 CXX test/cpp_headers/scheduler.o 00:04:09.486 CXX test/cpp_headers/scsi.o 00:04:09.486 CXX test/cpp_headers/scsi_spec.o 00:04:09.486 CXX test/cpp_headers/sock.o 00:04:09.486 CXX test/cpp_headers/stdinc.o 00:04:09.486 CXX test/cpp_headers/string.o 00:04:09.744 CXX test/cpp_headers/thread.o 00:04:09.744 CXX test/cpp_headers/trace.o 00:04:09.744 CXX test/cpp_headers/trace_parser.o 00:04:09.744 CXX test/cpp_headers/tree.o 00:04:09.744 CXX test/cpp_headers/ublk.o 00:04:09.744 CXX test/cpp_headers/util.o 00:04:09.744 CXX test/cpp_headers/uuid.o 00:04:09.744 CXX test/cpp_headers/version.o 00:04:09.744 CXX test/cpp_headers/vfio_user_pci.o 00:04:09.744 CXX test/cpp_headers/vfio_user_spec.o 00:04:09.744 CXX test/cpp_headers/vhost.o 00:04:09.744 CXX test/cpp_headers/vmd.o 00:04:09.744 CXX test/cpp_headers/xor.o 00:04:09.744 CXX test/cpp_headers/zipf.o 00:04:10.015 LINK cuse 00:04:10.624 LINK esnap 00:04:13.910 00:04:13.910 real 0m50.137s 00:04:13.910 user 4m37.699s 00:04:13.910 sys 1m4.556s 00:04:13.910 00:38:25 -- common/autotest_common.sh@1115 -- $ xtrace_disable 00:04:13.910 ************************************ 00:04:13.910 END TEST make 00:04:13.910 00:38:25 -- common/autotest_common.sh@10 -- $ set +x 00:04:13.910 ************************************ 00:04:13.910 00:38:26 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:04:13.910 00:38:26 -- common/autotest_common.sh@1690 -- # lcov --version 00:04:13.910 00:38:26 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:04:13.910 00:38:26 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:04:13.910 00:38:26 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:04:13.910 00:38:26 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:04:13.910 00:38:26 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:04:13.910 00:38:26 -- scripts/common.sh@335 -- # IFS=.-: 00:04:13.910 00:38:26 -- scripts/common.sh@335 -- # read -ra ver1 00:04:13.910 00:38:26 -- scripts/common.sh@336 -- # IFS=.-: 00:04:13.910 00:38:26 -- scripts/common.sh@336 -- # read -ra ver2 00:04:13.910 00:38:26 -- scripts/common.sh@337 -- # local 'op=<' 00:04:13.910 00:38:26 -- scripts/common.sh@339 -- # ver1_l=2 00:04:13.910 00:38:26 -- scripts/common.sh@340 -- # ver2_l=1 00:04:13.910 00:38:26 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:04:13.910 00:38:26 -- scripts/common.sh@343 -- # case "$op" in 00:04:13.910 00:38:26 -- scripts/common.sh@344 -- # : 1 00:04:13.910 00:38:26 -- 
scripts/common.sh@363 -- # (( v = 0 )) 00:04:13.910 00:38:26 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:13.910 00:38:26 -- scripts/common.sh@364 -- # decimal 1 00:04:13.910 00:38:26 -- scripts/common.sh@352 -- # local d=1 00:04:13.910 00:38:26 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:13.910 00:38:26 -- scripts/common.sh@354 -- # echo 1 00:04:13.910 00:38:26 -- scripts/common.sh@364 -- # ver1[v]=1 00:04:13.910 00:38:26 -- scripts/common.sh@365 -- # decimal 2 00:04:13.910 00:38:26 -- scripts/common.sh@352 -- # local d=2 00:04:13.910 00:38:26 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:13.910 00:38:26 -- scripts/common.sh@354 -- # echo 2 00:04:13.910 00:38:26 -- scripts/common.sh@365 -- # ver2[v]=2 00:04:13.910 00:38:26 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:04:13.910 00:38:26 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:04:13.910 00:38:26 -- scripts/common.sh@367 -- # return 0 00:04:13.910 00:38:26 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:13.910 00:38:26 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:04:13.910 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:13.910 --rc genhtml_branch_coverage=1 00:04:13.910 --rc genhtml_function_coverage=1 00:04:13.910 --rc genhtml_legend=1 00:04:13.910 --rc geninfo_all_blocks=1 00:04:13.910 --rc geninfo_unexecuted_blocks=1 00:04:13.910 00:04:13.910 ' 00:04:13.910 00:38:26 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:04:13.910 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:13.910 --rc genhtml_branch_coverage=1 00:04:13.910 --rc genhtml_function_coverage=1 00:04:13.910 --rc genhtml_legend=1 00:04:13.910 --rc geninfo_all_blocks=1 00:04:13.910 --rc geninfo_unexecuted_blocks=1 00:04:13.910 00:04:13.910 ' 00:04:13.911 00:38:26 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:04:13.911 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:13.911 --rc genhtml_branch_coverage=1 00:04:13.911 --rc genhtml_function_coverage=1 00:04:13.911 --rc genhtml_legend=1 00:04:13.911 --rc geninfo_all_blocks=1 00:04:13.911 --rc geninfo_unexecuted_blocks=1 00:04:13.911 00:04:13.911 ' 00:04:13.911 00:38:26 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:04:13.911 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:13.911 --rc genhtml_branch_coverage=1 00:04:13.911 --rc genhtml_function_coverage=1 00:04:13.911 --rc genhtml_legend=1 00:04:13.911 --rc geninfo_all_blocks=1 00:04:13.911 --rc geninfo_unexecuted_blocks=1 00:04:13.911 00:04:13.911 ' 00:04:13.911 00:38:26 -- spdk/autotest.sh@25 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:04:13.911 00:38:26 -- nvmf/common.sh@7 -- # uname -s 00:04:13.911 00:38:26 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:13.911 00:38:26 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:13.911 00:38:26 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:13.911 00:38:26 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:13.911 00:38:26 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:13.911 00:38:26 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:13.911 00:38:26 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:04:13.911 00:38:26 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:13.911 00:38:26 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:13.911 00:38:26 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:13.911 00:38:26 -- 
nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:15939434-fa82-47b6-ae7c-b62ba203cee8 00:04:13.911 00:38:26 -- nvmf/common.sh@18 -- # NVME_HOSTID=15939434-fa82-47b6-ae7c-b62ba203cee8 00:04:13.911 00:38:26 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:13.911 00:38:26 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:13.911 00:38:26 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:04:13.911 00:38:26 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:04:13.911 00:38:26 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:13.911 00:38:26 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:13.911 00:38:26 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:13.911 00:38:26 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:13.911 00:38:26 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:13.911 00:38:26 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:13.911 00:38:26 -- paths/export.sh@5 -- # export PATH 00:04:13.911 00:38:26 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:13.911 00:38:26 -- nvmf/common.sh@46 -- # : 0 00:04:13.911 00:38:26 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:04:13.911 00:38:26 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:04:13.911 00:38:26 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:04:13.911 00:38:26 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:13.911 00:38:26 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:04:13.911 00:38:26 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:04:13.911 00:38:26 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:04:13.911 00:38:26 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:04:13.911 00:38:26 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:04:13.911 00:38:26 -- spdk/autotest.sh@32 -- # uname -s 00:04:13.911 00:38:26 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:04:13.911 00:38:26 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:04:13.911 00:38:26 -- spdk/autotest.sh@34 -- # mkdir -p /home/vagrant/spdk_repo/spdk/../output/coredumps 00:04:13.911 00:38:26 -- spdk/autotest.sh@39 -- # echo '|/home/vagrant/spdk_repo/spdk/scripts/core-collector.sh %P %s %t' 00:04:13.911 00:38:26 -- spdk/autotest.sh@40 -- # echo /home/vagrant/spdk_repo/spdk/../output/coredumps 00:04:13.911 00:38:26 -- spdk/autotest.sh@44 -- # modprobe nbd 00:04:13.911 00:38:26 -- spdk/autotest.sh@46 -- # type -P udevadm 
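The autotest prologue traced above saves the host's existing core_pattern and points kernel core dumps at SPDK's collector script before loading nbd. A condensed sketch of that step follows; the redirect targets are assumptions, since the trace only shows the echoed values, not the files they are written to:
# assumed layout mirroring the workspace paths in the trace
rootdir=/home/vagrant/spdk_repo/spdk
output_dir=$rootdir/../output
old_core_pattern=$(< /proc/sys/kernel/core_pattern)   # keep the systemd-coredump pattern so it can be restored
mkdir -p "$output_dir/coredumps"
echo "|$rootdir/scripts/core-collector.sh %P %s %t" > /proc/sys/kernel/core_pattern   # assumed destination
modprobe nbd                                          # network block device module used by later tests
udevadm=$(type -P udevadm)                            # resolved to /usr/sbin/udevadm in this run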
00:04:13.911 00:38:26 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:04:13.911 00:38:26 -- spdk/autotest.sh@48 -- # udevadm_pid=61807 00:04:13.911 00:38:26 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:04:13.911 00:38:26 -- spdk/autotest.sh@51 -- # mkdir -p /home/vagrant/spdk_repo/spdk/../output/power 00:04:13.911 00:38:26 -- spdk/autotest.sh@54 -- # echo 61814 00:04:13.911 00:38:26 -- spdk/autotest.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power 00:04:13.911 00:38:26 -- spdk/autotest.sh@56 -- # echo 61817 00:04:13.911 00:38:26 -- spdk/autotest.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power 00:04:13.911 00:38:26 -- spdk/autotest.sh@58 -- # [[ QEMU != QEMU ]] 00:04:13.911 00:38:26 -- spdk/autotest.sh@66 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:04:13.911 00:38:26 -- spdk/autotest.sh@68 -- # timing_enter autotest 00:04:13.911 00:38:26 -- common/autotest_common.sh@722 -- # xtrace_disable 00:04:13.911 00:38:26 -- common/autotest_common.sh@10 -- # set +x 00:04:13.911 00:38:26 -- spdk/autotest.sh@70 -- # create_test_list 00:04:13.911 00:38:26 -- common/autotest_common.sh@746 -- # xtrace_disable 00:04:13.911 00:38:26 -- common/autotest_common.sh@10 -- # set +x 00:04:13.911 00:38:26 -- spdk/autotest.sh@72 -- # dirname /home/vagrant/spdk_repo/spdk/autotest.sh 00:04:13.911 00:38:26 -- spdk/autotest.sh@72 -- # readlink -f /home/vagrant/spdk_repo/spdk 00:04:13.911 00:38:26 -- spdk/autotest.sh@72 -- # src=/home/vagrant/spdk_repo/spdk 00:04:13.911 00:38:26 -- spdk/autotest.sh@73 -- # out=/home/vagrant/spdk_repo/spdk/../output 00:04:13.911 00:38:26 -- spdk/autotest.sh@74 -- # cd /home/vagrant/spdk_repo/spdk 00:04:13.911 00:38:26 -- spdk/autotest.sh@76 -- # freebsd_update_contigmem_mod 00:04:13.911 00:38:26 -- common/autotest_common.sh@1450 -- # uname 00:04:13.911 00:38:26 -- common/autotest_common.sh@1450 -- # '[' Linux = FreeBSD ']' 00:04:13.911 00:38:26 -- spdk/autotest.sh@77 -- # freebsd_set_maxsock_buf 00:04:13.911 00:38:26 -- common/autotest_common.sh@1470 -- # uname 00:04:13.911 00:38:26 -- common/autotest_common.sh@1470 -- # [[ Linux = FreeBSD ]] 00:04:13.911 00:38:26 -- spdk/autotest.sh@79 -- # [[ y == y ]] 00:04:13.911 00:38:26 -- spdk/autotest.sh@81 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --version 00:04:13.911 lcov: LCOV version 1.15 00:04:13.911 00:38:26 -- spdk/autotest.sh@83 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -i -t Baseline -d /home/vagrant/spdk_repo/spdk -o /home/vagrant/spdk_repo/spdk/../output/cov_base.info 00:04:22.026 /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_p2l_upgrade.gcno:no functions found 00:04:22.026 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_p2l_upgrade.gcno 00:04:22.026 /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_band_upgrade.gcno:no functions found 00:04:22.026 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_band_upgrade.gcno 00:04:22.026 
/home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_chunk_upgrade.gcno:no functions found 00:04:22.026 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_chunk_upgrade.gcno 00:04:40.136 00:38:50 -- spdk/autotest.sh@87 -- # timing_enter pre_cleanup 00:04:40.136 00:38:50 -- common/autotest_common.sh@722 -- # xtrace_disable 00:04:40.136 00:38:50 -- common/autotest_common.sh@10 -- # set +x 00:04:40.136 00:38:50 -- spdk/autotest.sh@89 -- # rm -f 00:04:40.136 00:38:50 -- spdk/autotest.sh@92 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:04:40.136 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:40.136 0000:00:06.0 (1b36 0010): Already using the nvme driver 00:04:40.136 0000:00:07.0 (1b36 0010): Already using the nvme driver 00:04:40.136 00:38:51 -- spdk/autotest.sh@94 -- # get_zoned_devs 00:04:40.136 00:38:51 -- common/autotest_common.sh@1664 -- # zoned_devs=() 00:04:40.136 00:38:51 -- common/autotest_common.sh@1664 -- # local -gA zoned_devs 00:04:40.136 00:38:51 -- common/autotest_common.sh@1665 -- # local nvme bdf 00:04:40.136 00:38:51 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:04:40.136 00:38:51 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme0n1 00:04:40.136 00:38:51 -- common/autotest_common.sh@1657 -- # local device=nvme0n1 00:04:40.136 00:38:51 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:04:40.136 00:38:51 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:04:40.136 00:38:51 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:04:40.136 00:38:51 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme1n1 00:04:40.137 00:38:51 -- common/autotest_common.sh@1657 -- # local device=nvme1n1 00:04:40.137 00:38:51 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:04:40.137 00:38:51 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:04:40.137 00:38:51 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:04:40.137 00:38:51 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme1n2 00:04:40.137 00:38:51 -- common/autotest_common.sh@1657 -- # local device=nvme1n2 00:04:40.137 00:38:51 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme1n2/queue/zoned ]] 00:04:40.137 00:38:51 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:04:40.137 00:38:51 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:04:40.137 00:38:51 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme1n3 00:04:40.137 00:38:51 -- common/autotest_common.sh@1657 -- # local device=nvme1n3 00:04:40.137 00:38:51 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme1n3/queue/zoned ]] 00:04:40.137 00:38:51 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:04:40.137 00:38:51 -- spdk/autotest.sh@96 -- # (( 0 > 0 )) 00:04:40.137 00:38:51 -- spdk/autotest.sh@108 -- # ls /dev/nvme0n1 /dev/nvme1n1 /dev/nvme1n2 /dev/nvme1n3 00:04:40.137 00:38:51 -- spdk/autotest.sh@108 -- # grep -v p 00:04:40.137 00:38:51 -- spdk/autotest.sh@108 -- # for dev in $(ls /dev/nvme*n* | grep -v p || true) 00:04:40.137 00:38:51 -- spdk/autotest.sh@110 -- # [[ -z '' ]] 00:04:40.137 00:38:51 -- spdk/autotest.sh@111 -- # block_in_use /dev/nvme0n1 00:04:40.137 00:38:51 -- scripts/common.sh@380 -- # local block=/dev/nvme0n1 pt 00:04:40.137 00:38:51 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py 
/dev/nvme0n1 00:04:40.137 No valid GPT data, bailing 00:04:40.137 00:38:51 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:04:40.137 00:38:51 -- scripts/common.sh@393 -- # pt= 00:04:40.137 00:38:51 -- scripts/common.sh@394 -- # return 1 00:04:40.137 00:38:51 -- spdk/autotest.sh@112 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:04:40.137 1+0 records in 00:04:40.137 1+0 records out 00:04:40.137 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00519691 s, 202 MB/s 00:04:40.137 00:38:51 -- spdk/autotest.sh@108 -- # for dev in $(ls /dev/nvme*n* | grep -v p || true) 00:04:40.137 00:38:51 -- spdk/autotest.sh@110 -- # [[ -z '' ]] 00:04:40.137 00:38:51 -- spdk/autotest.sh@111 -- # block_in_use /dev/nvme1n1 00:04:40.137 00:38:51 -- scripts/common.sh@380 -- # local block=/dev/nvme1n1 pt 00:04:40.137 00:38:51 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n1 00:04:40.137 No valid GPT data, bailing 00:04:40.137 00:38:51 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:04:40.137 00:38:51 -- scripts/common.sh@393 -- # pt= 00:04:40.137 00:38:51 -- scripts/common.sh@394 -- # return 1 00:04:40.137 00:38:51 -- spdk/autotest.sh@112 -- # dd if=/dev/zero of=/dev/nvme1n1 bs=1M count=1 00:04:40.137 1+0 records in 00:04:40.137 1+0 records out 00:04:40.137 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00479284 s, 219 MB/s 00:04:40.137 00:38:51 -- spdk/autotest.sh@108 -- # for dev in $(ls /dev/nvme*n* | grep -v p || true) 00:04:40.137 00:38:51 -- spdk/autotest.sh@110 -- # [[ -z '' ]] 00:04:40.137 00:38:51 -- spdk/autotest.sh@111 -- # block_in_use /dev/nvme1n2 00:04:40.137 00:38:51 -- scripts/common.sh@380 -- # local block=/dev/nvme1n2 pt 00:04:40.137 00:38:51 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n2 00:04:40.137 No valid GPT data, bailing 00:04:40.137 00:38:51 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme1n2 00:04:40.137 00:38:51 -- scripts/common.sh@393 -- # pt= 00:04:40.137 00:38:51 -- scripts/common.sh@394 -- # return 1 00:04:40.137 00:38:51 -- spdk/autotest.sh@112 -- # dd if=/dev/zero of=/dev/nvme1n2 bs=1M count=1 00:04:40.137 1+0 records in 00:04:40.137 1+0 records out 00:04:40.137 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00487768 s, 215 MB/s 00:04:40.137 00:38:51 -- spdk/autotest.sh@108 -- # for dev in $(ls /dev/nvme*n* | grep -v p || true) 00:04:40.137 00:38:51 -- spdk/autotest.sh@110 -- # [[ -z '' ]] 00:04:40.137 00:38:51 -- spdk/autotest.sh@111 -- # block_in_use /dev/nvme1n3 00:04:40.137 00:38:51 -- scripts/common.sh@380 -- # local block=/dev/nvme1n3 pt 00:04:40.137 00:38:51 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n3 00:04:40.137 No valid GPT data, bailing 00:04:40.137 00:38:51 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme1n3 00:04:40.137 00:38:51 -- scripts/common.sh@393 -- # pt= 00:04:40.137 00:38:51 -- scripts/common.sh@394 -- # return 1 00:04:40.137 00:38:51 -- spdk/autotest.sh@112 -- # dd if=/dev/zero of=/dev/nvme1n3 bs=1M count=1 00:04:40.137 1+0 records in 00:04:40.137 1+0 records out 00:04:40.137 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00425886 s, 246 MB/s 00:04:40.137 00:38:51 -- spdk/autotest.sh@116 -- # sync 00:04:40.137 00:38:51 -- spdk/autotest.sh@118 -- # xtrace_disable_per_cmd reap_spdk_processes 00:04:40.137 00:38:51 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:04:40.137 00:38:51 -- common/autotest_common.sh@22 -- # reap_spdk_processes 
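Each of the four dd runs above comes from the same pre-cleanup loop: every unpartitioned NVMe namespace that spdk-gpt.py and blkid report as unused gets its first 1 MiB zeroed before the tests start. A minimal sketch of that loop, reusing the block_in_use helper named in the trace (defined in scripts/common.sh):
for dev in $(ls /dev/nvme*n* | grep -v p || true); do   # skip partition nodes (the 'p' devices)
    block_in_use "$dev" && continue                     # GPT/blkid check shown in the trace
    dd if=/dev/zero of="$dev" bs=1M count=1             # wipe any stale metadata at the start of the namespace
done
sync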
00:04:41.516 00:38:53 -- spdk/autotest.sh@122 -- # uname -s 00:04:41.516 00:38:53 -- spdk/autotest.sh@122 -- # '[' Linux = Linux ']' 00:04:41.516 00:38:53 -- spdk/autotest.sh@123 -- # run_test setup.sh /home/vagrant/spdk_repo/spdk/test/setup/test-setup.sh 00:04:41.516 00:38:53 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:41.516 00:38:53 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:41.516 00:38:53 -- common/autotest_common.sh@10 -- # set +x 00:04:41.516 ************************************ 00:04:41.516 START TEST setup.sh 00:04:41.516 ************************************ 00:04:41.516 00:38:53 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/setup/test-setup.sh 00:04:41.516 * Looking for test storage... 00:04:41.516 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:04:41.516 00:38:53 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:04:41.516 00:38:53 -- common/autotest_common.sh@1690 -- # lcov --version 00:04:41.516 00:38:53 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:04:41.516 00:38:53 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:04:41.516 00:38:53 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:04:41.516 00:38:53 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:04:41.516 00:38:53 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:04:41.516 00:38:53 -- scripts/common.sh@335 -- # IFS=.-: 00:04:41.516 00:38:53 -- scripts/common.sh@335 -- # read -ra ver1 00:04:41.516 00:38:53 -- scripts/common.sh@336 -- # IFS=.-: 00:04:41.516 00:38:53 -- scripts/common.sh@336 -- # read -ra ver2 00:04:41.516 00:38:53 -- scripts/common.sh@337 -- # local 'op=<' 00:04:41.516 00:38:53 -- scripts/common.sh@339 -- # ver1_l=2 00:04:41.516 00:38:53 -- scripts/common.sh@340 -- # ver2_l=1 00:04:41.516 00:38:53 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:04:41.516 00:38:53 -- scripts/common.sh@343 -- # case "$op" in 00:04:41.516 00:38:53 -- scripts/common.sh@344 -- # : 1 00:04:41.516 00:38:53 -- scripts/common.sh@363 -- # (( v = 0 )) 00:04:41.516 00:38:53 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:41.516 00:38:53 -- scripts/common.sh@364 -- # decimal 1 00:04:41.516 00:38:53 -- scripts/common.sh@352 -- # local d=1 00:04:41.516 00:38:53 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:41.516 00:38:53 -- scripts/common.sh@354 -- # echo 1 00:04:41.516 00:38:53 -- scripts/common.sh@364 -- # ver1[v]=1 00:04:41.516 00:38:53 -- scripts/common.sh@365 -- # decimal 2 00:04:41.516 00:38:53 -- scripts/common.sh@352 -- # local d=2 00:04:41.516 00:38:53 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:41.516 00:38:53 -- scripts/common.sh@354 -- # echo 2 00:04:41.516 00:38:53 -- scripts/common.sh@365 -- # ver2[v]=2 00:04:41.516 00:38:53 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:04:41.516 00:38:53 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:04:41.516 00:38:53 -- scripts/common.sh@367 -- # return 0 00:04:41.516 00:38:53 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:41.516 00:38:53 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:04:41.516 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:41.516 --rc genhtml_branch_coverage=1 00:04:41.516 --rc genhtml_function_coverage=1 00:04:41.516 --rc genhtml_legend=1 00:04:41.516 --rc geninfo_all_blocks=1 00:04:41.516 --rc geninfo_unexecuted_blocks=1 00:04:41.516 00:04:41.516 ' 00:04:41.516 00:38:53 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:04:41.516 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:41.516 --rc genhtml_branch_coverage=1 00:04:41.516 --rc genhtml_function_coverage=1 00:04:41.516 --rc genhtml_legend=1 00:04:41.516 --rc geninfo_all_blocks=1 00:04:41.516 --rc geninfo_unexecuted_blocks=1 00:04:41.516 00:04:41.516 ' 00:04:41.516 00:38:53 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:04:41.516 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:41.516 --rc genhtml_branch_coverage=1 00:04:41.516 --rc genhtml_function_coverage=1 00:04:41.516 --rc genhtml_legend=1 00:04:41.516 --rc geninfo_all_blocks=1 00:04:41.516 --rc geninfo_unexecuted_blocks=1 00:04:41.516 00:04:41.516 ' 00:04:41.516 00:38:53 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:04:41.516 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:41.516 --rc genhtml_branch_coverage=1 00:04:41.516 --rc genhtml_function_coverage=1 00:04:41.516 --rc genhtml_legend=1 00:04:41.516 --rc geninfo_all_blocks=1 00:04:41.516 --rc geninfo_unexecuted_blocks=1 00:04:41.516 00:04:41.516 ' 00:04:41.516 00:38:53 -- setup/test-setup.sh@10 -- # uname -s 00:04:41.516 00:38:53 -- setup/test-setup.sh@10 -- # [[ Linux == Linux ]] 00:04:41.516 00:38:53 -- setup/test-setup.sh@12 -- # run_test acl /home/vagrant/spdk_repo/spdk/test/setup/acl.sh 00:04:41.516 00:38:53 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:41.516 00:38:53 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:41.516 00:38:53 -- common/autotest_common.sh@10 -- # set +x 00:04:41.516 ************************************ 00:04:41.516 START TEST acl 00:04:41.516 ************************************ 00:04:41.516 00:38:54 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/setup/acl.sh 00:04:41.776 * Looking for test storage... 
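The lt/cmp_versions trace repeated above is how the harness decides whether the installed lcov predates 2.x and therefore needs the 1.x-style --rc option names. A standalone sketch of that comparison, assuming purely numeric version fields and treating missing fields as 0:
version_lt() {
    local -a ver1 ver2
    IFS=.-: read -ra ver1 <<< "$1"                      # split on '.', '-', ':' as in the trace
    IFS=.-: read -ra ver2 <<< "$2"
    local v max=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
    for (( v = 0; v < max; v++ )); do
        (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0
        (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1
    done
    return 1                                            # equal versions are not 'less than'
}
version_lt 1.15 2 && echo "lcov < 2: keep the lcov_branch_coverage/lcov_function_coverage option names"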
00:04:41.776 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:04:41.776 00:38:54 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:04:41.776 00:38:54 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:04:41.776 00:38:54 -- common/autotest_common.sh@1690 -- # lcov --version 00:04:41.776 00:38:54 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:04:41.776 00:38:54 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:04:41.776 00:38:54 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:04:41.776 00:38:54 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:04:41.776 00:38:54 -- scripts/common.sh@335 -- # IFS=.-: 00:04:41.776 00:38:54 -- scripts/common.sh@335 -- # read -ra ver1 00:04:41.776 00:38:54 -- scripts/common.sh@336 -- # IFS=.-: 00:04:41.776 00:38:54 -- scripts/common.sh@336 -- # read -ra ver2 00:04:41.776 00:38:54 -- scripts/common.sh@337 -- # local 'op=<' 00:04:41.776 00:38:54 -- scripts/common.sh@339 -- # ver1_l=2 00:04:41.776 00:38:54 -- scripts/common.sh@340 -- # ver2_l=1 00:04:41.776 00:38:54 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:04:41.776 00:38:54 -- scripts/common.sh@343 -- # case "$op" in 00:04:41.776 00:38:54 -- scripts/common.sh@344 -- # : 1 00:04:41.776 00:38:54 -- scripts/common.sh@363 -- # (( v = 0 )) 00:04:41.776 00:38:54 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:41.776 00:38:54 -- scripts/common.sh@364 -- # decimal 1 00:04:41.776 00:38:54 -- scripts/common.sh@352 -- # local d=1 00:04:41.776 00:38:54 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:41.776 00:38:54 -- scripts/common.sh@354 -- # echo 1 00:04:41.776 00:38:54 -- scripts/common.sh@364 -- # ver1[v]=1 00:04:41.776 00:38:54 -- scripts/common.sh@365 -- # decimal 2 00:04:41.776 00:38:54 -- scripts/common.sh@352 -- # local d=2 00:04:41.776 00:38:54 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:41.776 00:38:54 -- scripts/common.sh@354 -- # echo 2 00:04:41.776 00:38:54 -- scripts/common.sh@365 -- # ver2[v]=2 00:04:41.777 00:38:54 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:04:41.777 00:38:54 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:04:41.777 00:38:54 -- scripts/common.sh@367 -- # return 0 00:04:41.777 00:38:54 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:41.777 00:38:54 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:04:41.777 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:41.777 --rc genhtml_branch_coverage=1 00:04:41.777 --rc genhtml_function_coverage=1 00:04:41.777 --rc genhtml_legend=1 00:04:41.777 --rc geninfo_all_blocks=1 00:04:41.777 --rc geninfo_unexecuted_blocks=1 00:04:41.777 00:04:41.777 ' 00:04:41.777 00:38:54 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:04:41.777 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:41.777 --rc genhtml_branch_coverage=1 00:04:41.777 --rc genhtml_function_coverage=1 00:04:41.777 --rc genhtml_legend=1 00:04:41.777 --rc geninfo_all_blocks=1 00:04:41.777 --rc geninfo_unexecuted_blocks=1 00:04:41.777 00:04:41.777 ' 00:04:41.777 00:38:54 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:04:41.777 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:41.777 --rc genhtml_branch_coverage=1 00:04:41.777 --rc genhtml_function_coverage=1 00:04:41.777 --rc genhtml_legend=1 00:04:41.777 --rc geninfo_all_blocks=1 00:04:41.777 --rc geninfo_unexecuted_blocks=1 00:04:41.777 00:04:41.777 ' 00:04:41.777 00:38:54 -- 
common/autotest_common.sh@1704 -- # LCOV='lcov 00:04:41.777 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:41.777 --rc genhtml_branch_coverage=1 00:04:41.777 --rc genhtml_function_coverage=1 00:04:41.777 --rc genhtml_legend=1 00:04:41.777 --rc geninfo_all_blocks=1 00:04:41.777 --rc geninfo_unexecuted_blocks=1 00:04:41.777 00:04:41.777 ' 00:04:41.777 00:38:54 -- setup/acl.sh@10 -- # get_zoned_devs 00:04:41.777 00:38:54 -- common/autotest_common.sh@1664 -- # zoned_devs=() 00:04:41.777 00:38:54 -- common/autotest_common.sh@1664 -- # local -gA zoned_devs 00:04:41.777 00:38:54 -- common/autotest_common.sh@1665 -- # local nvme bdf 00:04:41.777 00:38:54 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:04:41.777 00:38:54 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme0n1 00:04:41.777 00:38:54 -- common/autotest_common.sh@1657 -- # local device=nvme0n1 00:04:41.777 00:38:54 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:04:41.777 00:38:54 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:04:41.777 00:38:54 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:04:41.777 00:38:54 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme1n1 00:04:41.777 00:38:54 -- common/autotest_common.sh@1657 -- # local device=nvme1n1 00:04:41.777 00:38:54 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:04:41.777 00:38:54 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:04:41.777 00:38:54 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:04:41.777 00:38:54 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme1n2 00:04:41.777 00:38:54 -- common/autotest_common.sh@1657 -- # local device=nvme1n2 00:04:41.777 00:38:54 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme1n2/queue/zoned ]] 00:04:41.777 00:38:54 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:04:41.777 00:38:54 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:04:41.777 00:38:54 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme1n3 00:04:41.777 00:38:54 -- common/autotest_common.sh@1657 -- # local device=nvme1n3 00:04:41.777 00:38:54 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme1n3/queue/zoned ]] 00:04:41.777 00:38:54 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:04:41.777 00:38:54 -- setup/acl.sh@12 -- # devs=() 00:04:41.777 00:38:54 -- setup/acl.sh@12 -- # declare -a devs 00:04:41.777 00:38:54 -- setup/acl.sh@13 -- # drivers=() 00:04:41.777 00:38:54 -- setup/acl.sh@13 -- # declare -A drivers 00:04:41.777 00:38:54 -- setup/acl.sh@51 -- # setup reset 00:04:41.777 00:38:54 -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:41.777 00:38:54 -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:04:42.715 00:38:54 -- setup/acl.sh@52 -- # collect_setup_devs 00:04:42.715 00:38:54 -- setup/acl.sh@16 -- # local dev driver 00:04:42.715 00:38:54 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:42.715 00:38:54 -- setup/acl.sh@15 -- # setup output status 00:04:42.715 00:38:54 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:42.715 00:38:54 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:04:42.715 Hugepages 00:04:42.715 node hugesize free / total 00:04:42.715 00:38:55 -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:04:42.715 00:38:55 -- setup/acl.sh@19 -- # continue 00:04:42.715 00:38:55 -- setup/acl.sh@18 -- # read -r _ 
dev _ _ _ driver _ 00:04:42.715 00:04:42.715 Type BDF Vendor Device NUMA Driver Device Block devices 00:04:42.715 00:38:55 -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:04:42.715 00:38:55 -- setup/acl.sh@19 -- # continue 00:04:42.715 00:38:55 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:42.716 00:38:55 -- setup/acl.sh@19 -- # [[ 0000:00:03.0 == *:*:*.* ]] 00:04:42.716 00:38:55 -- setup/acl.sh@20 -- # [[ virtio-pci == nvme ]] 00:04:42.716 00:38:55 -- setup/acl.sh@20 -- # continue 00:04:42.716 00:38:55 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:42.975 00:38:55 -- setup/acl.sh@19 -- # [[ 0000:00:06.0 == *:*:*.* ]] 00:04:42.975 00:38:55 -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:04:42.975 00:38:55 -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\0\0\:\0\6\.\0* ]] 00:04:42.975 00:38:55 -- setup/acl.sh@22 -- # devs+=("$dev") 00:04:42.975 00:38:55 -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 00:04:42.975 00:38:55 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:42.975 00:38:55 -- setup/acl.sh@19 -- # [[ 0000:00:07.0 == *:*:*.* ]] 00:04:42.975 00:38:55 -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:04:42.975 00:38:55 -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\0\0\:\0\7\.\0* ]] 00:04:42.975 00:38:55 -- setup/acl.sh@22 -- # devs+=("$dev") 00:04:42.975 00:38:55 -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 00:04:42.975 00:38:55 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:42.975 00:38:55 -- setup/acl.sh@24 -- # (( 2 > 0 )) 00:04:42.975 00:38:55 -- setup/acl.sh@54 -- # run_test denied denied 00:04:42.975 00:38:55 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:42.975 00:38:55 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:42.976 00:38:55 -- common/autotest_common.sh@10 -- # set +x 00:04:42.976 ************************************ 00:04:42.976 START TEST denied 00:04:42.976 ************************************ 00:04:42.976 00:38:55 -- common/autotest_common.sh@1114 -- # denied 00:04:42.976 00:38:55 -- setup/acl.sh@38 -- # PCI_BLOCKED=' 0000:00:06.0' 00:04:42.976 00:38:55 -- setup/acl.sh@38 -- # setup output config 00:04:42.976 00:38:55 -- setup/acl.sh@39 -- # grep 'Skipping denied controller at 0000:00:06.0' 00:04:42.976 00:38:55 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:42.976 00:38:55 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:04:43.910 0000:00:06.0 (1b36 0010): Skipping denied controller at 0000:00:06.0 00:04:43.910 00:38:56 -- setup/acl.sh@40 -- # verify 0000:00:06.0 00:04:43.910 00:38:56 -- setup/acl.sh@28 -- # local dev driver 00:04:43.910 00:38:56 -- setup/acl.sh@30 -- # for dev in "$@" 00:04:43.910 00:38:56 -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:00:06.0 ]] 00:04:43.910 00:38:56 -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:00:06.0/driver 00:04:43.910 00:38:56 -- setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:04:43.910 00:38:56 -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:04:43.910 00:38:56 -- setup/acl.sh@41 -- # setup reset 00:04:43.910 00:38:56 -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:43.910 00:38:56 -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:04:44.478 00:04:44.478 real 0m1.518s 00:04:44.478 user 0m0.639s 00:04:44.478 sys 0m0.851s 00:04:44.478 00:38:56 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:04:44.478 00:38:56 -- common/autotest_common.sh@10 -- # set +x 00:04:44.478 ************************************ 00:04:44.478 END TEST denied 00:04:44.478 
************************************ 00:04:44.478 00:38:56 -- setup/acl.sh@55 -- # run_test allowed allowed 00:04:44.478 00:38:56 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:44.478 00:38:56 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:44.478 00:38:56 -- common/autotest_common.sh@10 -- # set +x 00:04:44.478 ************************************ 00:04:44.478 START TEST allowed 00:04:44.478 ************************************ 00:04:44.478 00:38:56 -- common/autotest_common.sh@1114 -- # allowed 00:04:44.478 00:38:56 -- setup/acl.sh@45 -- # PCI_ALLOWED=0000:00:06.0 00:04:44.478 00:38:56 -- setup/acl.sh@45 -- # setup output config 00:04:44.478 00:38:56 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:44.478 00:38:56 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:04:44.478 00:38:56 -- setup/acl.sh@46 -- # grep -E '0000:00:06.0 .*: nvme -> .*' 00:04:45.416 0000:00:06.0 (1b36 0010): nvme -> uio_pci_generic 00:04:45.416 00:38:57 -- setup/acl.sh@47 -- # verify 0000:00:07.0 00:04:45.416 00:38:57 -- setup/acl.sh@28 -- # local dev driver 00:04:45.416 00:38:57 -- setup/acl.sh@30 -- # for dev in "$@" 00:04:45.416 00:38:57 -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:00:07.0 ]] 00:04:45.416 00:38:57 -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:00:07.0/driver 00:04:45.416 00:38:57 -- setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:04:45.416 00:38:57 -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:04:45.416 00:38:57 -- setup/acl.sh@48 -- # setup reset 00:04:45.416 00:38:57 -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:45.416 00:38:57 -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:04:45.985 00:04:45.985 real 0m1.572s 00:04:45.985 user 0m0.700s 00:04:45.985 sys 0m0.885s 00:04:45.985 00:38:58 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:04:45.985 00:38:58 -- common/autotest_common.sh@10 -- # set +x 00:04:45.985 ************************************ 00:04:45.985 END TEST allowed 00:04:45.985 ************************************ 00:04:46.246 00:04:46.246 real 0m4.506s 00:04:46.246 user 0m1.992s 00:04:46.246 sys 0m2.523s 00:04:46.246 00:38:58 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:04:46.246 00:38:58 -- common/autotest_common.sh@10 -- # set +x 00:04:46.246 ************************************ 00:04:46.246 END TEST acl 00:04:46.246 ************************************ 00:04:46.246 00:38:58 -- setup/test-setup.sh@13 -- # run_test hugepages /home/vagrant/spdk_repo/spdk/test/setup/hugepages.sh 00:04:46.246 00:38:58 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:46.246 00:38:58 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:46.246 00:38:58 -- common/autotest_common.sh@10 -- # set +x 00:04:46.246 ************************************ 00:04:46.246 START TEST hugepages 00:04:46.246 ************************************ 00:04:46.246 00:38:58 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/setup/hugepages.sh 00:04:46.246 * Looking for test storage... 
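Both ACL sub-tests finish with the verify step traced above: for each PCI address expected to stay visible, resolve the driver symlink under sysfs and confirm the controller is still bound to the kernel nvme driver. A condensed sketch of that check:
verify() {
    local dev driver
    for dev in "$@"; do
        [[ -e /sys/bus/pci/devices/$dev ]] || return 1
        driver=$(readlink -f "/sys/bus/pci/devices/$dev/driver")
        [[ ${driver##*/} == nvme ]] || return 1          # the trace compares against 'nvme'
    done
}
verify 0000:00:07.0   # the controller left on the kernel driver in the 'allowed' test above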
00:04:46.246 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:04:46.246 00:38:58 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:04:46.246 00:38:58 -- common/autotest_common.sh@1690 -- # lcov --version 00:04:46.246 00:38:58 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:04:46.246 00:38:58 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:04:46.246 00:38:58 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:04:46.246 00:38:58 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:04:46.246 00:38:58 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:04:46.246 00:38:58 -- scripts/common.sh@335 -- # IFS=.-: 00:04:46.246 00:38:58 -- scripts/common.sh@335 -- # read -ra ver1 00:04:46.246 00:38:58 -- scripts/common.sh@336 -- # IFS=.-: 00:04:46.246 00:38:58 -- scripts/common.sh@336 -- # read -ra ver2 00:04:46.246 00:38:58 -- scripts/common.sh@337 -- # local 'op=<' 00:04:46.246 00:38:58 -- scripts/common.sh@339 -- # ver1_l=2 00:04:46.246 00:38:58 -- scripts/common.sh@340 -- # ver2_l=1 00:04:46.246 00:38:58 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:04:46.246 00:38:58 -- scripts/common.sh@343 -- # case "$op" in 00:04:46.246 00:38:58 -- scripts/common.sh@344 -- # : 1 00:04:46.246 00:38:58 -- scripts/common.sh@363 -- # (( v = 0 )) 00:04:46.246 00:38:58 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:46.246 00:38:58 -- scripts/common.sh@364 -- # decimal 1 00:04:46.246 00:38:58 -- scripts/common.sh@352 -- # local d=1 00:04:46.246 00:38:58 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:46.246 00:38:58 -- scripts/common.sh@354 -- # echo 1 00:04:46.246 00:38:58 -- scripts/common.sh@364 -- # ver1[v]=1 00:04:46.246 00:38:58 -- scripts/common.sh@365 -- # decimal 2 00:04:46.246 00:38:58 -- scripts/common.sh@352 -- # local d=2 00:04:46.246 00:38:58 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:46.246 00:38:58 -- scripts/common.sh@354 -- # echo 2 00:04:46.246 00:38:58 -- scripts/common.sh@365 -- # ver2[v]=2 00:04:46.246 00:38:58 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:04:46.246 00:38:58 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:04:46.246 00:38:58 -- scripts/common.sh@367 -- # return 0 00:04:46.246 00:38:58 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:46.246 00:38:58 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:04:46.246 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:46.246 --rc genhtml_branch_coverage=1 00:04:46.246 --rc genhtml_function_coverage=1 00:04:46.246 --rc genhtml_legend=1 00:04:46.246 --rc geninfo_all_blocks=1 00:04:46.246 --rc geninfo_unexecuted_blocks=1 00:04:46.246 00:04:46.246 ' 00:04:46.246 00:38:58 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:04:46.246 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:46.246 --rc genhtml_branch_coverage=1 00:04:46.246 --rc genhtml_function_coverage=1 00:04:46.246 --rc genhtml_legend=1 00:04:46.246 --rc geninfo_all_blocks=1 00:04:46.246 --rc geninfo_unexecuted_blocks=1 00:04:46.246 00:04:46.246 ' 00:04:46.246 00:38:58 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:04:46.246 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:46.246 --rc genhtml_branch_coverage=1 00:04:46.246 --rc genhtml_function_coverage=1 00:04:46.246 --rc genhtml_legend=1 00:04:46.246 --rc geninfo_all_blocks=1 00:04:46.246 --rc geninfo_unexecuted_blocks=1 00:04:46.246 00:04:46.246 ' 00:04:46.246 00:38:58 -- 
common/autotest_common.sh@1704 -- # LCOV='lcov 00:04:46.246 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:46.246 --rc genhtml_branch_coverage=1 00:04:46.246 --rc genhtml_function_coverage=1 00:04:46.246 --rc genhtml_legend=1 00:04:46.246 --rc geninfo_all_blocks=1 00:04:46.246 --rc geninfo_unexecuted_blocks=1 00:04:46.246 00:04:46.246 ' 00:04:46.246 00:38:58 -- setup/hugepages.sh@10 -- # nodes_sys=() 00:04:46.246 00:38:58 -- setup/hugepages.sh@10 -- # declare -a nodes_sys 00:04:46.246 00:38:58 -- setup/hugepages.sh@12 -- # declare -i default_hugepages=0 00:04:46.246 00:38:58 -- setup/hugepages.sh@13 -- # declare -i no_nodes=0 00:04:46.246 00:38:58 -- setup/hugepages.sh@14 -- # declare -i nr_hugepages=0 00:04:46.246 00:38:58 -- setup/hugepages.sh@16 -- # get_meminfo Hugepagesize 00:04:46.246 00:38:58 -- setup/common.sh@17 -- # local get=Hugepagesize 00:04:46.246 00:38:58 -- setup/common.sh@18 -- # local node= 00:04:46.246 00:38:58 -- setup/common.sh@19 -- # local var val 00:04:46.246 00:38:58 -- setup/common.sh@20 -- # local mem_f mem 00:04:46.246 00:38:58 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:46.246 00:38:58 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:46.246 00:38:58 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:46.246 00:38:58 -- setup/common.sh@28 -- # mapfile -t mem 00:04:46.246 00:38:58 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:46.246 00:38:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.246 00:38:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.246 00:38:58 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239120 kB' 'MemFree: 4412528 kB' 'MemAvailable: 7342196 kB' 'Buffers: 2684 kB' 'Cached: 3130160 kB' 'SwapCached: 0 kB' 'Active: 496172 kB' 'Inactive: 2753164 kB' 'Active(anon): 127004 kB' 'Inactive(anon): 0 kB' 'Active(file): 369168 kB' 'Inactive(file): 2753164 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 308 kB' 'Writeback: 0 kB' 'AnonPages: 118120 kB' 'Mapped: 50972 kB' 'Shmem: 10512 kB' 'KReclaimable: 88568 kB' 'Slab: 191020 kB' 'SReclaimable: 88568 kB' 'SUnreclaim: 102452 kB' 'KernelStack: 6820 kB' 'PageTables: 4212 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 12411012 kB' 'Committed_AS: 318144 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55528 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 2048' 'HugePages_Free: 2048' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 4194304 kB' 'DirectMap4k: 196460 kB' 'DirectMap2M: 6094848 kB' 'DirectMap1G: 8388608 kB' 00:04:46.246 00:38:58 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:46.246 00:38:58 -- setup/common.sh@32 -- # continue 00:04:46.246 00:38:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.246 00:38:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.246 00:38:58 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:46.246 00:38:58 -- setup/common.sh@32 -- # continue 00:04:46.246 00:38:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.246 00:38:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.246 00:38:58 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:46.246 00:38:58 -- 
setup/common.sh@32 -- # continue 00:04:46.246 00:38:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.246 00:38:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.246 00:38:58 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:46.246 00:38:58 -- setup/common.sh@32 -- # continue 00:04:46.246 00:38:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.246 00:38:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.247 00:38:58 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:46.247 00:38:58 -- setup/common.sh@32 -- # continue 00:04:46.247 00:38:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.247 00:38:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.247 00:38:58 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:46.247 00:38:58 -- setup/common.sh@32 -- # continue 00:04:46.247 00:38:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.247 00:38:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.247 00:38:58 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:46.247 00:38:58 -- setup/common.sh@32 -- # continue 00:04:46.247 00:38:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.247 00:38:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.247 00:38:58 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:46.247 00:38:58 -- setup/common.sh@32 -- # continue 00:04:46.247 00:38:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.247 00:38:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.247 00:38:58 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:46.247 00:38:58 -- setup/common.sh@32 -- # continue 00:04:46.247 00:38:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.247 00:38:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.247 00:38:58 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:46.247 00:38:58 -- setup/common.sh@32 -- # continue 00:04:46.247 00:38:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.247 00:38:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.247 00:38:58 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:46.247 00:38:58 -- setup/common.sh@32 -- # continue 00:04:46.247 00:38:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.247 00:38:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.247 00:38:58 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:46.247 00:38:58 -- setup/common.sh@32 -- # continue 00:04:46.247 00:38:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.247 00:38:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.247 00:38:58 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:46.247 00:38:58 -- setup/common.sh@32 -- # continue 00:04:46.247 00:38:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.247 00:38:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.247 00:38:58 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:46.247 00:38:58 -- setup/common.sh@32 -- # continue 00:04:46.247 00:38:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.247 00:38:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.247 00:38:58 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:46.247 00:38:58 -- setup/common.sh@32 -- # continue 00:04:46.247 00:38:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.247 00:38:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.247 00:38:58 -- setup/common.sh@32 -- # [[ SwapFree == 
\H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:46.247 00:38:58 -- setup/common.sh@32 -- # continue 00:04:46.247 00:38:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.247 00:38:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.247 00:38:58 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:46.247 00:38:58 -- setup/common.sh@32 -- # continue 00:04:46.247 00:38:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.247 00:38:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.508 00:38:58 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:46.508 00:38:58 -- setup/common.sh@32 -- # continue 00:04:46.508 00:38:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.508 00:38:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.508 00:38:58 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:46.508 00:38:58 -- setup/common.sh@32 -- # continue 00:04:46.508 00:38:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.508 00:38:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.508 00:38:58 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:46.508 00:38:58 -- setup/common.sh@32 -- # continue 00:04:46.508 00:38:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.508 00:38:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.508 00:38:58 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:46.508 00:38:58 -- setup/common.sh@32 -- # continue 00:04:46.508 00:38:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.508 00:38:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.508 00:38:58 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:46.508 00:38:58 -- setup/common.sh@32 -- # continue 00:04:46.508 00:38:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.508 00:38:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.508 00:38:58 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:46.508 00:38:58 -- setup/common.sh@32 -- # continue 00:04:46.508 00:38:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.508 00:38:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.508 00:38:58 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:46.508 00:38:58 -- setup/common.sh@32 -- # continue 00:04:46.508 00:38:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.508 00:38:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.508 00:38:58 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:46.508 00:38:58 -- setup/common.sh@32 -- # continue 00:04:46.508 00:38:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.508 00:38:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.508 00:38:58 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:46.508 00:38:58 -- setup/common.sh@32 -- # continue 00:04:46.508 00:38:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.508 00:38:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.508 00:38:58 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:46.508 00:38:58 -- setup/common.sh@32 -- # continue 00:04:46.508 00:38:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.508 00:38:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.508 00:38:58 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:46.508 00:38:58 -- setup/common.sh@32 -- # continue 00:04:46.508 00:38:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.508 00:38:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.508 00:38:58 -- setup/common.sh@32 
-- # [[ PageTables == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:46.508 00:38:58 -- setup/common.sh@32 -- # continue 00:04:46.508 00:38:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.508 00:38:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.508 00:38:58 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:46.508 00:38:58 -- setup/common.sh@32 -- # continue 00:04:46.508 00:38:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.508 00:38:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.508 00:38:58 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:46.508 00:38:58 -- setup/common.sh@32 -- # continue 00:04:46.508 00:38:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.508 00:38:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.508 00:38:58 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:46.508 00:38:58 -- setup/common.sh@32 -- # continue 00:04:46.508 00:38:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.508 00:38:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.508 00:38:58 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:46.508 00:38:58 -- setup/common.sh@32 -- # continue 00:04:46.508 00:38:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.508 00:38:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.508 00:38:58 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:46.508 00:38:58 -- setup/common.sh@32 -- # continue 00:04:46.508 00:38:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.508 00:38:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.508 00:38:58 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:46.508 00:38:58 -- setup/common.sh@32 -- # continue 00:04:46.508 00:38:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.508 00:38:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.508 00:38:58 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:46.508 00:38:58 -- setup/common.sh@32 -- # continue 00:04:46.508 00:38:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.508 00:38:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.508 00:38:58 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:46.508 00:38:58 -- setup/common.sh@32 -- # continue 00:04:46.508 00:38:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.508 00:38:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.508 00:38:58 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:46.508 00:38:58 -- setup/common.sh@32 -- # continue 00:04:46.508 00:38:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.508 00:38:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.508 00:38:58 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:46.508 00:38:58 -- setup/common.sh@32 -- # continue 00:04:46.508 00:38:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.508 00:38:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.508 00:38:58 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:46.508 00:38:58 -- setup/common.sh@32 -- # continue 00:04:46.508 00:38:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.508 00:38:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.508 00:38:58 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:46.508 00:38:58 -- setup/common.sh@32 -- # continue 00:04:46.508 00:38:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.508 00:38:58 -- setup/common.sh@31 -- 
# read -r var val _ 00:04:46.508 00:38:58 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:46.508 00:38:58 -- setup/common.sh@32 -- # continue 00:04:46.508 00:38:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.508 00:38:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.508 00:38:58 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:46.508 00:38:58 -- setup/common.sh@32 -- # continue 00:04:46.508 00:38:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.508 00:38:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.508 00:38:58 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:46.508 00:38:58 -- setup/common.sh@32 -- # continue 00:04:46.508 00:38:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.508 00:38:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.508 00:38:58 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:46.508 00:38:58 -- setup/common.sh@32 -- # continue 00:04:46.508 00:38:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.508 00:38:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.508 00:38:58 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:46.508 00:38:58 -- setup/common.sh@32 -- # continue 00:04:46.508 00:38:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.508 00:38:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.508 00:38:58 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:46.508 00:38:58 -- setup/common.sh@32 -- # continue 00:04:46.508 00:38:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.508 00:38:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.508 00:38:58 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:46.508 00:38:58 -- setup/common.sh@32 -- # continue 00:04:46.508 00:38:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.508 00:38:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.508 00:38:58 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:46.508 00:38:58 -- setup/common.sh@32 -- # continue 00:04:46.508 00:38:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.508 00:38:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.508 00:38:58 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:46.508 00:38:58 -- setup/common.sh@32 -- # continue 00:04:46.508 00:38:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.508 00:38:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.508 00:38:58 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:46.508 00:38:58 -- setup/common.sh@32 -- # continue 00:04:46.508 00:38:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.508 00:38:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.508 00:38:58 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:46.508 00:38:58 -- setup/common.sh@32 -- # continue 00:04:46.508 00:38:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.508 00:38:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.508 00:38:58 -- setup/common.sh@32 -- # [[ Hugepagesize == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:46.508 00:38:58 -- setup/common.sh@33 -- # echo 2048 00:04:46.508 00:38:58 -- setup/common.sh@33 -- # return 0 00:04:46.508 00:38:58 -- setup/hugepages.sh@16 -- # default_hugepages=2048 00:04:46.508 00:38:58 -- setup/hugepages.sh@17 -- # default_huge_nr=/sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages 00:04:46.508 00:38:58 -- setup/hugepages.sh@18 -- 
# global_huge_nr=/proc/sys/vm/nr_hugepages 00:04:46.508 00:38:58 -- setup/hugepages.sh@21 -- # unset -v HUGE_EVEN_ALLOC 00:04:46.509 00:38:58 -- setup/hugepages.sh@22 -- # unset -v HUGEMEM 00:04:46.509 00:38:58 -- setup/hugepages.sh@23 -- # unset -v HUGENODE 00:04:46.509 00:38:58 -- setup/hugepages.sh@24 -- # unset -v NRHUGE 00:04:46.509 00:38:58 -- setup/hugepages.sh@207 -- # get_nodes 00:04:46.509 00:38:58 -- setup/hugepages.sh@27 -- # local node 00:04:46.509 00:38:58 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:46.509 00:38:58 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=2048 00:04:46.509 00:38:58 -- setup/hugepages.sh@32 -- # no_nodes=1 00:04:46.509 00:38:58 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:46.509 00:38:58 -- setup/hugepages.sh@208 -- # clear_hp 00:04:46.509 00:38:58 -- setup/hugepages.sh@37 -- # local node hp 00:04:46.509 00:38:58 -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:04:46.509 00:38:58 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:46.509 00:38:58 -- setup/hugepages.sh@41 -- # echo 0 00:04:46.509 00:38:58 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:46.509 00:38:58 -- setup/hugepages.sh@41 -- # echo 0 00:04:46.509 00:38:58 -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:04:46.509 00:38:58 -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:04:46.509 00:38:58 -- setup/hugepages.sh@210 -- # run_test default_setup default_setup 00:04:46.509 00:38:58 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:46.509 00:38:58 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:46.509 00:38:58 -- common/autotest_common.sh@10 -- # set +x 00:04:46.509 ************************************ 00:04:46.509 START TEST default_setup 00:04:46.509 ************************************ 00:04:46.509 00:38:58 -- common/autotest_common.sh@1114 -- # default_setup 00:04:46.509 00:38:58 -- setup/hugepages.sh@136 -- # get_test_nr_hugepages 2097152 0 00:04:46.509 00:38:58 -- setup/hugepages.sh@49 -- # local size=2097152 00:04:46.509 00:38:58 -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:04:46.509 00:38:58 -- setup/hugepages.sh@51 -- # shift 00:04:46.509 00:38:58 -- setup/hugepages.sh@52 -- # node_ids=('0') 00:04:46.509 00:38:58 -- setup/hugepages.sh@52 -- # local node_ids 00:04:46.509 00:38:58 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:04:46.509 00:38:58 -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:04:46.509 00:38:58 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:04:46.509 00:38:58 -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:04:46.509 00:38:58 -- setup/hugepages.sh@62 -- # local user_nodes 00:04:46.509 00:38:58 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:04:46.509 00:38:58 -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:04:46.509 00:38:58 -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:46.509 00:38:58 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:46.509 00:38:58 -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:04:46.509 00:38:58 -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:04:46.509 00:38:58 -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:04:46.509 00:38:58 -- setup/hugepages.sh@73 -- # return 0 00:04:46.509 00:38:58 -- setup/hugepages.sh@137 -- # setup output 00:04:46.509 00:38:58 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:46.509 00:38:58 -- setup/common.sh@10 
-- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:04:47.077 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:47.077 0000:00:06.0 (1b36 0010): nvme -> uio_pci_generic 00:04:47.340 0000:00:07.0 (1b36 0010): nvme -> uio_pci_generic 00:04:47.340 00:38:59 -- setup/hugepages.sh@138 -- # verify_nr_hugepages 00:04:47.340 00:38:59 -- setup/hugepages.sh@89 -- # local node 00:04:47.340 00:38:59 -- setup/hugepages.sh@90 -- # local sorted_t 00:04:47.340 00:38:59 -- setup/hugepages.sh@91 -- # local sorted_s 00:04:47.340 00:38:59 -- setup/hugepages.sh@92 -- # local surp 00:04:47.340 00:38:59 -- setup/hugepages.sh@93 -- # local resv 00:04:47.340 00:38:59 -- setup/hugepages.sh@94 -- # local anon 00:04:47.340 00:38:59 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:47.340 00:38:59 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:47.340 00:38:59 -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:47.340 00:38:59 -- setup/common.sh@18 -- # local node= 00:04:47.340 00:38:59 -- setup/common.sh@19 -- # local var val 00:04:47.340 00:38:59 -- setup/common.sh@20 -- # local mem_f mem 00:04:47.340 00:38:59 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:47.340 00:38:59 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:47.340 00:38:59 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:47.340 00:38:59 -- setup/common.sh@28 -- # mapfile -t mem 00:04:47.340 00:38:59 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:47.340 00:38:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.340 00:38:59 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239120 kB' 'MemFree: 6514272 kB' 'MemAvailable: 9443784 kB' 'Buffers: 2684 kB' 'Cached: 3130148 kB' 'SwapCached: 0 kB' 'Active: 497720 kB' 'Inactive: 2753176 kB' 'Active(anon): 128552 kB' 'Inactive(anon): 0 kB' 'Active(file): 369168 kB' 'Inactive(file): 2753176 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 328 kB' 'Writeback: 0 kB' 'AnonPages: 119620 kB' 'Mapped: 51064 kB' 'Shmem: 10488 kB' 'KReclaimable: 88228 kB' 'Slab: 190640 kB' 'SReclaimable: 88228 kB' 'SUnreclaim: 102412 kB' 'KernelStack: 6832 kB' 'PageTables: 4376 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459588 kB' 'Committed_AS: 320412 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55544 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 196460 kB' 'DirectMap2M: 6094848 kB' 'DirectMap1G: 8388608 kB' 00:04:47.340 00:38:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.340 00:38:59 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:47.340 00:38:59 -- setup/common.sh@32 -- # continue 00:04:47.340 00:38:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.340 00:38:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.340 00:38:59 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:47.340 00:38:59 -- setup/common.sh@32 -- # continue 00:04:47.340 00:38:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.340 00:38:59 -- setup/common.sh@31 -- # read 
-r var val _ 00:04:47.340 00:38:59 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:47.340 00:38:59 -- setup/common.sh@32 -- # continue 00:04:47.340 00:38:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.340 00:38:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.340 00:38:59 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:47.340 00:38:59 -- setup/common.sh@32 -- # continue 00:04:47.340 00:38:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.340 00:38:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.340 00:38:59 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:47.340 00:38:59 -- setup/common.sh@32 -- # continue 00:04:47.340 00:38:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.340 00:38:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.340 00:38:59 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:47.340 00:38:59 -- setup/common.sh@32 -- # continue 00:04:47.340 00:38:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.340 00:38:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.340 00:38:59 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:47.340 00:38:59 -- setup/common.sh@32 -- # continue 00:04:47.340 00:38:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.340 00:38:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.340 00:38:59 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:47.340 00:38:59 -- setup/common.sh@32 -- # continue 00:04:47.340 00:38:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.340 00:38:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.340 00:38:59 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:47.340 00:38:59 -- setup/common.sh@32 -- # continue 00:04:47.340 00:38:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.340 00:38:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.340 00:38:59 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:47.340 00:38:59 -- setup/common.sh@32 -- # continue 00:04:47.340 00:38:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.340 00:38:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.340 00:38:59 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:47.340 00:38:59 -- setup/common.sh@32 -- # continue 00:04:47.340 00:38:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.340 00:38:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.340 00:38:59 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:47.340 00:38:59 -- setup/common.sh@32 -- # continue 00:04:47.340 00:38:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.340 00:38:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.340 00:38:59 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:47.340 00:38:59 -- setup/common.sh@32 -- # continue 00:04:47.340 00:38:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.340 00:38:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.340 00:38:59 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:47.340 00:38:59 -- setup/common.sh@32 -- # continue 00:04:47.340 00:38:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.340 00:38:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.340 00:38:59 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:47.340 00:38:59 -- setup/common.sh@32 -- # continue 00:04:47.340 00:38:59 -- 
setup/common.sh@31 -- # IFS=': ' 00:04:47.340 00:38:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.340 00:38:59 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:47.340 00:38:59 -- setup/common.sh@32 -- # continue 00:04:47.340 00:38:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.340 00:38:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.340 00:38:59 -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:47.340 00:38:59 -- setup/common.sh@32 -- # continue 00:04:47.340 00:38:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.340 00:38:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.340 00:38:59 -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:47.340 00:38:59 -- setup/common.sh@32 -- # continue 00:04:47.340 00:38:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.340 00:38:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.340 00:38:59 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:47.340 00:38:59 -- setup/common.sh@32 -- # continue 00:04:47.340 00:38:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.340 00:38:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.340 00:38:59 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:47.340 00:38:59 -- setup/common.sh@32 -- # continue 00:04:47.340 00:38:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.340 00:38:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.340 00:38:59 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:47.340 00:38:59 -- setup/common.sh@32 -- # continue 00:04:47.340 00:38:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.340 00:38:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.340 00:38:59 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:47.340 00:38:59 -- setup/common.sh@32 -- # continue 00:04:47.340 00:38:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.340 00:38:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.340 00:38:59 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:47.340 00:38:59 -- setup/common.sh@32 -- # continue 00:04:47.340 00:38:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.340 00:38:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.340 00:38:59 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:47.340 00:38:59 -- setup/common.sh@32 -- # continue 00:04:47.340 00:38:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.340 00:38:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.340 00:38:59 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:47.340 00:38:59 -- setup/common.sh@32 -- # continue 00:04:47.340 00:38:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.340 00:38:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.340 00:38:59 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:47.340 00:38:59 -- setup/common.sh@32 -- # continue 00:04:47.340 00:38:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.340 00:38:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.340 00:38:59 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:47.340 00:38:59 -- setup/common.sh@32 -- # continue 00:04:47.340 00:38:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.341 00:38:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.341 00:38:59 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:47.341 00:38:59 -- 
setup/common.sh@32 -- # continue 00:04:47.341 00:38:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.341 00:38:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.341 00:38:59 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:47.341 00:38:59 -- setup/common.sh@32 -- # continue 00:04:47.341 00:38:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.341 00:38:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.341 00:38:59 -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:47.341 00:38:59 -- setup/common.sh@32 -- # continue 00:04:47.341 00:38:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.341 00:38:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.341 00:38:59 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:47.341 00:38:59 -- setup/common.sh@32 -- # continue 00:04:47.341 00:38:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.341 00:38:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.341 00:38:59 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:47.341 00:38:59 -- setup/common.sh@32 -- # continue 00:04:47.341 00:38:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.341 00:38:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.341 00:38:59 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:47.341 00:38:59 -- setup/common.sh@32 -- # continue 00:04:47.341 00:38:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.341 00:38:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.341 00:38:59 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:47.341 00:38:59 -- setup/common.sh@32 -- # continue 00:04:47.341 00:38:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.341 00:38:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.341 00:38:59 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:47.341 00:38:59 -- setup/common.sh@32 -- # continue 00:04:47.341 00:38:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.341 00:38:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.341 00:38:59 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:47.341 00:38:59 -- setup/common.sh@32 -- # continue 00:04:47.341 00:38:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.341 00:38:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.341 00:38:59 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:47.341 00:38:59 -- setup/common.sh@32 -- # continue 00:04:47.341 00:38:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.341 00:38:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.341 00:38:59 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:47.341 00:38:59 -- setup/common.sh@32 -- # continue 00:04:47.341 00:38:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.341 00:38:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.341 00:38:59 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:47.341 00:38:59 -- setup/common.sh@32 -- # continue 00:04:47.341 00:38:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.341 00:38:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.341 00:38:59 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:47.341 00:38:59 -- setup/common.sh@32 -- # continue 00:04:47.341 00:38:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.341 00:38:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.341 00:38:59 -- 
setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:47.341 00:38:59 -- setup/common.sh@33 -- # echo 0 00:04:47.341 00:38:59 -- setup/common.sh@33 -- # return 0 00:04:47.341 00:38:59 -- setup/hugepages.sh@97 -- # anon=0 00:04:47.341 00:38:59 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:47.341 00:38:59 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:47.341 00:38:59 -- setup/common.sh@18 -- # local node= 00:04:47.341 00:38:59 -- setup/common.sh@19 -- # local var val 00:04:47.341 00:38:59 -- setup/common.sh@20 -- # local mem_f mem 00:04:47.341 00:38:59 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:47.341 00:38:59 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:47.341 00:38:59 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:47.341 00:38:59 -- setup/common.sh@28 -- # mapfile -t mem 00:04:47.341 00:38:59 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:47.341 00:38:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.341 00:38:59 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239120 kB' 'MemFree: 6514272 kB' 'MemAvailable: 9443784 kB' 'Buffers: 2684 kB' 'Cached: 3130148 kB' 'SwapCached: 0 kB' 'Active: 497276 kB' 'Inactive: 2753176 kB' 'Active(anon): 128108 kB' 'Inactive(anon): 0 kB' 'Active(file): 369168 kB' 'Inactive(file): 2753176 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 328 kB' 'Writeback: 0 kB' 'AnonPages: 119192 kB' 'Mapped: 50948 kB' 'Shmem: 10488 kB' 'KReclaimable: 88228 kB' 'Slab: 190636 kB' 'SReclaimable: 88228 kB' 'SUnreclaim: 102408 kB' 'KernelStack: 6800 kB' 'PageTables: 4276 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459588 kB' 'Committed_AS: 320412 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55544 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 196460 kB' 'DirectMap2M: 6094848 kB' 'DirectMap1G: 8388608 kB' 00:04:47.341 00:38:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.341 00:38:59 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.341 00:38:59 -- setup/common.sh@32 -- # continue 00:04:47.341 00:38:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.341 00:38:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.341 00:38:59 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.341 00:38:59 -- setup/common.sh@32 -- # continue 00:04:47.341 00:38:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.341 00:38:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.341 00:38:59 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.341 00:38:59 -- setup/common.sh@32 -- # continue 00:04:47.341 00:38:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.341 00:38:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.341 00:38:59 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.341 00:38:59 -- setup/common.sh@32 -- # continue 00:04:47.341 00:38:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.341 00:38:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.341 00:38:59 -- 
setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.341 00:38:59 -- setup/common.sh@32 -- # continue 00:04:47.341 00:38:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.341 00:38:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.341 00:38:59 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.341 00:38:59 -- setup/common.sh@32 -- # continue 00:04:47.341 00:38:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.341 00:38:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.341 00:38:59 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.341 00:38:59 -- setup/common.sh@32 -- # continue 00:04:47.341 00:38:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.341 00:38:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.341 00:38:59 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.341 00:38:59 -- setup/common.sh@32 -- # continue 00:04:47.341 00:38:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.341 00:38:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.341 00:38:59 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.341 00:38:59 -- setup/common.sh@32 -- # continue 00:04:47.341 00:38:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.341 00:38:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.341 00:38:59 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.341 00:38:59 -- setup/common.sh@32 -- # continue 00:04:47.341 00:38:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.341 00:38:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.341 00:38:59 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.341 00:38:59 -- setup/common.sh@32 -- # continue 00:04:47.341 00:38:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.341 00:38:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.341 00:38:59 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.341 00:38:59 -- setup/common.sh@32 -- # continue 00:04:47.341 00:38:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.341 00:38:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.341 00:38:59 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.341 00:38:59 -- setup/common.sh@32 -- # continue 00:04:47.341 00:38:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.342 00:38:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.342 00:38:59 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.342 00:38:59 -- setup/common.sh@32 -- # continue 00:04:47.342 00:38:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.342 00:38:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.342 00:38:59 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.342 00:38:59 -- setup/common.sh@32 -- # continue 00:04:47.342 00:38:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.342 00:38:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.342 00:38:59 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.342 00:38:59 -- setup/common.sh@32 -- # continue 00:04:47.342 00:38:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.342 00:38:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.342 00:38:59 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.342 00:38:59 -- setup/common.sh@32 -- # continue 00:04:47.342 00:38:59 -- setup/common.sh@31 -- # IFS=': ' 
00:04:47.342 00:38:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.342 00:38:59 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.342 00:38:59 -- setup/common.sh@32 -- # continue 00:04:47.342 00:38:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.342 00:38:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.342 00:38:59 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.342 00:38:59 -- setup/common.sh@32 -- # continue 00:04:47.342 00:38:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.342 00:38:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.342 00:38:59 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.342 00:38:59 -- setup/common.sh@32 -- # continue 00:04:47.342 00:38:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.342 00:38:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.342 00:38:59 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.342 00:38:59 -- setup/common.sh@32 -- # continue 00:04:47.342 00:38:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.342 00:38:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.342 00:38:59 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.342 00:38:59 -- setup/common.sh@32 -- # continue 00:04:47.342 00:38:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.342 00:38:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.342 00:38:59 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.342 00:38:59 -- setup/common.sh@32 -- # continue 00:04:47.342 00:38:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.342 00:38:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.342 00:38:59 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.342 00:38:59 -- setup/common.sh@32 -- # continue 00:04:47.342 00:38:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.342 00:38:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.342 00:38:59 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.342 00:38:59 -- setup/common.sh@32 -- # continue 00:04:47.342 00:38:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.342 00:38:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.342 00:38:59 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.342 00:38:59 -- setup/common.sh@32 -- # continue 00:04:47.342 00:38:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.342 00:38:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.342 00:38:59 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.342 00:38:59 -- setup/common.sh@32 -- # continue 00:04:47.342 00:38:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.342 00:38:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.342 00:38:59 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.342 00:38:59 -- setup/common.sh@32 -- # continue 00:04:47.342 00:38:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.342 00:38:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.342 00:38:59 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.342 00:38:59 -- setup/common.sh@32 -- # continue 00:04:47.342 00:38:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.342 00:38:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.342 00:38:59 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.342 00:38:59 -- 
setup/common.sh@32 -- # continue 00:04:47.342 00:38:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.342 00:38:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.342 00:38:59 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.342 00:38:59 -- setup/common.sh@32 -- # continue 00:04:47.342 00:38:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.342 00:38:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.342 00:38:59 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.342 00:38:59 -- setup/common.sh@32 -- # continue 00:04:47.342 00:38:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.342 00:38:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.342 00:38:59 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.342 00:38:59 -- setup/common.sh@32 -- # continue 00:04:47.342 00:38:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.342 00:38:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.342 00:38:59 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.342 00:38:59 -- setup/common.sh@32 -- # continue 00:04:47.342 00:38:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.342 00:38:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.342 00:38:59 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.342 00:38:59 -- setup/common.sh@32 -- # continue 00:04:47.342 00:38:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.342 00:38:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.342 00:38:59 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.342 00:38:59 -- setup/common.sh@32 -- # continue 00:04:47.342 00:38:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.342 00:38:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.342 00:38:59 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.342 00:38:59 -- setup/common.sh@32 -- # continue 00:04:47.342 00:38:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.342 00:38:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.342 00:38:59 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.342 00:38:59 -- setup/common.sh@32 -- # continue 00:04:47.342 00:38:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.342 00:38:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.342 00:38:59 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.342 00:38:59 -- setup/common.sh@32 -- # continue 00:04:47.342 00:38:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.342 00:38:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.342 00:38:59 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.342 00:38:59 -- setup/common.sh@32 -- # continue 00:04:47.342 00:38:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.342 00:38:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.342 00:38:59 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.342 00:38:59 -- setup/common.sh@32 -- # continue 00:04:47.342 00:38:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.342 00:38:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.342 00:38:59 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.342 00:38:59 -- setup/common.sh@32 -- # continue 00:04:47.342 00:38:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.342 00:38:59 -- setup/common.sh@31 -- # read -r var val _ 
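The xtrace in this stretch is setup/common.sh's get_meminfo() walking /proc/meminfo key by key (the escaped \H\u\g\e... strings are just bash echoing its [[ ... ]] pattern test) until it reaches the requested field, HugePages_Surp in this pass. A minimal standalone sketch of that lookup — a hypothetical helper name, assuming a single-digit NUMA node id, not the SPDK script itself:

get_meminfo_sketch() {                    # hypothetical name, not the SPDK helper
    local get=$1 node=${2:-}              # field to read, optional NUMA node
    local mem_f=/proc/meminfo
    [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]] &&
        mem_f=/sys/devices/system/node/node$node/meminfo
    local line var val _
    while IFS= read -r line; do
        line=${line#Node [0-9] }          # per-node meminfo prefixes each line with "Node <n> "
        IFS=': ' read -r var val _ <<<"$line"
        [[ $var == "$get" ]] && { echo "$val"; return 0; }   # value only; the "kB" unit lands in $_
    done <"$mem_f"
    return 1
}

For example, "get_meminfo_sketch Hugepagesize" would print 2048 on this VM, matching the "echo 2048" seen earlier in the trace, and "get_meminfo_sketch HugePages_Surp 0" reads the node0 meminfo file the same way the per-node check later in this test does.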
00:04:47.342 00:38:59 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.342 00:38:59 -- setup/common.sh@32 -- # continue 00:04:47.342 00:38:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.342 00:38:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.342 00:38:59 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.342 00:38:59 -- setup/common.sh@32 -- # continue 00:04:47.342 00:38:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.342 00:38:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.342 00:38:59 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.342 00:38:59 -- setup/common.sh@32 -- # continue 00:04:47.342 00:38:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.342 00:38:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.342 00:38:59 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.342 00:38:59 -- setup/common.sh@32 -- # continue 00:04:47.342 00:38:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.342 00:38:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.342 00:38:59 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.342 00:38:59 -- setup/common.sh@32 -- # continue 00:04:47.342 00:38:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.342 00:38:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.342 00:38:59 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.342 00:38:59 -- setup/common.sh@32 -- # continue 00:04:47.342 00:38:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.342 00:38:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.342 00:38:59 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.342 00:38:59 -- setup/common.sh@32 -- # continue 00:04:47.342 00:38:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.342 00:38:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.342 00:38:59 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.342 00:38:59 -- setup/common.sh@32 -- # continue 00:04:47.342 00:38:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.342 00:38:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.342 00:38:59 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.342 00:38:59 -- setup/common.sh@32 -- # continue 00:04:47.342 00:38:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.342 00:38:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.342 00:38:59 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.342 00:38:59 -- setup/common.sh@33 -- # echo 0 00:04:47.342 00:38:59 -- setup/common.sh@33 -- # return 0 00:04:47.342 00:38:59 -- setup/hugepages.sh@99 -- # surp=0 00:04:47.342 00:38:59 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:47.342 00:38:59 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:47.342 00:38:59 -- setup/common.sh@18 -- # local node= 00:04:47.342 00:38:59 -- setup/common.sh@19 -- # local var val 00:04:47.343 00:38:59 -- setup/common.sh@20 -- # local mem_f mem 00:04:47.343 00:38:59 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:47.343 00:38:59 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:47.343 00:38:59 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:47.343 00:38:59 -- setup/common.sh@28 -- # mapfile -t mem 00:04:47.343 00:38:59 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:47.343 
00:38:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.343 00:38:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.343 00:38:59 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239120 kB' 'MemFree: 6514272 kB' 'MemAvailable: 9443784 kB' 'Buffers: 2684 kB' 'Cached: 3130148 kB' 'SwapCached: 0 kB' 'Active: 497368 kB' 'Inactive: 2753176 kB' 'Active(anon): 128200 kB' 'Inactive(anon): 0 kB' 'Active(file): 369168 kB' 'Inactive(file): 2753176 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 328 kB' 'Writeback: 0 kB' 'AnonPages: 119288 kB' 'Mapped: 50948 kB' 'Shmem: 10488 kB' 'KReclaimable: 88228 kB' 'Slab: 190636 kB' 'SReclaimable: 88228 kB' 'SUnreclaim: 102408 kB' 'KernelStack: 6784 kB' 'PageTables: 4232 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459588 kB' 'Committed_AS: 320412 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55544 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 196460 kB' 'DirectMap2M: 6094848 kB' 'DirectMap1G: 8388608 kB' 00:04:47.343 00:38:59 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:47.343 00:38:59 -- setup/common.sh@32 -- # continue 00:04:47.343 00:38:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.343 00:38:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.343 00:38:59 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:47.343 00:38:59 -- setup/common.sh@32 -- # continue 00:04:47.343 00:38:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.343 00:38:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.343 00:38:59 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:47.343 00:38:59 -- setup/common.sh@32 -- # continue 00:04:47.343 00:38:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.343 00:38:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.343 00:38:59 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:47.343 00:38:59 -- setup/common.sh@32 -- # continue 00:04:47.343 00:38:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.343 00:38:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.343 00:38:59 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:47.343 00:38:59 -- setup/common.sh@32 -- # continue 00:04:47.343 00:38:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.343 00:38:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.343 00:38:59 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:47.343 00:38:59 -- setup/common.sh@32 -- # continue 00:04:47.343 00:38:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.343 00:38:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.343 00:38:59 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:47.343 00:38:59 -- setup/common.sh@32 -- # continue 00:04:47.343 00:38:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.343 00:38:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.343 00:38:59 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:47.343 00:38:59 -- setup/common.sh@32 -- # continue 00:04:47.343 
00:38:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.343 00:38:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.343 00:38:59 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:47.343 00:38:59 -- setup/common.sh@32 -- # continue 00:04:47.343 00:38:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.343 00:38:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.343 00:38:59 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:47.343 00:38:59 -- setup/common.sh@32 -- # continue 00:04:47.343 00:38:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.343 00:38:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.343 00:38:59 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:47.343 00:38:59 -- setup/common.sh@32 -- # continue 00:04:47.343 00:38:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.343 00:38:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.343 00:38:59 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:47.343 00:38:59 -- setup/common.sh@32 -- # continue 00:04:47.343 00:38:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.343 00:38:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.343 00:38:59 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:47.343 00:38:59 -- setup/common.sh@32 -- # continue 00:04:47.343 00:38:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.343 00:38:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.343 00:38:59 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:47.343 00:38:59 -- setup/common.sh@32 -- # continue 00:04:47.343 00:38:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.343 00:38:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.343 00:38:59 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:47.343 00:38:59 -- setup/common.sh@32 -- # continue 00:04:47.343 00:38:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.343 00:38:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.343 00:38:59 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:47.343 00:38:59 -- setup/common.sh@32 -- # continue 00:04:47.343 00:38:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.343 00:38:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.343 00:38:59 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:47.343 00:38:59 -- setup/common.sh@32 -- # continue 00:04:47.343 00:38:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.343 00:38:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.343 00:38:59 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:47.343 00:38:59 -- setup/common.sh@32 -- # continue 00:04:47.343 00:38:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.343 00:38:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.343 00:38:59 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:47.343 00:38:59 -- setup/common.sh@32 -- # continue 00:04:47.343 00:38:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.343 00:38:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.343 00:38:59 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:47.343 00:38:59 -- setup/common.sh@32 -- # continue 00:04:47.343 00:38:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.343 00:38:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.343 00:38:59 -- setup/common.sh@32 -- # [[ AnonPages == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:47.343 00:38:59 -- setup/common.sh@32 -- # continue 00:04:47.343 00:38:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.343 00:38:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.343 00:38:59 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:47.343 00:38:59 -- setup/common.sh@32 -- # continue 00:04:47.343 00:38:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.343 00:38:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.343 00:38:59 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:47.343 00:38:59 -- setup/common.sh@32 -- # continue 00:04:47.343 00:38:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.343 00:38:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.343 00:38:59 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:47.343 00:38:59 -- setup/common.sh@32 -- # continue 00:04:47.343 00:38:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.343 00:38:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.343 00:38:59 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:47.343 00:38:59 -- setup/common.sh@32 -- # continue 00:04:47.343 00:38:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.343 00:38:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.343 00:38:59 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:47.343 00:38:59 -- setup/common.sh@32 -- # continue 00:04:47.343 00:38:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.343 00:38:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.343 00:38:59 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:47.343 00:38:59 -- setup/common.sh@32 -- # continue 00:04:47.343 00:38:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.343 00:38:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.343 00:38:59 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:47.343 00:38:59 -- setup/common.sh@32 -- # continue 00:04:47.343 00:38:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.343 00:38:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.343 00:38:59 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:47.343 00:38:59 -- setup/common.sh@32 -- # continue 00:04:47.343 00:38:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.343 00:38:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.343 00:38:59 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:47.343 00:38:59 -- setup/common.sh@32 -- # continue 00:04:47.343 00:38:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.343 00:38:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.343 00:38:59 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:47.343 00:38:59 -- setup/common.sh@32 -- # continue 00:04:47.343 00:38:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.343 00:38:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.343 00:38:59 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:47.343 00:38:59 -- setup/common.sh@32 -- # continue 00:04:47.343 00:38:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.343 00:38:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.343 00:38:59 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:47.343 00:38:59 -- setup/common.sh@32 -- # continue 00:04:47.343 00:38:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.343 00:38:59 -- 
setup/common.sh@31 -- # read -r var val _ 00:04:47.343 00:38:59 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:47.343 00:38:59 -- setup/common.sh@32 -- # continue 00:04:47.343 00:38:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.343 00:38:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.343 00:38:59 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:47.344 00:38:59 -- setup/common.sh@32 -- # continue 00:04:47.344 00:38:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.344 00:38:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.344 00:38:59 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:47.344 00:38:59 -- setup/common.sh@32 -- # continue 00:04:47.344 00:38:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.344 00:38:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.344 00:38:59 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:47.344 00:38:59 -- setup/common.sh@32 -- # continue 00:04:47.344 00:38:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.344 00:38:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.344 00:38:59 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:47.344 00:38:59 -- setup/common.sh@32 -- # continue 00:04:47.344 00:38:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.344 00:38:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.344 00:38:59 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:47.344 00:38:59 -- setup/common.sh@32 -- # continue 00:04:47.344 00:38:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.344 00:38:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.344 00:38:59 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:47.344 00:38:59 -- setup/common.sh@32 -- # continue 00:04:47.344 00:38:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.344 00:38:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.344 00:38:59 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:47.344 00:38:59 -- setup/common.sh@32 -- # continue 00:04:47.344 00:38:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.344 00:38:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.344 00:38:59 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:47.344 00:38:59 -- setup/common.sh@32 -- # continue 00:04:47.344 00:38:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.344 00:38:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.344 00:38:59 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:47.344 00:38:59 -- setup/common.sh@32 -- # continue 00:04:47.344 00:38:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.344 00:38:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.344 00:38:59 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:47.344 00:38:59 -- setup/common.sh@32 -- # continue 00:04:47.344 00:38:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.344 00:38:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.344 00:38:59 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:47.344 00:38:59 -- setup/common.sh@32 -- # continue 00:04:47.344 00:38:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.344 00:38:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.344 00:38:59 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:47.344 
00:38:59 -- setup/common.sh@32 -- # continue 00:04:47.344 00:38:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.344 00:38:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.344 00:38:59 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:47.344 00:38:59 -- setup/common.sh@32 -- # continue 00:04:47.344 00:38:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.344 00:38:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.344 00:38:59 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:47.344 00:38:59 -- setup/common.sh@32 -- # continue 00:04:47.344 00:38:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.344 00:38:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.344 00:38:59 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:47.344 00:38:59 -- setup/common.sh@32 -- # continue 00:04:47.344 00:38:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.344 00:38:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.344 00:38:59 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:47.344 00:38:59 -- setup/common.sh@32 -- # continue 00:04:47.344 00:38:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.344 00:38:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.344 00:38:59 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:47.344 00:38:59 -- setup/common.sh@33 -- # echo 0 00:04:47.344 00:38:59 -- setup/common.sh@33 -- # return 0 00:04:47.344 00:38:59 -- setup/hugepages.sh@100 -- # resv=0 00:04:47.344 nr_hugepages=1024 00:04:47.344 00:38:59 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:04:47.344 resv_hugepages=0 00:04:47.344 00:38:59 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:47.344 surplus_hugepages=0 00:04:47.344 00:38:59 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:47.344 anon_hugepages=0 00:04:47.344 00:38:59 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:47.344 00:38:59 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:47.344 00:38:59 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:04:47.344 00:38:59 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:47.344 00:38:59 -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:47.344 00:38:59 -- setup/common.sh@18 -- # local node= 00:04:47.344 00:38:59 -- setup/common.sh@19 -- # local var val 00:04:47.344 00:38:59 -- setup/common.sh@20 -- # local mem_f mem 00:04:47.344 00:38:59 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:47.344 00:38:59 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:47.344 00:38:59 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:47.344 00:38:59 -- setup/common.sh@28 -- # mapfile -t mem 00:04:47.344 00:38:59 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:47.344 00:38:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.344 00:38:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.344 00:38:59 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239120 kB' 'MemFree: 6514272 kB' 'MemAvailable: 9443784 kB' 'Buffers: 2684 kB' 'Cached: 3130148 kB' 'SwapCached: 0 kB' 'Active: 497308 kB' 'Inactive: 2753176 kB' 'Active(anon): 128140 kB' 'Inactive(anon): 0 kB' 'Active(file): 369168 kB' 'Inactive(file): 2753176 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 328 kB' 'Writeback: 0 kB' 'AnonPages: 119280 kB' 'Mapped: 50948 kB' 
'Shmem: 10488 kB' 'KReclaimable: 88228 kB' 'Slab: 190636 kB' 'SReclaimable: 88228 kB' 'SUnreclaim: 102408 kB' 'KernelStack: 6816 kB' 'PageTables: 4320 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459588 kB' 'Committed_AS: 320412 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55544 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 196460 kB' 'DirectMap2M: 6094848 kB' 'DirectMap1G: 8388608 kB' 00:04:47.344 00:38:59 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:47.344 00:38:59 -- setup/common.sh@32 -- # continue 00:04:47.344 00:38:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.344 00:38:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.344 00:38:59 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:47.344 00:38:59 -- setup/common.sh@32 -- # continue 00:04:47.344 00:38:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.344 00:38:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.344 00:38:59 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:47.344 00:38:59 -- setup/common.sh@32 -- # continue 00:04:47.344 00:38:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.344 00:38:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.344 00:38:59 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:47.344 00:38:59 -- setup/common.sh@32 -- # continue 00:04:47.344 00:38:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.344 00:38:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.344 00:38:59 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:47.344 00:38:59 -- setup/common.sh@32 -- # continue 00:04:47.344 00:38:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.344 00:38:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.344 00:38:59 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:47.344 00:38:59 -- setup/common.sh@32 -- # continue 00:04:47.344 00:38:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.344 00:38:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.344 00:38:59 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:47.344 00:38:59 -- setup/common.sh@32 -- # continue 00:04:47.344 00:38:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.344 00:38:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.344 00:38:59 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:47.344 00:38:59 -- setup/common.sh@32 -- # continue 00:04:47.344 00:38:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.344 00:38:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.344 00:38:59 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:47.344 00:38:59 -- setup/common.sh@32 -- # continue 00:04:47.344 00:38:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.344 00:38:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.344 00:38:59 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:47.345 00:38:59 -- setup/common.sh@32 -- # continue 00:04:47.345 00:38:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.345 
00:38:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.345 00:38:59 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:47.345 00:38:59 -- setup/common.sh@32 -- # continue 00:04:47.345 00:38:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.345 00:38:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.345 00:38:59 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:47.345 00:38:59 -- setup/common.sh@32 -- # continue 00:04:47.345 00:38:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.345 00:38:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.345 00:38:59 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:47.345 00:38:59 -- setup/common.sh@32 -- # continue 00:04:47.345 00:38:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.345 00:38:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.345 00:38:59 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:47.345 00:38:59 -- setup/common.sh@32 -- # continue 00:04:47.345 00:38:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.345 00:38:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.345 00:38:59 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:47.345 00:38:59 -- setup/common.sh@32 -- # continue 00:04:47.345 00:38:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.345 00:38:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.345 00:38:59 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:47.345 00:38:59 -- setup/common.sh@32 -- # continue 00:04:47.345 00:38:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.345 00:38:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.345 00:38:59 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:47.345 00:38:59 -- setup/common.sh@32 -- # continue 00:04:47.345 00:38:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.345 00:38:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.345 00:38:59 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:47.345 00:38:59 -- setup/common.sh@32 -- # continue 00:04:47.345 00:38:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.345 00:38:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.345 00:38:59 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:47.345 00:38:59 -- setup/common.sh@32 -- # continue 00:04:47.345 00:38:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.345 00:38:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.345 00:38:59 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:47.345 00:38:59 -- setup/common.sh@32 -- # continue 00:04:47.345 00:38:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.345 00:38:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.345 00:38:59 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:47.345 00:38:59 -- setup/common.sh@32 -- # continue 00:04:47.345 00:38:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.345 00:38:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.345 00:38:59 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:47.345 00:38:59 -- setup/common.sh@32 -- # continue 00:04:47.345 00:38:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.345 00:38:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.345 00:38:59 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:47.345 00:38:59 -- 
setup/common.sh@32 -- # continue 00:04:47.345 00:38:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.345 00:38:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.345 00:38:59 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:47.345 00:38:59 -- setup/common.sh@32 -- # continue 00:04:47.345 00:38:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.345 00:38:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.345 00:38:59 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:47.345 00:38:59 -- setup/common.sh@32 -- # continue 00:04:47.345 00:38:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.345 00:38:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.345 00:38:59 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:47.345 00:38:59 -- setup/common.sh@32 -- # continue 00:04:47.345 00:38:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.345 00:38:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.345 00:38:59 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:47.345 00:38:59 -- setup/common.sh@32 -- # continue 00:04:47.345 00:38:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.345 00:38:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.345 00:38:59 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:47.345 00:38:59 -- setup/common.sh@32 -- # continue 00:04:47.345 00:38:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.345 00:38:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.345 00:38:59 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:47.345 00:38:59 -- setup/common.sh@32 -- # continue 00:04:47.345 00:38:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.345 00:38:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.345 00:38:59 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:47.345 00:38:59 -- setup/common.sh@32 -- # continue 00:04:47.345 00:38:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.345 00:38:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.345 00:38:59 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:47.345 00:38:59 -- setup/common.sh@32 -- # continue 00:04:47.345 00:38:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.345 00:38:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.345 00:38:59 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:47.345 00:38:59 -- setup/common.sh@32 -- # continue 00:04:47.345 00:38:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.345 00:38:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.345 00:38:59 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:47.345 00:38:59 -- setup/common.sh@32 -- # continue 00:04:47.345 00:38:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.345 00:38:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.345 00:38:59 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:47.345 00:38:59 -- setup/common.sh@32 -- # continue 00:04:47.345 00:38:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.345 00:38:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.345 00:38:59 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:47.345 00:38:59 -- setup/common.sh@32 -- # continue 00:04:47.345 00:38:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.345 00:38:59 -- setup/common.sh@31 -- # read -r var 
val _ 00:04:47.345 00:38:59 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:47.345 00:38:59 -- setup/common.sh@32 -- # continue 00:04:47.345 00:38:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.345 00:38:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.345 00:38:59 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:47.345 00:38:59 -- setup/common.sh@32 -- # continue 00:04:47.345 00:38:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.345 00:38:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.345 00:38:59 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:47.345 00:38:59 -- setup/common.sh@32 -- # continue 00:04:47.345 00:38:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.345 00:38:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.345 00:38:59 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:47.345 00:38:59 -- setup/common.sh@32 -- # continue 00:04:47.345 00:38:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.345 00:38:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.345 00:38:59 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:47.345 00:38:59 -- setup/common.sh@32 -- # continue 00:04:47.345 00:38:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.345 00:38:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.345 00:38:59 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:47.345 00:38:59 -- setup/common.sh@32 -- # continue 00:04:47.345 00:38:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.345 00:38:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.345 00:38:59 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:47.345 00:38:59 -- setup/common.sh@32 -- # continue 00:04:47.345 00:38:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.345 00:38:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.345 00:38:59 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:47.345 00:38:59 -- setup/common.sh@32 -- # continue 00:04:47.345 00:38:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.345 00:38:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.345 00:38:59 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:47.345 00:38:59 -- setup/common.sh@32 -- # continue 00:04:47.345 00:38:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.345 00:38:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.345 00:38:59 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:47.345 00:38:59 -- setup/common.sh@32 -- # continue 00:04:47.345 00:38:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.345 00:38:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.345 00:38:59 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:47.345 00:38:59 -- setup/common.sh@32 -- # continue 00:04:47.345 00:38:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.345 00:38:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.345 00:38:59 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:47.345 00:38:59 -- setup/common.sh@32 -- # continue 00:04:47.345 00:38:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.345 00:38:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.345 00:38:59 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:47.345 00:38:59 -- 
setup/common.sh@32 -- # continue 00:04:47.345 00:38:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.345 00:38:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.346 00:38:59 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:47.346 00:38:59 -- setup/common.sh@33 -- # echo 1024 00:04:47.346 00:38:59 -- setup/common.sh@33 -- # return 0 00:04:47.346 00:38:59 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:47.346 00:38:59 -- setup/hugepages.sh@112 -- # get_nodes 00:04:47.346 00:38:59 -- setup/hugepages.sh@27 -- # local node 00:04:47.346 00:38:59 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:47.346 00:38:59 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:04:47.346 00:38:59 -- setup/hugepages.sh@32 -- # no_nodes=1 00:04:47.346 00:38:59 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:47.346 00:38:59 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:47.346 00:38:59 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:47.346 00:38:59 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:47.346 00:38:59 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:47.346 00:38:59 -- setup/common.sh@18 -- # local node=0 00:04:47.346 00:38:59 -- setup/common.sh@19 -- # local var val 00:04:47.346 00:38:59 -- setup/common.sh@20 -- # local mem_f mem 00:04:47.346 00:38:59 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:47.346 00:38:59 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:47.346 00:38:59 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:47.346 00:38:59 -- setup/common.sh@28 -- # mapfile -t mem 00:04:47.346 00:38:59 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:47.346 00:38:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.346 00:38:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.346 00:38:59 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239120 kB' 'MemFree: 6514272 kB' 'MemUsed: 5724848 kB' 'SwapCached: 0 kB' 'Active: 497284 kB' 'Inactive: 2753176 kB' 'Active(anon): 128116 kB' 'Inactive(anon): 0 kB' 'Active(file): 369168 kB' 'Inactive(file): 2753176 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 328 kB' 'Writeback: 0 kB' 'FilePages: 3132832 kB' 'Mapped: 50948 kB' 'AnonPages: 119248 kB' 'Shmem: 10488 kB' 'KernelStack: 6800 kB' 'PageTables: 4276 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 88228 kB' 'Slab: 190632 kB' 'SReclaimable: 88228 kB' 'SUnreclaim: 102404 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:04:47.346 00:38:59 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.346 00:38:59 -- setup/common.sh@32 -- # continue 00:04:47.346 00:38:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.346 00:38:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.346 00:38:59 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.346 00:38:59 -- setup/common.sh@32 -- # continue 00:04:47.346 00:38:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.346 00:38:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.346 00:38:59 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.346 00:38:59 -- setup/common.sh@32 -- # continue 
00:04:47.346 00:38:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.346 00:38:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.346 00:38:59 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.346 00:38:59 -- setup/common.sh@32 -- # continue 00:04:47.346 00:38:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.346 00:38:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.346 00:38:59 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.346 00:38:59 -- setup/common.sh@32 -- # continue 00:04:47.346 00:38:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.346 00:38:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.346 00:38:59 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.346 00:38:59 -- setup/common.sh@32 -- # continue 00:04:47.346 00:38:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.346 00:38:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.346 00:38:59 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.346 00:38:59 -- setup/common.sh@32 -- # continue 00:04:47.346 00:38:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.346 00:38:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.346 00:38:59 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.346 00:38:59 -- setup/common.sh@32 -- # continue 00:04:47.346 00:38:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.346 00:38:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.346 00:38:59 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.346 00:38:59 -- setup/common.sh@32 -- # continue 00:04:47.346 00:38:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.346 00:38:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.346 00:38:59 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.346 00:38:59 -- setup/common.sh@32 -- # continue 00:04:47.346 00:38:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.346 00:38:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.346 00:38:59 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.346 00:38:59 -- setup/common.sh@32 -- # continue 00:04:47.346 00:38:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.346 00:38:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.346 00:38:59 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.346 00:38:59 -- setup/common.sh@32 -- # continue 00:04:47.346 00:38:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.346 00:38:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.346 00:38:59 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.346 00:38:59 -- setup/common.sh@32 -- # continue 00:04:47.346 00:38:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.346 00:38:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.346 00:38:59 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.346 00:38:59 -- setup/common.sh@32 -- # continue 00:04:47.346 00:38:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.346 00:38:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.346 00:38:59 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.346 00:38:59 -- setup/common.sh@32 -- # continue 00:04:47.346 00:38:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.346 00:38:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.346 00:38:59 -- setup/common.sh@32 -- # [[ Mapped 
== \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.346 00:38:59 -- setup/common.sh@32 -- # continue 00:04:47.346 00:38:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.346 00:38:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.346 00:38:59 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.346 00:38:59 -- setup/common.sh@32 -- # continue 00:04:47.346 00:38:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.346 00:38:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.346 00:38:59 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.346 00:38:59 -- setup/common.sh@32 -- # continue 00:04:47.346 00:38:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.346 00:38:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.346 00:38:59 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.346 00:38:59 -- setup/common.sh@32 -- # continue 00:04:47.346 00:38:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.346 00:38:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.346 00:38:59 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.346 00:38:59 -- setup/common.sh@32 -- # continue 00:04:47.605 00:38:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.605 00:38:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.605 00:38:59 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.605 00:38:59 -- setup/common.sh@32 -- # continue 00:04:47.605 00:38:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.605 00:38:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.606 00:38:59 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.606 00:38:59 -- setup/common.sh@32 -- # continue 00:04:47.606 00:38:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.606 00:38:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.606 00:38:59 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.606 00:38:59 -- setup/common.sh@32 -- # continue 00:04:47.606 00:38:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.606 00:38:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.606 00:38:59 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.606 00:38:59 -- setup/common.sh@32 -- # continue 00:04:47.606 00:38:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.606 00:38:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.606 00:38:59 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.606 00:38:59 -- setup/common.sh@32 -- # continue 00:04:47.606 00:38:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.606 00:38:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.606 00:38:59 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.606 00:38:59 -- setup/common.sh@32 -- # continue 00:04:47.606 00:38:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.606 00:38:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.606 00:38:59 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.606 00:38:59 -- setup/common.sh@32 -- # continue 00:04:47.606 00:38:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.606 00:38:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.606 00:38:59 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.606 00:38:59 -- setup/common.sh@32 -- # continue 00:04:47.606 00:38:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.606 00:38:59 -- 
setup/common.sh@31 -- # read -r var val _ 00:04:47.606 00:38:59 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.606 00:38:59 -- setup/common.sh@32 -- # continue 00:04:47.606 00:38:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.606 00:38:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.606 00:38:59 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.606 00:38:59 -- setup/common.sh@32 -- # continue 00:04:47.606 00:38:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.606 00:38:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.606 00:38:59 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.606 00:38:59 -- setup/common.sh@32 -- # continue 00:04:47.606 00:38:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.606 00:38:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.606 00:38:59 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.606 00:38:59 -- setup/common.sh@32 -- # continue 00:04:47.606 00:38:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.606 00:38:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.606 00:38:59 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.606 00:38:59 -- setup/common.sh@32 -- # continue 00:04:47.606 00:38:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.606 00:38:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.606 00:38:59 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.606 00:38:59 -- setup/common.sh@32 -- # continue 00:04:47.606 00:38:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.606 00:38:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.606 00:38:59 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.606 00:38:59 -- setup/common.sh@32 -- # continue 00:04:47.606 00:38:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.606 00:38:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.606 00:38:59 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.606 00:38:59 -- setup/common.sh@32 -- # continue 00:04:47.606 00:38:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.606 00:38:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.606 00:38:59 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.606 00:38:59 -- setup/common.sh@33 -- # echo 0 00:04:47.606 00:38:59 -- setup/common.sh@33 -- # return 0 00:04:47.606 00:38:59 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:47.606 00:38:59 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:47.606 00:38:59 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:47.606 00:38:59 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:47.606 node0=1024 expecting 1024 00:04:47.606 00:38:59 -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:04:47.606 00:38:59 -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:04:47.606 00:04:47.606 real 0m1.064s 00:04:47.606 user 0m0.522s 00:04:47.606 sys 0m0.486s 00:04:47.606 00:38:59 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:04:47.606 00:38:59 -- common/autotest_common.sh@10 -- # set +x 00:04:47.606 ************************************ 00:04:47.606 END TEST default_setup 00:04:47.606 ************************************ 00:04:47.606 00:38:59 -- setup/hugepages.sh@211 -- # run_test per_node_1G_alloc per_node_1G_alloc 00:04:47.606 00:38:59 
-- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:47.606 00:38:59 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:47.606 00:38:59 -- common/autotest_common.sh@10 -- # set +x 00:04:47.606 ************************************ 00:04:47.606 START TEST per_node_1G_alloc 00:04:47.606 ************************************ 00:04:47.606 00:38:59 -- common/autotest_common.sh@1114 -- # per_node_1G_alloc 00:04:47.606 00:38:59 -- setup/hugepages.sh@143 -- # local IFS=, 00:04:47.606 00:38:59 -- setup/hugepages.sh@145 -- # get_test_nr_hugepages 1048576 0 00:04:47.606 00:38:59 -- setup/hugepages.sh@49 -- # local size=1048576 00:04:47.606 00:38:59 -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:04:47.606 00:38:59 -- setup/hugepages.sh@51 -- # shift 00:04:47.606 00:38:59 -- setup/hugepages.sh@52 -- # node_ids=('0') 00:04:47.606 00:38:59 -- setup/hugepages.sh@52 -- # local node_ids 00:04:47.606 00:38:59 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:04:47.606 00:38:59 -- setup/hugepages.sh@57 -- # nr_hugepages=512 00:04:47.606 00:38:59 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:04:47.606 00:38:59 -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:04:47.606 00:38:59 -- setup/hugepages.sh@62 -- # local user_nodes 00:04:47.606 00:38:59 -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:04:47.606 00:38:59 -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:04:47.606 00:38:59 -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:47.606 00:38:59 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:47.606 00:38:59 -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:04:47.606 00:38:59 -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:04:47.606 00:38:59 -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512 00:04:47.606 00:38:59 -- setup/hugepages.sh@73 -- # return 0 00:04:47.606 00:38:59 -- setup/hugepages.sh@146 -- # NRHUGE=512 00:04:47.606 00:38:59 -- setup/hugepages.sh@146 -- # HUGENODE=0 00:04:47.606 00:38:59 -- setup/hugepages.sh@146 -- # setup output 00:04:47.606 00:38:59 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:47.606 00:38:59 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:04:47.867 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:47.867 0000:00:06.0 (1b36 0010): Already using the uio_pci_generic driver 00:04:47.867 0000:00:07.0 (1b36 0010): Already using the uio_pci_generic driver 00:04:47.867 00:39:00 -- setup/hugepages.sh@147 -- # nr_hugepages=512 00:04:47.867 00:39:00 -- setup/hugepages.sh@147 -- # verify_nr_hugepages 00:04:47.867 00:39:00 -- setup/hugepages.sh@89 -- # local node 00:04:47.867 00:39:00 -- setup/hugepages.sh@90 -- # local sorted_t 00:04:47.867 00:39:00 -- setup/hugepages.sh@91 -- # local sorted_s 00:04:47.867 00:39:00 -- setup/hugepages.sh@92 -- # local surp 00:04:47.867 00:39:00 -- setup/hugepages.sh@93 -- # local resv 00:04:47.867 00:39:00 -- setup/hugepages.sh@94 -- # local anon 00:04:47.867 00:39:00 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:47.867 00:39:00 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:47.867 00:39:00 -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:47.867 00:39:00 -- setup/common.sh@18 -- # local node= 00:04:47.867 00:39:00 -- setup/common.sh@19 -- # local var val 00:04:47.867 00:39:00 -- setup/common.sh@20 -- # local mem_f mem 00:04:47.867 00:39:00 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:47.867 00:39:00 -- 
setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:47.867 00:39:00 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:47.867 00:39:00 -- setup/common.sh@28 -- # mapfile -t mem 00:04:47.867 00:39:00 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:47.867 00:39:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.867 00:39:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.867 00:39:00 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239120 kB' 'MemFree: 7567920 kB' 'MemAvailable: 10497432 kB' 'Buffers: 2684 kB' 'Cached: 3130148 kB' 'SwapCached: 0 kB' 'Active: 497580 kB' 'Inactive: 2753176 kB' 'Active(anon): 128412 kB' 'Inactive(anon): 0 kB' 'Active(file): 369168 kB' 'Inactive(file): 2753176 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 328 kB' 'Writeback: 0 kB' 'AnonPages: 119592 kB' 'Mapped: 51076 kB' 'Shmem: 10488 kB' 'KReclaimable: 88228 kB' 'Slab: 190596 kB' 'SReclaimable: 88228 kB' 'SUnreclaim: 102368 kB' 'KernelStack: 6788 kB' 'PageTables: 4320 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13983876 kB' 'Committed_AS: 320412 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55528 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 196460 kB' 'DirectMap2M: 6094848 kB' 'DirectMap1G: 8388608 kB' 00:04:47.867 00:39:00 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:47.867 00:39:00 -- setup/common.sh@32 -- # continue 00:04:47.867 00:39:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.867 00:39:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.867 00:39:00 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:47.867 00:39:00 -- setup/common.sh@32 -- # continue 00:04:47.867 00:39:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.867 00:39:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.867 00:39:00 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:47.867 00:39:00 -- setup/common.sh@32 -- # continue 00:04:47.867 00:39:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.867 00:39:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.867 00:39:00 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:47.867 00:39:00 -- setup/common.sh@32 -- # continue 00:04:47.867 00:39:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.867 00:39:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.867 00:39:00 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:47.867 00:39:00 -- setup/common.sh@32 -- # continue 00:04:47.867 00:39:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.867 00:39:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.867 00:39:00 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:47.867 00:39:00 -- setup/common.sh@32 -- # continue 00:04:47.867 00:39:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.867 00:39:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.867 00:39:00 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:47.867 00:39:00 -- setup/common.sh@32 -- # continue 00:04:47.867 00:39:00 
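Another note on the trace that follows: per_node_1G_alloc has just asked scripts/setup.sh for 512 x 2048 kB pages pinned to NUMA node 0 (NRHUGE=512 HUGENODE=0), and verify_nr_hugepages is now re-reading the /proc/meminfo dump above to pull AnonHugePages, HugePages_Surp and HugePages_Rsvd. Roughly, the bookkeeping being traced looks like the sketch below; variable names follow the trace, "got" is purely illustrative, and the real setup/hugepages.sh funnels the per-node comparison through sorted_t/sorted_s sets rather than comparing directly:

    # relies on the get_meminfo sketch above; parameters mirror this test
    nr_hugepages=512                          # NRHUGE=512, all on HUGENODE=0
    nodes_test=([0]=512)                      # expected pages per node
    anon=$(get_meminfo AnonHugePages)         # only consulted when THP is not "never"
    surp=$(get_meminfo HugePages_Surp)        # 0 in the dump above
    resv=$(get_meminfo HugePages_Rsvd)        # 0
    (( $(get_meminfo HugePages_Total) == nr_hugepages + surp + resv )) || exit 1
    for node in "${!nodes_test[@]}"; do
        got=$(get_meminfo HugePages_Total "$node")
        echo "node$node=$got expecting ${nodes_test[node]}"
        [[ $got == "${nodes_test[node]}" ]] || exit 1
    done

The "node0=1024 expecting 1024" line in the previous default_setup test came out of this kind of check; the same verification now runs against the 512-page allocation.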
-- setup/common.sh@31 -- # IFS=': ' 00:04:47.867 00:39:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.867 00:39:00 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:47.867 00:39:00 -- setup/common.sh@32 -- # continue 00:04:47.867 00:39:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.867 00:39:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.867 00:39:00 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:47.867 00:39:00 -- setup/common.sh@32 -- # continue 00:04:47.867 00:39:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.867 00:39:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.867 00:39:00 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:47.867 00:39:00 -- setup/common.sh@32 -- # continue 00:04:47.867 00:39:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.867 00:39:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.867 00:39:00 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:47.867 00:39:00 -- setup/common.sh@32 -- # continue 00:04:47.867 00:39:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.867 00:39:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.867 00:39:00 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:47.867 00:39:00 -- setup/common.sh@32 -- # continue 00:04:47.867 00:39:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.867 00:39:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.867 00:39:00 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:47.867 00:39:00 -- setup/common.sh@32 -- # continue 00:04:47.867 00:39:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.867 00:39:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.867 00:39:00 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:47.867 00:39:00 -- setup/common.sh@32 -- # continue 00:04:47.867 00:39:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.867 00:39:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.867 00:39:00 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:47.867 00:39:00 -- setup/common.sh@32 -- # continue 00:04:47.867 00:39:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.867 00:39:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.867 00:39:00 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:47.867 00:39:00 -- setup/common.sh@32 -- # continue 00:04:47.867 00:39:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.867 00:39:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.867 00:39:00 -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:47.867 00:39:00 -- setup/common.sh@32 -- # continue 00:04:47.867 00:39:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.867 00:39:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.867 00:39:00 -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:47.867 00:39:00 -- setup/common.sh@32 -- # continue 00:04:47.867 00:39:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.867 00:39:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.867 00:39:00 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:47.867 00:39:00 -- setup/common.sh@32 -- # continue 00:04:47.867 00:39:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.867 00:39:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.867 00:39:00 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:47.867 
00:39:00 -- setup/common.sh@32 -- # continue 00:04:47.867 00:39:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.867 00:39:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.867 00:39:00 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:47.867 00:39:00 -- setup/common.sh@32 -- # continue 00:04:47.867 00:39:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.867 00:39:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.867 00:39:00 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:47.867 00:39:00 -- setup/common.sh@32 -- # continue 00:04:47.867 00:39:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.867 00:39:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.868 00:39:00 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:47.868 00:39:00 -- setup/common.sh@32 -- # continue 00:04:47.868 00:39:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.868 00:39:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.868 00:39:00 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:47.868 00:39:00 -- setup/common.sh@32 -- # continue 00:04:47.868 00:39:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.868 00:39:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.868 00:39:00 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:47.868 00:39:00 -- setup/common.sh@32 -- # continue 00:04:47.868 00:39:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.868 00:39:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.868 00:39:00 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:47.868 00:39:00 -- setup/common.sh@32 -- # continue 00:04:47.868 00:39:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.868 00:39:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.868 00:39:00 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:47.868 00:39:00 -- setup/common.sh@32 -- # continue 00:04:47.868 00:39:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.868 00:39:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.868 00:39:00 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:47.868 00:39:00 -- setup/common.sh@32 -- # continue 00:04:47.868 00:39:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.868 00:39:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.868 00:39:00 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:47.868 00:39:00 -- setup/common.sh@32 -- # continue 00:04:47.868 00:39:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.868 00:39:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.868 00:39:00 -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:47.868 00:39:00 -- setup/common.sh@32 -- # continue 00:04:47.868 00:39:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.868 00:39:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.868 00:39:00 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:47.868 00:39:00 -- setup/common.sh@32 -- # continue 00:04:47.868 00:39:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.868 00:39:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.868 00:39:00 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:47.868 00:39:00 -- setup/common.sh@32 -- # continue 00:04:47.868 00:39:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.868 00:39:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.868 00:39:00 -- setup/common.sh@32 -- 
# [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:47.868 00:39:00 -- setup/common.sh@32 -- # continue 00:04:47.868 00:39:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.868 00:39:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.868 00:39:00 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:47.868 00:39:00 -- setup/common.sh@32 -- # continue 00:04:47.868 00:39:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.868 00:39:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.868 00:39:00 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:47.868 00:39:00 -- setup/common.sh@32 -- # continue 00:04:47.868 00:39:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.868 00:39:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.868 00:39:00 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:47.868 00:39:00 -- setup/common.sh@32 -- # continue 00:04:47.868 00:39:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.868 00:39:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.868 00:39:00 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:47.868 00:39:00 -- setup/common.sh@32 -- # continue 00:04:47.868 00:39:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.868 00:39:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.868 00:39:00 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:47.868 00:39:00 -- setup/common.sh@32 -- # continue 00:04:47.868 00:39:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.868 00:39:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.868 00:39:00 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:47.868 00:39:00 -- setup/common.sh@32 -- # continue 00:04:47.868 00:39:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.868 00:39:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.868 00:39:00 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:47.868 00:39:00 -- setup/common.sh@32 -- # continue 00:04:47.868 00:39:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.868 00:39:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.868 00:39:00 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:47.868 00:39:00 -- setup/common.sh@33 -- # echo 0 00:04:47.868 00:39:00 -- setup/common.sh@33 -- # return 0 00:04:47.868 00:39:00 -- setup/hugepages.sh@97 -- # anon=0 00:04:47.868 00:39:00 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:47.868 00:39:00 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:47.868 00:39:00 -- setup/common.sh@18 -- # local node= 00:04:47.868 00:39:00 -- setup/common.sh@19 -- # local var val 00:04:47.868 00:39:00 -- setup/common.sh@20 -- # local mem_f mem 00:04:47.868 00:39:00 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:47.868 00:39:00 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:47.868 00:39:00 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:47.868 00:39:00 -- setup/common.sh@28 -- # mapfile -t mem 00:04:47.868 00:39:00 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:47.868 00:39:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.868 00:39:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.868 00:39:00 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239120 kB' 'MemFree: 7567920 kB' 'MemAvailable: 10497436 kB' 'Buffers: 2684 kB' 'Cached: 3130152 kB' 'SwapCached: 0 kB' 'Active: 497372 kB' 'Inactive: 2753180 kB' 
'Active(anon): 128204 kB' 'Inactive(anon): 0 kB' 'Active(file): 369168 kB' 'Inactive(file): 2753180 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 328 kB' 'Writeback: 0 kB' 'AnonPages: 119368 kB' 'Mapped: 50948 kB' 'Shmem: 10488 kB' 'KReclaimable: 88228 kB' 'Slab: 190596 kB' 'SReclaimable: 88228 kB' 'SUnreclaim: 102368 kB' 'KernelStack: 6816 kB' 'PageTables: 4340 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13983876 kB' 'Committed_AS: 320412 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55496 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 196460 kB' 'DirectMap2M: 6094848 kB' 'DirectMap1G: 8388608 kB' 00:04:47.868 00:39:00 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.868 00:39:00 -- setup/common.sh@32 -- # continue 00:04:47.868 00:39:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.868 00:39:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.868 00:39:00 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.868 00:39:00 -- setup/common.sh@32 -- # continue 00:04:47.868 00:39:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.868 00:39:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.868 00:39:00 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.868 00:39:00 -- setup/common.sh@32 -- # continue 00:04:47.868 00:39:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.868 00:39:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.868 00:39:00 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.868 00:39:00 -- setup/common.sh@32 -- # continue 00:04:47.868 00:39:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.868 00:39:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.868 00:39:00 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.868 00:39:00 -- setup/common.sh@32 -- # continue 00:04:47.868 00:39:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.868 00:39:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.868 00:39:00 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.868 00:39:00 -- setup/common.sh@32 -- # continue 00:04:47.868 00:39:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.868 00:39:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.868 00:39:00 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.868 00:39:00 -- setup/common.sh@32 -- # continue 00:04:47.868 00:39:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.868 00:39:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.868 00:39:00 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.868 00:39:00 -- setup/common.sh@32 -- # continue 00:04:47.868 00:39:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.868 00:39:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.868 00:39:00 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.868 00:39:00 -- setup/common.sh@32 -- # continue 00:04:47.868 00:39:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.868 00:39:00 -- 
setup/common.sh@31 -- # read -r var val _ 00:04:47.868 00:39:00 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.868 00:39:00 -- setup/common.sh@32 -- # continue 00:04:47.868 00:39:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.868 00:39:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.868 00:39:00 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.868 00:39:00 -- setup/common.sh@32 -- # continue 00:04:47.868 00:39:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.868 00:39:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.868 00:39:00 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.868 00:39:00 -- setup/common.sh@32 -- # continue 00:04:47.868 00:39:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.868 00:39:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.868 00:39:00 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.868 00:39:00 -- setup/common.sh@32 -- # continue 00:04:47.868 00:39:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.868 00:39:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.868 00:39:00 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.868 00:39:00 -- setup/common.sh@32 -- # continue 00:04:47.868 00:39:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.868 00:39:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.868 00:39:00 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.868 00:39:00 -- setup/common.sh@32 -- # continue 00:04:47.869 00:39:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.869 00:39:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.869 00:39:00 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.869 00:39:00 -- setup/common.sh@32 -- # continue 00:04:47.869 00:39:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.869 00:39:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.869 00:39:00 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.869 00:39:00 -- setup/common.sh@32 -- # continue 00:04:47.869 00:39:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.869 00:39:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.869 00:39:00 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.869 00:39:00 -- setup/common.sh@32 -- # continue 00:04:47.869 00:39:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.869 00:39:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.869 00:39:00 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.869 00:39:00 -- setup/common.sh@32 -- # continue 00:04:47.869 00:39:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.869 00:39:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.869 00:39:00 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.869 00:39:00 -- setup/common.sh@32 -- # continue 00:04:47.869 00:39:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.869 00:39:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.869 00:39:00 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.869 00:39:00 -- setup/common.sh@32 -- # continue 00:04:47.869 00:39:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.869 00:39:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.869 00:39:00 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.869 00:39:00 -- setup/common.sh@32 -- # 
continue 00:04:47.869 00:39:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.869 00:39:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.869 00:39:00 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.869 00:39:00 -- setup/common.sh@32 -- # continue 00:04:47.869 00:39:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.869 00:39:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.869 00:39:00 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.869 00:39:00 -- setup/common.sh@32 -- # continue 00:04:47.869 00:39:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.869 00:39:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.869 00:39:00 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.869 00:39:00 -- setup/common.sh@32 -- # continue 00:04:47.869 00:39:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.869 00:39:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.869 00:39:00 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.869 00:39:00 -- setup/common.sh@32 -- # continue 00:04:47.869 00:39:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.869 00:39:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.869 00:39:00 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.869 00:39:00 -- setup/common.sh@32 -- # continue 00:04:47.869 00:39:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.869 00:39:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.869 00:39:00 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.869 00:39:00 -- setup/common.sh@32 -- # continue 00:04:47.869 00:39:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.869 00:39:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.869 00:39:00 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.869 00:39:00 -- setup/common.sh@32 -- # continue 00:04:47.869 00:39:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.869 00:39:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.869 00:39:00 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.869 00:39:00 -- setup/common.sh@32 -- # continue 00:04:47.869 00:39:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.869 00:39:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.869 00:39:00 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.869 00:39:00 -- setup/common.sh@32 -- # continue 00:04:47.869 00:39:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.869 00:39:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.869 00:39:00 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.869 00:39:00 -- setup/common.sh@32 -- # continue 00:04:47.869 00:39:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.869 00:39:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.869 00:39:00 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.869 00:39:00 -- setup/common.sh@32 -- # continue 00:04:47.869 00:39:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.869 00:39:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.869 00:39:00 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.869 00:39:00 -- setup/common.sh@32 -- # continue 00:04:47.869 00:39:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.869 00:39:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.869 00:39:00 -- setup/common.sh@32 -- # 
[[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.869 00:39:00 -- setup/common.sh@32 -- # continue 00:04:47.869 00:39:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.869 00:39:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.869 00:39:00 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.869 00:39:00 -- setup/common.sh@32 -- # continue 00:04:47.869 00:39:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.869 00:39:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.869 00:39:00 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.869 00:39:00 -- setup/common.sh@32 -- # continue 00:04:47.869 00:39:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.869 00:39:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.869 00:39:00 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.869 00:39:00 -- setup/common.sh@32 -- # continue 00:04:47.869 00:39:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.869 00:39:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.869 00:39:00 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.869 00:39:00 -- setup/common.sh@32 -- # continue 00:04:47.869 00:39:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.869 00:39:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.869 00:39:00 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.869 00:39:00 -- setup/common.sh@32 -- # continue 00:04:47.869 00:39:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.869 00:39:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.869 00:39:00 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.869 00:39:00 -- setup/common.sh@32 -- # continue 00:04:47.869 00:39:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.869 00:39:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.869 00:39:00 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.869 00:39:00 -- setup/common.sh@32 -- # continue 00:04:47.869 00:39:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.869 00:39:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.869 00:39:00 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.869 00:39:00 -- setup/common.sh@32 -- # continue 00:04:47.869 00:39:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.869 00:39:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.869 00:39:00 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.869 00:39:00 -- setup/common.sh@32 -- # continue 00:04:47.869 00:39:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.869 00:39:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.869 00:39:00 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.869 00:39:00 -- setup/common.sh@32 -- # continue 00:04:47.869 00:39:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.869 00:39:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.869 00:39:00 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.869 00:39:00 -- setup/common.sh@32 -- # continue 00:04:47.869 00:39:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.869 00:39:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.869 00:39:00 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.869 00:39:00 -- setup/common.sh@32 -- # continue 00:04:47.869 00:39:00 -- setup/common.sh@31 -- # 
IFS=': ' 00:04:47.869 00:39:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.869 00:39:00 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.869 00:39:00 -- setup/common.sh@32 -- # continue 00:04:47.869 00:39:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.869 00:39:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.869 00:39:00 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.869 00:39:00 -- setup/common.sh@32 -- # continue 00:04:47.869 00:39:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.869 00:39:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.132 00:39:00 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.132 00:39:00 -- setup/common.sh@32 -- # continue 00:04:48.132 00:39:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.132 00:39:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.132 00:39:00 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.132 00:39:00 -- setup/common.sh@32 -- # continue 00:04:48.132 00:39:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.132 00:39:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.132 00:39:00 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.132 00:39:00 -- setup/common.sh@33 -- # echo 0 00:04:48.132 00:39:00 -- setup/common.sh@33 -- # return 0 00:04:48.132 00:39:00 -- setup/hugepages.sh@99 -- # surp=0 00:04:48.132 00:39:00 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:48.132 00:39:00 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:48.132 00:39:00 -- setup/common.sh@18 -- # local node= 00:04:48.132 00:39:00 -- setup/common.sh@19 -- # local var val 00:04:48.132 00:39:00 -- setup/common.sh@20 -- # local mem_f mem 00:04:48.132 00:39:00 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:48.132 00:39:00 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:48.132 00:39:00 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:48.132 00:39:00 -- setup/common.sh@28 -- # mapfile -t mem 00:04:48.132 00:39:00 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:48.132 00:39:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.132 00:39:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.132 00:39:00 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239120 kB' 'MemFree: 7567668 kB' 'MemAvailable: 10497184 kB' 'Buffers: 2684 kB' 'Cached: 3130152 kB' 'SwapCached: 0 kB' 'Active: 497296 kB' 'Inactive: 2753180 kB' 'Active(anon): 128128 kB' 'Inactive(anon): 0 kB' 'Active(file): 369168 kB' 'Inactive(file): 2753180 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 328 kB' 'Writeback: 0 kB' 'AnonPages: 119248 kB' 'Mapped: 50948 kB' 'Shmem: 10488 kB' 'KReclaimable: 88228 kB' 'Slab: 190596 kB' 'SReclaimable: 88228 kB' 'SUnreclaim: 102368 kB' 'KernelStack: 6800 kB' 'PageTables: 4292 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13983876 kB' 'Committed_AS: 320412 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55496 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 
1048576 kB' 'DirectMap4k: 196460 kB' 'DirectMap2M: 6094848 kB' 'DirectMap1G: 8388608 kB' 00:04:48.132 00:39:00 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:48.132 00:39:00 -- setup/common.sh@32 -- # continue 00:04:48.132 00:39:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.132 00:39:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.132 00:39:00 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:48.132 00:39:00 -- setup/common.sh@32 -- # continue 00:04:48.132 00:39:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.132 00:39:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.132 00:39:00 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:48.132 00:39:00 -- setup/common.sh@32 -- # continue 00:04:48.132 00:39:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.132 00:39:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.132 00:39:00 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:48.132 00:39:00 -- setup/common.sh@32 -- # continue 00:04:48.132 00:39:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.132 00:39:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.133 00:39:00 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:48.133 00:39:00 -- setup/common.sh@32 -- # continue 00:04:48.133 00:39:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.133 00:39:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.133 00:39:00 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:48.133 00:39:00 -- setup/common.sh@32 -- # continue 00:04:48.133 00:39:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.133 00:39:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.133 00:39:00 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:48.133 00:39:00 -- setup/common.sh@32 -- # continue 00:04:48.133 00:39:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.133 00:39:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.133 00:39:00 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:48.133 00:39:00 -- setup/common.sh@32 -- # continue 00:04:48.133 00:39:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.133 00:39:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.133 00:39:00 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:48.133 00:39:00 -- setup/common.sh@32 -- # continue 00:04:48.133 00:39:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.133 00:39:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.133 00:39:00 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:48.133 00:39:00 -- setup/common.sh@32 -- # continue 00:04:48.133 00:39:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.133 00:39:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.133 00:39:00 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:48.133 00:39:00 -- setup/common.sh@32 -- # continue 00:04:48.133 00:39:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.133 00:39:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.133 00:39:00 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:48.133 00:39:00 -- setup/common.sh@32 -- # continue 00:04:48.133 00:39:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.133 00:39:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.133 00:39:00 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d 
]] 00:04:48.133 00:39:00 -- setup/common.sh@32 -- # continue 00:04:48.133 00:39:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.133 00:39:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.133 00:39:00 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:48.133 00:39:00 -- setup/common.sh@32 -- # continue 00:04:48.133 00:39:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.133 00:39:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.133 00:39:00 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:48.133 00:39:00 -- setup/common.sh@32 -- # continue 00:04:48.133 00:39:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.133 00:39:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.133 00:39:00 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:48.133 00:39:00 -- setup/common.sh@32 -- # continue 00:04:48.133 00:39:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.133 00:39:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.133 00:39:00 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:48.133 00:39:00 -- setup/common.sh@32 -- # continue 00:04:48.133 00:39:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.133 00:39:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.133 00:39:00 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:48.133 00:39:00 -- setup/common.sh@32 -- # continue 00:04:48.133 00:39:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.133 00:39:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.133 00:39:00 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:48.133 00:39:00 -- setup/common.sh@32 -- # continue 00:04:48.133 00:39:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.133 00:39:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.133 00:39:00 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:48.133 00:39:00 -- setup/common.sh@32 -- # continue 00:04:48.133 00:39:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.133 00:39:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.133 00:39:00 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:48.133 00:39:00 -- setup/common.sh@32 -- # continue 00:04:48.133 00:39:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.133 00:39:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.133 00:39:00 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:48.133 00:39:00 -- setup/common.sh@32 -- # continue 00:04:48.133 00:39:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.133 00:39:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.133 00:39:00 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:48.133 00:39:00 -- setup/common.sh@32 -- # continue 00:04:48.133 00:39:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.133 00:39:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.133 00:39:00 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:48.133 00:39:00 -- setup/common.sh@32 -- # continue 00:04:48.133 00:39:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.133 00:39:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.133 00:39:00 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:48.133 00:39:00 -- setup/common.sh@32 -- # continue 00:04:48.133 00:39:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.133 00:39:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.133 00:39:00 -- 
setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:48.133 00:39:00 -- setup/common.sh@32 -- # continue 00:04:48.133 00:39:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.133 00:39:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.133 00:39:00 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:48.133 00:39:00 -- setup/common.sh@32 -- # continue 00:04:48.133 00:39:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.133 00:39:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.133 00:39:00 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:48.133 00:39:00 -- setup/common.sh@32 -- # continue 00:04:48.133 00:39:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.133 00:39:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.133 00:39:00 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:48.133 00:39:00 -- setup/common.sh@32 -- # continue 00:04:48.133 00:39:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.133 00:39:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.133 00:39:00 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:48.133 00:39:00 -- setup/common.sh@32 -- # continue 00:04:48.133 00:39:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.133 00:39:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.133 00:39:00 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:48.133 00:39:00 -- setup/common.sh@32 -- # continue 00:04:48.133 00:39:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.133 00:39:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.133 00:39:00 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:48.133 00:39:00 -- setup/common.sh@32 -- # continue 00:04:48.133 00:39:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.133 00:39:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.133 00:39:00 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:48.133 00:39:00 -- setup/common.sh@32 -- # continue 00:04:48.133 00:39:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.133 00:39:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.133 00:39:00 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:48.133 00:39:00 -- setup/common.sh@32 -- # continue 00:04:48.133 00:39:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.133 00:39:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.133 00:39:00 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:48.133 00:39:00 -- setup/common.sh@32 -- # continue 00:04:48.133 00:39:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.133 00:39:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.133 00:39:00 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:48.133 00:39:00 -- setup/common.sh@32 -- # continue 00:04:48.133 00:39:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.134 00:39:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.134 00:39:00 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:48.134 00:39:00 -- setup/common.sh@32 -- # continue 00:04:48.134 00:39:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.134 00:39:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.134 00:39:00 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:48.134 00:39:00 -- setup/common.sh@32 -- # continue 00:04:48.134 00:39:00 -- 
setup/common.sh@31 -- # IFS=': ' 00:04:48.134 00:39:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.134 00:39:00 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:48.134 00:39:00 -- setup/common.sh@32 -- # continue 00:04:48.134 00:39:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.134 00:39:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.134 00:39:00 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:48.134 00:39:00 -- setup/common.sh@32 -- # continue 00:04:48.134 00:39:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.134 00:39:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.134 00:39:00 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:48.134 00:39:00 -- setup/common.sh@32 -- # continue 00:04:48.134 00:39:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.134 00:39:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.134 00:39:00 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:48.134 00:39:00 -- setup/common.sh@32 -- # continue 00:04:48.134 00:39:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.134 00:39:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.134 00:39:00 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:48.134 00:39:00 -- setup/common.sh@32 -- # continue 00:04:48.134 00:39:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.134 00:39:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.134 00:39:00 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:48.134 00:39:00 -- setup/common.sh@32 -- # continue 00:04:48.134 00:39:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.134 00:39:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.134 00:39:00 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:48.134 00:39:00 -- setup/common.sh@32 -- # continue 00:04:48.134 00:39:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.134 00:39:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.134 00:39:00 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:48.134 00:39:00 -- setup/common.sh@32 -- # continue 00:04:48.134 00:39:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.134 00:39:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.134 00:39:00 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:48.134 00:39:00 -- setup/common.sh@32 -- # continue 00:04:48.134 00:39:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.134 00:39:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.134 00:39:00 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:48.134 00:39:00 -- setup/common.sh@32 -- # continue 00:04:48.134 00:39:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.134 00:39:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.134 00:39:00 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:48.134 00:39:00 -- setup/common.sh@32 -- # continue 00:04:48.134 00:39:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.134 00:39:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.134 00:39:00 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:48.134 00:39:00 -- setup/common.sh@32 -- # continue 00:04:48.134 00:39:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.134 00:39:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.134 00:39:00 -- setup/common.sh@32 -- # [[ 
HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:48.134 00:39:00 -- setup/common.sh@33 -- # echo 0 00:04:48.134 00:39:00 -- setup/common.sh@33 -- # return 0 00:04:48.134 00:39:00 -- setup/hugepages.sh@100 -- # resv=0 00:04:48.134 nr_hugepages=512 00:04:48.134 00:39:00 -- setup/hugepages.sh@102 -- # echo nr_hugepages=512 00:04:48.134 resv_hugepages=0 00:04:48.134 00:39:00 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:48.134 surplus_hugepages=0 00:04:48.134 00:39:00 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:48.134 anon_hugepages=0 00:04:48.134 00:39:00 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:48.134 00:39:00 -- setup/hugepages.sh@107 -- # (( 512 == nr_hugepages + surp + resv )) 00:04:48.134 00:39:00 -- setup/hugepages.sh@109 -- # (( 512 == nr_hugepages )) 00:04:48.134 00:39:00 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:48.134 00:39:00 -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:48.134 00:39:00 -- setup/common.sh@18 -- # local node= 00:04:48.134 00:39:00 -- setup/common.sh@19 -- # local var val 00:04:48.134 00:39:00 -- setup/common.sh@20 -- # local mem_f mem 00:04:48.134 00:39:00 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:48.134 00:39:00 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:48.134 00:39:00 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:48.134 00:39:00 -- setup/common.sh@28 -- # mapfile -t mem 00:04:48.134 00:39:00 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:48.134 00:39:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.134 00:39:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.134 00:39:00 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239120 kB' 'MemFree: 7567668 kB' 'MemAvailable: 10497184 kB' 'Buffers: 2684 kB' 'Cached: 3130152 kB' 'SwapCached: 0 kB' 'Active: 497296 kB' 'Inactive: 2753180 kB' 'Active(anon): 128128 kB' 'Inactive(anon): 0 kB' 'Active(file): 369168 kB' 'Inactive(file): 2753180 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 328 kB' 'Writeback: 0 kB' 'AnonPages: 119248 kB' 'Mapped: 50948 kB' 'Shmem: 10488 kB' 'KReclaimable: 88228 kB' 'Slab: 190596 kB' 'SReclaimable: 88228 kB' 'SUnreclaim: 102368 kB' 'KernelStack: 6800 kB' 'PageTables: 4292 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13983876 kB' 'Committed_AS: 320412 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55496 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 196460 kB' 'DirectMap2M: 6094848 kB' 'DirectMap1G: 8388608 kB' 00:04:48.134 00:39:00 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:48.134 00:39:00 -- setup/common.sh@32 -- # continue 00:04:48.134 00:39:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.134 00:39:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.134 00:39:00 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:48.134 00:39:00 -- setup/common.sh@32 -- # continue 00:04:48.134 00:39:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.134 00:39:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.134 
00:39:00 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:48.134 00:39:00 -- setup/common.sh@32 -- # continue 00:04:48.134 00:39:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.134 00:39:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.134 00:39:00 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:48.134 00:39:00 -- setup/common.sh@32 -- # continue 00:04:48.134 00:39:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.134 00:39:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.134 00:39:00 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:48.134 00:39:00 -- setup/common.sh@32 -- # continue 00:04:48.134 00:39:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.134 00:39:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.134 00:39:00 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:48.134 00:39:00 -- setup/common.sh@32 -- # continue 00:04:48.134 00:39:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.134 00:39:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.134 00:39:00 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:48.134 00:39:00 -- setup/common.sh@32 -- # continue 00:04:48.134 00:39:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.134 00:39:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.134 00:39:00 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:48.134 00:39:00 -- setup/common.sh@32 -- # continue 00:04:48.134 00:39:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.134 00:39:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.134 00:39:00 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:48.134 00:39:00 -- setup/common.sh@32 -- # continue 00:04:48.134 00:39:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.134 00:39:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.134 00:39:00 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:48.134 00:39:00 -- setup/common.sh@32 -- # continue 00:04:48.134 00:39:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.134 00:39:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.134 00:39:00 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:48.134 00:39:00 -- setup/common.sh@32 -- # continue 00:04:48.134 00:39:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.134 00:39:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.134 00:39:00 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:48.134 00:39:00 -- setup/common.sh@32 -- # continue 00:04:48.134 00:39:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.134 00:39:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.134 00:39:00 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:48.135 00:39:00 -- setup/common.sh@32 -- # continue 00:04:48.135 00:39:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.135 00:39:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.135 00:39:00 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:48.135 00:39:00 -- setup/common.sh@32 -- # continue 00:04:48.135 00:39:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.135 00:39:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.135 00:39:00 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:48.135 00:39:00 -- setup/common.sh@32 -- # continue 00:04:48.135 
00:39:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.135 00:39:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.135 00:39:00 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:48.135 00:39:00 -- setup/common.sh@32 -- # continue 00:04:48.135 00:39:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.135 00:39:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.135 00:39:00 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:48.135 00:39:00 -- setup/common.sh@32 -- # continue 00:04:48.135 00:39:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.135 00:39:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.135 00:39:00 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:48.135 00:39:00 -- setup/common.sh@32 -- # continue 00:04:48.135 00:39:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.135 00:39:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.135 00:39:00 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:48.135 00:39:00 -- setup/common.sh@32 -- # continue 00:04:48.135 00:39:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.135 00:39:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.135 00:39:00 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:48.135 00:39:00 -- setup/common.sh@32 -- # continue 00:04:48.135 00:39:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.135 00:39:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.135 00:39:00 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:48.135 00:39:00 -- setup/common.sh@32 -- # continue 00:04:48.135 00:39:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.135 00:39:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.135 00:39:00 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:48.135 00:39:00 -- setup/common.sh@32 -- # continue 00:04:48.135 00:39:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.135 00:39:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.135 00:39:00 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:48.135 00:39:00 -- setup/common.sh@32 -- # continue 00:04:48.135 00:39:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.135 00:39:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.135 00:39:00 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:48.135 00:39:00 -- setup/common.sh@32 -- # continue 00:04:48.135 00:39:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.135 00:39:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.135 00:39:00 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:48.135 00:39:00 -- setup/common.sh@32 -- # continue 00:04:48.135 00:39:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.135 00:39:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.135 00:39:00 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:48.135 00:39:00 -- setup/common.sh@32 -- # continue 00:04:48.135 00:39:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.135 00:39:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.135 00:39:00 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:48.135 00:39:00 -- setup/common.sh@32 -- # continue 00:04:48.135 00:39:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.135 00:39:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.135 00:39:00 -- setup/common.sh@32 -- # [[ KernelStack == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:48.135 00:39:00 -- setup/common.sh@32 -- # continue 00:04:48.135 00:39:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.135 00:39:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.135 00:39:00 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:48.135 00:39:00 -- setup/common.sh@32 -- # continue 00:04:48.135 00:39:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.135 00:39:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.135 00:39:00 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:48.135 00:39:00 -- setup/common.sh@32 -- # continue 00:04:48.135 00:39:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.135 00:39:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.135 00:39:00 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:48.135 00:39:00 -- setup/common.sh@32 -- # continue 00:04:48.135 00:39:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.135 00:39:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.135 00:39:00 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:48.135 00:39:00 -- setup/common.sh@32 -- # continue 00:04:48.135 00:39:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.135 00:39:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.135 00:39:00 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:48.135 00:39:00 -- setup/common.sh@32 -- # continue 00:04:48.135 00:39:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.135 00:39:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.135 00:39:00 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:48.135 00:39:00 -- setup/common.sh@32 -- # continue 00:04:48.135 00:39:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.135 00:39:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.135 00:39:00 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:48.135 00:39:00 -- setup/common.sh@32 -- # continue 00:04:48.135 00:39:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.135 00:39:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.135 00:39:00 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:48.135 00:39:00 -- setup/common.sh@32 -- # continue 00:04:48.135 00:39:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.135 00:39:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.135 00:39:00 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:48.135 00:39:00 -- setup/common.sh@32 -- # continue 00:04:48.135 00:39:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.135 00:39:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.135 00:39:00 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:48.135 00:39:00 -- setup/common.sh@32 -- # continue 00:04:48.135 00:39:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.135 00:39:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.135 00:39:00 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:48.135 00:39:00 -- setup/common.sh@32 -- # continue 00:04:48.135 00:39:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.135 00:39:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.135 00:39:00 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:48.135 00:39:00 -- setup/common.sh@32 -- # continue 00:04:48.135 00:39:00 -- setup/common.sh@31 -- # 
IFS=': ' 00:04:48.135 00:39:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.135 00:39:00 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:48.135 00:39:00 -- setup/common.sh@32 -- # continue 00:04:48.135 00:39:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.135 00:39:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.135 00:39:00 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:48.135 00:39:00 -- setup/common.sh@32 -- # continue 00:04:48.135 00:39:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.135 00:39:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.135 00:39:00 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:48.135 00:39:00 -- setup/common.sh@32 -- # continue 00:04:48.135 00:39:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.135 00:39:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.135 00:39:00 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:48.135 00:39:00 -- setup/common.sh@32 -- # continue 00:04:48.135 00:39:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.135 00:39:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.135 00:39:00 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:48.135 00:39:00 -- setup/common.sh@32 -- # continue 00:04:48.135 00:39:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.135 00:39:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.135 00:39:00 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:48.135 00:39:00 -- setup/common.sh@32 -- # continue 00:04:48.135 00:39:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.135 00:39:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.135 00:39:00 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:48.135 00:39:00 -- setup/common.sh@32 -- # continue 00:04:48.135 00:39:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.135 00:39:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.135 00:39:00 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:48.135 00:39:00 -- setup/common.sh@32 -- # continue 00:04:48.135 00:39:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.135 00:39:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.135 00:39:00 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:48.135 00:39:00 -- setup/common.sh@33 -- # echo 512 00:04:48.135 00:39:00 -- setup/common.sh@33 -- # return 0 00:04:48.135 00:39:00 -- setup/hugepages.sh@110 -- # (( 512 == nr_hugepages + surp + resv )) 00:04:48.135 00:39:00 -- setup/hugepages.sh@112 -- # get_nodes 00:04:48.135 00:39:00 -- setup/hugepages.sh@27 -- # local node 00:04:48.135 00:39:00 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:48.135 00:39:00 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:04:48.135 00:39:00 -- setup/hugepages.sh@32 -- # no_nodes=1 00:04:48.135 00:39:00 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:48.135 00:39:00 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:48.135 00:39:00 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:48.135 00:39:00 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:48.135 00:39:00 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:48.135 00:39:00 -- setup/common.sh@18 -- # local node=0 00:04:48.135 00:39:00 -- setup/common.sh@19 -- # local 
var val 00:04:48.135 00:39:00 -- setup/common.sh@20 -- # local mem_f mem 00:04:48.135 00:39:00 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:48.135 00:39:00 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:48.136 00:39:00 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:48.136 00:39:00 -- setup/common.sh@28 -- # mapfile -t mem 00:04:48.136 00:39:00 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:48.136 00:39:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.136 00:39:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.136 00:39:00 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239120 kB' 'MemFree: 7567668 kB' 'MemUsed: 4671452 kB' 'SwapCached: 0 kB' 'Active: 497396 kB' 'Inactive: 2753180 kB' 'Active(anon): 128228 kB' 'Inactive(anon): 0 kB' 'Active(file): 369168 kB' 'Inactive(file): 2753180 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 328 kB' 'Writeback: 0 kB' 'FilePages: 3132836 kB' 'Mapped: 50948 kB' 'AnonPages: 119348 kB' 'Shmem: 10488 kB' 'KernelStack: 6820 kB' 'PageTables: 4148 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 88228 kB' 'Slab: 190596 kB' 'SReclaimable: 88228 kB' 'SUnreclaim: 102368 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:04:48.136 00:39:00 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.136 00:39:00 -- setup/common.sh@32 -- # continue 00:04:48.136 00:39:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.136 00:39:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.136 00:39:00 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.136 00:39:00 -- setup/common.sh@32 -- # continue 00:04:48.136 00:39:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.136 00:39:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.136 00:39:00 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.136 00:39:00 -- setup/common.sh@32 -- # continue 00:04:48.136 00:39:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.136 00:39:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.136 00:39:00 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.136 00:39:00 -- setup/common.sh@32 -- # continue 00:04:48.136 00:39:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.136 00:39:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.136 00:39:00 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.136 00:39:00 -- setup/common.sh@32 -- # continue 00:04:48.136 00:39:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.136 00:39:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.136 00:39:00 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.136 00:39:00 -- setup/common.sh@32 -- # continue 00:04:48.136 00:39:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.136 00:39:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.136 00:39:00 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.136 00:39:00 -- setup/common.sh@32 -- # continue 00:04:48.136 00:39:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.136 00:39:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.136 00:39:00 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.136 00:39:00 -- 
setup/common.sh@32 -- # continue 00:04:48.136 00:39:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.136 00:39:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.136 00:39:00 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.136 00:39:00 -- setup/common.sh@32 -- # continue 00:04:48.136 00:39:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.136 00:39:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.136 00:39:00 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.136 00:39:00 -- setup/common.sh@32 -- # continue 00:04:48.136 00:39:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.136 00:39:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.136 00:39:00 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.136 00:39:00 -- setup/common.sh@32 -- # continue 00:04:48.136 00:39:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.136 00:39:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.136 00:39:00 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.136 00:39:00 -- setup/common.sh@32 -- # continue 00:04:48.136 00:39:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.136 00:39:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.136 00:39:00 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.136 00:39:00 -- setup/common.sh@32 -- # continue 00:04:48.136 00:39:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.136 00:39:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.136 00:39:00 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.136 00:39:00 -- setup/common.sh@32 -- # continue 00:04:48.136 00:39:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.136 00:39:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.136 00:39:00 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.136 00:39:00 -- setup/common.sh@32 -- # continue 00:04:48.136 00:39:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.136 00:39:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.136 00:39:00 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.136 00:39:00 -- setup/common.sh@32 -- # continue 00:04:48.136 00:39:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.136 00:39:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.136 00:39:00 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.136 00:39:00 -- setup/common.sh@32 -- # continue 00:04:48.136 00:39:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.136 00:39:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.136 00:39:00 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.136 00:39:00 -- setup/common.sh@32 -- # continue 00:04:48.136 00:39:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.136 00:39:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.136 00:39:00 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.136 00:39:00 -- setup/common.sh@32 -- # continue 00:04:48.136 00:39:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.136 00:39:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.136 00:39:00 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.136 00:39:00 -- setup/common.sh@32 -- # continue 00:04:48.136 00:39:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.136 00:39:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.136 00:39:00 -- 
setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.136 00:39:00 -- setup/common.sh@32 -- # continue 00:04:48.136 00:39:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.136 00:39:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.136 00:39:00 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.136 00:39:00 -- setup/common.sh@32 -- # continue 00:04:48.136 00:39:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.136 00:39:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.136 00:39:00 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.136 00:39:00 -- setup/common.sh@32 -- # continue 00:04:48.136 00:39:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.136 00:39:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.136 00:39:00 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.136 00:39:00 -- setup/common.sh@32 -- # continue 00:04:48.136 00:39:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.136 00:39:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.136 00:39:00 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.136 00:39:00 -- setup/common.sh@32 -- # continue 00:04:48.136 00:39:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.136 00:39:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.136 00:39:00 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.136 00:39:00 -- setup/common.sh@32 -- # continue 00:04:48.136 00:39:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.136 00:39:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.136 00:39:00 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.136 00:39:00 -- setup/common.sh@32 -- # continue 00:04:48.136 00:39:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.136 00:39:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.136 00:39:00 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.136 00:39:00 -- setup/common.sh@32 -- # continue 00:04:48.136 00:39:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.136 00:39:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.136 00:39:00 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.136 00:39:00 -- setup/common.sh@32 -- # continue 00:04:48.136 00:39:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.136 00:39:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.136 00:39:00 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.136 00:39:00 -- setup/common.sh@32 -- # continue 00:04:48.136 00:39:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.136 00:39:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.136 00:39:00 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.136 00:39:00 -- setup/common.sh@32 -- # continue 00:04:48.136 00:39:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.136 00:39:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.136 00:39:00 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.136 00:39:00 -- setup/common.sh@32 -- # continue 00:04:48.136 00:39:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.136 00:39:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.136 00:39:00 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.136 00:39:00 -- setup/common.sh@32 -- # continue 00:04:48.136 00:39:00 -- 
setup/common.sh@31 -- # IFS=': ' 00:04:48.136 00:39:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.136 00:39:00 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.136 00:39:00 -- setup/common.sh@32 -- # continue 00:04:48.136 00:39:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.136 00:39:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.136 00:39:00 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.136 00:39:00 -- setup/common.sh@32 -- # continue 00:04:48.136 00:39:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.136 00:39:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.136 00:39:00 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.136 00:39:00 -- setup/common.sh@32 -- # continue 00:04:48.136 00:39:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.136 00:39:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.136 00:39:00 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.137 00:39:00 -- setup/common.sh@33 -- # echo 0 00:04:48.137 00:39:00 -- setup/common.sh@33 -- # return 0 00:04:48.137 00:39:00 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:48.137 00:39:00 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:48.137 00:39:00 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:48.137 00:39:00 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:48.137 node0=512 expecting 512 00:04:48.137 00:39:00 -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:04:48.137 00:39:00 -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]] 00:04:48.137 00:04:48.137 real 0m0.554s 00:04:48.137 user 0m0.274s 00:04:48.137 sys 0m0.316s 00:04:48.137 00:39:00 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:04:48.137 00:39:00 -- common/autotest_common.sh@10 -- # set +x 00:04:48.137 ************************************ 00:04:48.137 END TEST per_node_1G_alloc 00:04:48.137 ************************************ 00:04:48.137 00:39:00 -- setup/hugepages.sh@212 -- # run_test even_2G_alloc even_2G_alloc 00:04:48.137 00:39:00 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:48.137 00:39:00 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:48.137 00:39:00 -- common/autotest_common.sh@10 -- # set +x 00:04:48.137 ************************************ 00:04:48.137 START TEST even_2G_alloc 00:04:48.137 ************************************ 00:04:48.137 00:39:00 -- common/autotest_common.sh@1114 -- # even_2G_alloc 00:04:48.137 00:39:00 -- setup/hugepages.sh@152 -- # get_test_nr_hugepages 2097152 00:04:48.137 00:39:00 -- setup/hugepages.sh@49 -- # local size=2097152 00:04:48.137 00:39:00 -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:04:48.137 00:39:00 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:04:48.137 00:39:00 -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:04:48.137 00:39:00 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:04:48.137 00:39:00 -- setup/hugepages.sh@62 -- # user_nodes=() 00:04:48.137 00:39:00 -- setup/hugepages.sh@62 -- # local user_nodes 00:04:48.137 00:39:00 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:04:48.137 00:39:00 -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:04:48.137 00:39:00 -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:48.137 00:39:00 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:48.137 00:39:00 -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:04:48.137 00:39:00 -- 
setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:04:48.137 00:39:00 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:48.137 00:39:00 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=1024 00:04:48.137 00:39:00 -- setup/hugepages.sh@83 -- # : 0 00:04:48.137 00:39:00 -- setup/hugepages.sh@84 -- # : 0 00:04:48.137 00:39:00 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:48.137 00:39:00 -- setup/hugepages.sh@153 -- # NRHUGE=1024 00:04:48.137 00:39:00 -- setup/hugepages.sh@153 -- # HUGE_EVEN_ALLOC=yes 00:04:48.137 00:39:00 -- setup/hugepages.sh@153 -- # setup output 00:04:48.137 00:39:00 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:48.137 00:39:00 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:04:48.396 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:48.396 0000:00:06.0 (1b36 0010): Already using the uio_pci_generic driver 00:04:48.396 0000:00:07.0 (1b36 0010): Already using the uio_pci_generic driver 00:04:48.661 00:39:00 -- setup/hugepages.sh@154 -- # verify_nr_hugepages 00:04:48.661 00:39:00 -- setup/hugepages.sh@89 -- # local node 00:04:48.661 00:39:00 -- setup/hugepages.sh@90 -- # local sorted_t 00:04:48.661 00:39:00 -- setup/hugepages.sh@91 -- # local sorted_s 00:04:48.661 00:39:00 -- setup/hugepages.sh@92 -- # local surp 00:04:48.661 00:39:00 -- setup/hugepages.sh@93 -- # local resv 00:04:48.661 00:39:00 -- setup/hugepages.sh@94 -- # local anon 00:04:48.661 00:39:00 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:48.661 00:39:00 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:48.661 00:39:00 -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:48.661 00:39:00 -- setup/common.sh@18 -- # local node= 00:04:48.661 00:39:00 -- setup/common.sh@19 -- # local var val 00:04:48.661 00:39:00 -- setup/common.sh@20 -- # local mem_f mem 00:04:48.661 00:39:00 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:48.661 00:39:00 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:48.661 00:39:00 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:48.661 00:39:00 -- setup/common.sh@28 -- # mapfile -t mem 00:04:48.661 00:39:00 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:48.661 00:39:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.661 00:39:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.661 00:39:00 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239120 kB' 'MemFree: 6522628 kB' 'MemAvailable: 9452144 kB' 'Buffers: 2684 kB' 'Cached: 3130152 kB' 'SwapCached: 0 kB' 'Active: 497828 kB' 'Inactive: 2753180 kB' 'Active(anon): 128660 kB' 'Inactive(anon): 0 kB' 'Active(file): 369168 kB' 'Inactive(file): 2753180 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 328 kB' 'Writeback: 0 kB' 'AnonPages: 119672 kB' 'Mapped: 51060 kB' 'Shmem: 10488 kB' 'KReclaimable: 88228 kB' 'Slab: 190620 kB' 'SReclaimable: 88228 kB' 'SUnreclaim: 102392 kB' 'KernelStack: 6872 kB' 'PageTables: 4384 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459588 kB' 'Committed_AS: 320412 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55528 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 
1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 196460 kB' 'DirectMap2M: 6094848 kB' 'DirectMap1G: 8388608 kB' 00:04:48.661 00:39:00 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:48.661 00:39:00 -- setup/common.sh@32 -- # continue 00:04:48.661 00:39:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.661 00:39:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.661 00:39:00 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:48.661 00:39:00 -- setup/common.sh@32 -- # continue 00:04:48.661 00:39:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.661 00:39:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.661 00:39:00 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:48.661 00:39:00 -- setup/common.sh@32 -- # continue 00:04:48.661 00:39:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.661 00:39:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.661 00:39:00 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:48.661 00:39:00 -- setup/common.sh@32 -- # continue 00:04:48.661 00:39:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.661 00:39:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.661 00:39:00 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:48.661 00:39:00 -- setup/common.sh@32 -- # continue 00:04:48.661 00:39:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.661 00:39:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.661 00:39:00 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:48.661 00:39:00 -- setup/common.sh@32 -- # continue 00:04:48.661 00:39:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.661 00:39:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.661 00:39:00 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:48.661 00:39:00 -- setup/common.sh@32 -- # continue 00:04:48.661 00:39:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.661 00:39:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.661 00:39:00 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:48.661 00:39:00 -- setup/common.sh@32 -- # continue 00:04:48.661 00:39:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.661 00:39:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.661 00:39:00 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:48.661 00:39:00 -- setup/common.sh@32 -- # continue 00:04:48.661 00:39:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.661 00:39:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.661 00:39:00 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:48.661 00:39:00 -- setup/common.sh@32 -- # continue 00:04:48.661 00:39:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.661 00:39:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.661 00:39:00 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:48.661 00:39:00 -- setup/common.sh@32 -- # continue 00:04:48.661 00:39:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.661 00:39:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.661 00:39:00 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:48.661 00:39:00 -- setup/common.sh@32 -- # continue 00:04:48.661 00:39:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.661 00:39:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.661 
00:39:00 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:48.661 00:39:00 -- setup/common.sh@32 -- # continue 00:04:48.661 00:39:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.661 00:39:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.661 00:39:00 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:48.661 00:39:00 -- setup/common.sh@32 -- # continue 00:04:48.661 00:39:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.661 00:39:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.661 00:39:00 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:48.661 00:39:00 -- setup/common.sh@32 -- # continue 00:04:48.661 00:39:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.661 00:39:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.661 00:39:00 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:48.661 00:39:00 -- setup/common.sh@32 -- # continue 00:04:48.661 00:39:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.661 00:39:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.661 00:39:00 -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:48.662 00:39:00 -- setup/common.sh@32 -- # continue 00:04:48.662 00:39:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.662 00:39:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.662 00:39:00 -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:48.662 00:39:00 -- setup/common.sh@32 -- # continue 00:04:48.662 00:39:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.662 00:39:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.662 00:39:00 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:48.662 00:39:00 -- setup/common.sh@32 -- # continue 00:04:48.662 00:39:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.662 00:39:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.662 00:39:00 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:48.662 00:39:00 -- setup/common.sh@32 -- # continue 00:04:48.662 00:39:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.662 00:39:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.662 00:39:00 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:48.662 00:39:00 -- setup/common.sh@32 -- # continue 00:04:48.662 00:39:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.662 00:39:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.662 00:39:00 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:48.662 00:39:00 -- setup/common.sh@32 -- # continue 00:04:48.662 00:39:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.662 00:39:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.662 00:39:00 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:48.662 00:39:00 -- setup/common.sh@32 -- # continue 00:04:48.662 00:39:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.662 00:39:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.662 00:39:00 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:48.662 00:39:00 -- setup/common.sh@32 -- # continue 00:04:48.662 00:39:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.662 00:39:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.662 00:39:00 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:48.662 00:39:00 -- setup/common.sh@32 -- # continue 00:04:48.662 00:39:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.662 00:39:00 -- 
setup/common.sh@31 -- # read -r var val _ 00:04:48.662 00:39:00 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:48.662 00:39:00 -- setup/common.sh@32 -- # continue 00:04:48.662 00:39:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.662 00:39:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.662 00:39:00 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:48.662 00:39:00 -- setup/common.sh@32 -- # continue 00:04:48.662 00:39:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.662 00:39:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.662 00:39:00 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:48.662 00:39:00 -- setup/common.sh@32 -- # continue 00:04:48.662 00:39:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.662 00:39:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.662 00:39:00 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:48.662 00:39:00 -- setup/common.sh@32 -- # continue 00:04:48.662 00:39:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.662 00:39:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.662 00:39:00 -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:48.662 00:39:00 -- setup/common.sh@32 -- # continue 00:04:48.662 00:39:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.662 00:39:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.662 00:39:00 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:48.662 00:39:00 -- setup/common.sh@32 -- # continue 00:04:48.662 00:39:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.662 00:39:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.662 00:39:00 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:48.662 00:39:00 -- setup/common.sh@32 -- # continue 00:04:48.662 00:39:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.662 00:39:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.662 00:39:00 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:48.662 00:39:00 -- setup/common.sh@32 -- # continue 00:04:48.662 00:39:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.662 00:39:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.662 00:39:00 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:48.662 00:39:00 -- setup/common.sh@32 -- # continue 00:04:48.662 00:39:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.662 00:39:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.662 00:39:00 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:48.662 00:39:00 -- setup/common.sh@32 -- # continue 00:04:48.662 00:39:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.662 00:39:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.662 00:39:00 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:48.662 00:39:00 -- setup/common.sh@32 -- # continue 00:04:48.662 00:39:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.662 00:39:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.662 00:39:00 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:48.662 00:39:00 -- setup/common.sh@32 -- # continue 00:04:48.662 00:39:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.662 00:39:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.662 00:39:00 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:48.662 00:39:00 -- setup/common.sh@32 -- # 
continue 00:04:48.662 00:39:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.662 00:39:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.662 00:39:00 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:48.662 00:39:00 -- setup/common.sh@32 -- # continue 00:04:48.662 00:39:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.662 00:39:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.662 00:39:00 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:48.662 00:39:00 -- setup/common.sh@32 -- # continue 00:04:48.662 00:39:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.662 00:39:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.662 00:39:00 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:48.662 00:39:00 -- setup/common.sh@33 -- # echo 0 00:04:48.662 00:39:00 -- setup/common.sh@33 -- # return 0 00:04:48.662 00:39:00 -- setup/hugepages.sh@97 -- # anon=0 00:04:48.662 00:39:00 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:48.662 00:39:00 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:48.662 00:39:00 -- setup/common.sh@18 -- # local node= 00:04:48.662 00:39:00 -- setup/common.sh@19 -- # local var val 00:04:48.662 00:39:00 -- setup/common.sh@20 -- # local mem_f mem 00:04:48.662 00:39:00 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:48.662 00:39:00 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:48.663 00:39:00 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:48.663 00:39:00 -- setup/common.sh@28 -- # mapfile -t mem 00:04:48.663 00:39:00 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:48.663 00:39:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.663 00:39:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.663 00:39:00 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239120 kB' 'MemFree: 6522376 kB' 'MemAvailable: 9451892 kB' 'Buffers: 2684 kB' 'Cached: 3130152 kB' 'SwapCached: 0 kB' 'Active: 497060 kB' 'Inactive: 2753180 kB' 'Active(anon): 127892 kB' 'Inactive(anon): 0 kB' 'Active(file): 369168 kB' 'Inactive(file): 2753180 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 328 kB' 'Writeback: 0 kB' 'AnonPages: 119060 kB' 'Mapped: 50948 kB' 'Shmem: 10488 kB' 'KReclaimable: 88228 kB' 'Slab: 190632 kB' 'SReclaimable: 88228 kB' 'SUnreclaim: 102404 kB' 'KernelStack: 6816 kB' 'PageTables: 4336 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459588 kB' 'Committed_AS: 320412 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55512 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 196460 kB' 'DirectMap2M: 6094848 kB' 'DirectMap1G: 8388608 kB' 00:04:48.663 00:39:00 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.663 00:39:00 -- setup/common.sh@32 -- # continue 00:04:48.663 00:39:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.663 00:39:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.663 00:39:00 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.663 00:39:00 -- setup/common.sh@32 -- # continue 
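The surrounding xtrace entries record setup/common.sh's get_meminfo helper walking /proc/meminfo (or the per-node /sys/devices/system/node/node<N>/meminfo file) with IFS=': ' and read -r var val _, skipping every key that is not the one requested and echoing the matching value (512, 0, 1024 in the returns above). Below is a minimal standalone bash sketch of that pattern, reconstructed only from the commands visible in this trace; the function name, prefix-stripping details, and usage line are assumptions for illustration, not the actual setup/common.sh source.

  # Minimal sketch of the lookup pattern recorded in the xtrace around this point;
  # the real helper is setup/common.sh's get_meminfo, whose source is not in this log.
  get_meminfo_sketch() {
      local get=$1 node=${2:-}    # key to look up, optional NUMA node number
      local mem_f=/proc/meminfo
      local line var val _
      # the trace switches to the per-node file when a node is given and it exists
      if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
          mem_f=/sys/devices/system/node/node$node/meminfo
      fi
      while IFS= read -r line; do
          # per-node meminfo lines carry a "Node <n> " prefix; drop it
          if [[ $line == Node\ * ]]; then
              line=${line#Node }
              line=${line#* }
          fi
          # same split the trace shows: IFS=': ' read -r var val _
          IFS=': ' read -r var val _ <<< "$line"
          [[ $var == "$get" ]] || continue
          echo "$val"             # e.g. 512 for HugePages_Total, 0 for HugePages_Surp
          return 0
      done < "$mem_f"
      return 1
  }
  # usage (hypothetical): get_meminfo_sketch HugePages_Free 0

Called that way, the sketch would print the node-0 free hugepage count the same way the trace's "echo 512" / "return 0" steps do; per-key iteration over the whole file is what produces the long runs of "continue" entries in this log.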
00:04:48.663 00:39:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.663 00:39:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.663 00:39:00 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.663 00:39:00 -- setup/common.sh@32 -- # continue 00:04:48.663 00:39:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.663 00:39:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.663 00:39:00 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.663 00:39:00 -- setup/common.sh@32 -- # continue 00:04:48.663 00:39:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.663 00:39:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.663 00:39:00 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.663 00:39:00 -- setup/common.sh@32 -- # continue 00:04:48.663 00:39:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.663 00:39:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.663 00:39:00 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.663 00:39:00 -- setup/common.sh@32 -- # continue 00:04:48.663 00:39:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.663 00:39:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.663 00:39:00 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.663 00:39:00 -- setup/common.sh@32 -- # continue 00:04:48.663 00:39:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.663 00:39:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.663 00:39:00 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.663 00:39:00 -- setup/common.sh@32 -- # continue 00:04:48.663 00:39:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.663 00:39:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.663 00:39:00 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.663 00:39:00 -- setup/common.sh@32 -- # continue 00:04:48.663 00:39:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.663 00:39:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.663 00:39:00 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.663 00:39:00 -- setup/common.sh@32 -- # continue 00:04:48.663 00:39:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.663 00:39:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.663 00:39:00 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.663 00:39:00 -- setup/common.sh@32 -- # continue 00:04:48.663 00:39:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.663 00:39:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.663 00:39:00 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.663 00:39:00 -- setup/common.sh@32 -- # continue 00:04:48.663 00:39:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.663 00:39:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.663 00:39:00 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.663 00:39:00 -- setup/common.sh@32 -- # continue 00:04:48.663 00:39:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.663 00:39:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.663 00:39:00 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.663 00:39:00 -- setup/common.sh@32 -- # continue 00:04:48.663 00:39:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.663 00:39:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.663 00:39:00 -- setup/common.sh@32 -- # [[ 
SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.663 00:39:00 -- setup/common.sh@32 -- # continue 00:04:48.663 00:39:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.663 00:39:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.663 00:39:00 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.663 00:39:00 -- setup/common.sh@32 -- # continue 00:04:48.663 00:39:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.663 00:39:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.663 00:39:00 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.663 00:39:00 -- setup/common.sh@32 -- # continue 00:04:48.663 00:39:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.663 00:39:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.663 00:39:00 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.663 00:39:00 -- setup/common.sh@32 -- # continue 00:04:48.663 00:39:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.663 00:39:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.663 00:39:00 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.663 00:39:00 -- setup/common.sh@32 -- # continue 00:04:48.663 00:39:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.663 00:39:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.663 00:39:00 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.663 00:39:00 -- setup/common.sh@32 -- # continue 00:04:48.663 00:39:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.663 00:39:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.663 00:39:00 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.663 00:39:00 -- setup/common.sh@32 -- # continue 00:04:48.663 00:39:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.663 00:39:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.663 00:39:00 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.663 00:39:00 -- setup/common.sh@32 -- # continue 00:04:48.663 00:39:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.664 00:39:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.664 00:39:00 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.664 00:39:00 -- setup/common.sh@32 -- # continue 00:04:48.664 00:39:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.664 00:39:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.664 00:39:00 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.664 00:39:00 -- setup/common.sh@32 -- # continue 00:04:48.664 00:39:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.664 00:39:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.664 00:39:00 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.664 00:39:00 -- setup/common.sh@32 -- # continue 00:04:48.664 00:39:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.664 00:39:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.664 00:39:00 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.664 00:39:00 -- setup/common.sh@32 -- # continue 00:04:48.664 00:39:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.664 00:39:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.664 00:39:00 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.664 00:39:00 -- setup/common.sh@32 -- # continue 00:04:48.664 00:39:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.664 00:39:00 -- setup/common.sh@31 -- # 
read -r var val _ 00:04:48.664 00:39:00 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.664 00:39:00 -- setup/common.sh@32 -- # continue 00:04:48.664 00:39:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.664 00:39:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.664 00:39:00 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.664 00:39:00 -- setup/common.sh@32 -- # continue 00:04:48.664 00:39:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.664 00:39:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.664 00:39:00 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.664 00:39:00 -- setup/common.sh@32 -- # continue 00:04:48.664 00:39:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.664 00:39:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.664 00:39:00 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.664 00:39:00 -- setup/common.sh@32 -- # continue 00:04:48.664 00:39:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.664 00:39:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.664 00:39:00 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.664 00:39:00 -- setup/common.sh@32 -- # continue 00:04:48.664 00:39:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.664 00:39:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.664 00:39:00 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.664 00:39:00 -- setup/common.sh@32 -- # continue 00:04:48.664 00:39:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.664 00:39:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.664 00:39:00 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.664 00:39:00 -- setup/common.sh@32 -- # continue 00:04:48.664 00:39:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.664 00:39:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.664 00:39:00 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.664 00:39:00 -- setup/common.sh@32 -- # continue 00:04:48.664 00:39:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.664 00:39:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.664 00:39:00 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.664 00:39:00 -- setup/common.sh@32 -- # continue 00:04:48.664 00:39:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.664 00:39:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.664 00:39:00 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.664 00:39:00 -- setup/common.sh@32 -- # continue 00:04:48.664 00:39:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.664 00:39:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.664 00:39:00 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.664 00:39:00 -- setup/common.sh@32 -- # continue 00:04:48.664 00:39:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.664 00:39:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.664 00:39:00 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.664 00:39:00 -- setup/common.sh@32 -- # continue 00:04:48.664 00:39:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.664 00:39:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.664 00:39:00 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.664 00:39:00 -- setup/common.sh@32 -- # 
continue 00:04:48.664 00:39:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.664 00:39:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.664 00:39:00 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.664 00:39:00 -- setup/common.sh@32 -- # continue 00:04:48.664 00:39:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.664 00:39:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.664 00:39:00 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.664 00:39:00 -- setup/common.sh@32 -- # continue 00:04:48.664 00:39:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.664 00:39:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.664 00:39:00 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.664 00:39:00 -- setup/common.sh@32 -- # continue 00:04:48.664 00:39:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.664 00:39:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.664 00:39:00 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.664 00:39:00 -- setup/common.sh@32 -- # continue 00:04:48.664 00:39:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.664 00:39:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.664 00:39:00 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.664 00:39:00 -- setup/common.sh@32 -- # continue 00:04:48.664 00:39:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.664 00:39:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.664 00:39:00 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.664 00:39:00 -- setup/common.sh@32 -- # continue 00:04:48.664 00:39:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.664 00:39:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.664 00:39:00 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.664 00:39:00 -- setup/common.sh@32 -- # continue 00:04:48.664 00:39:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.664 00:39:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.664 00:39:00 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.664 00:39:00 -- setup/common.sh@32 -- # continue 00:04:48.664 00:39:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.664 00:39:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.665 00:39:00 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.665 00:39:00 -- setup/common.sh@32 -- # continue 00:04:48.665 00:39:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.665 00:39:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.665 00:39:00 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.665 00:39:00 -- setup/common.sh@32 -- # continue 00:04:48.665 00:39:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.665 00:39:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.665 00:39:00 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.665 00:39:00 -- setup/common.sh@32 -- # continue 00:04:48.665 00:39:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.665 00:39:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.665 00:39:00 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.665 00:39:00 -- setup/common.sh@33 -- # echo 0 00:04:48.665 00:39:00 -- setup/common.sh@33 -- # return 0 00:04:48.665 00:39:00 -- setup/hugepages.sh@99 -- # surp=0 00:04:48.665 00:39:00 -- 
setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:48.665 00:39:00 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:48.665 00:39:00 -- setup/common.sh@18 -- # local node= 00:04:48.665 00:39:00 -- setup/common.sh@19 -- # local var val 00:04:48.665 00:39:00 -- setup/common.sh@20 -- # local mem_f mem 00:04:48.665 00:39:00 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:48.665 00:39:00 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:48.665 00:39:00 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:48.665 00:39:00 -- setup/common.sh@28 -- # mapfile -t mem 00:04:48.665 00:39:00 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:48.665 00:39:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.665 00:39:01 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239120 kB' 'MemFree: 6523024 kB' 'MemAvailable: 9452540 kB' 'Buffers: 2684 kB' 'Cached: 3130152 kB' 'SwapCached: 0 kB' 'Active: 497312 kB' 'Inactive: 2753180 kB' 'Active(anon): 128144 kB' 'Inactive(anon): 0 kB' 'Active(file): 369168 kB' 'Inactive(file): 2753180 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 328 kB' 'Writeback: 0 kB' 'AnonPages: 119308 kB' 'Mapped: 50948 kB' 'Shmem: 10488 kB' 'KReclaimable: 88228 kB' 'Slab: 190632 kB' 'SReclaimable: 88228 kB' 'SUnreclaim: 102404 kB' 'KernelStack: 6816 kB' 'PageTables: 4336 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459588 kB' 'Committed_AS: 320412 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55512 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 196460 kB' 'DirectMap2M: 6094848 kB' 'DirectMap1G: 8388608 kB' 00:04:48.665 00:39:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.665 00:39:01 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:48.665 00:39:01 -- setup/common.sh@32 -- # continue 00:04:48.665 00:39:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.665 00:39:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.665 00:39:01 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:48.665 00:39:01 -- setup/common.sh@32 -- # continue 00:04:48.665 00:39:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.665 00:39:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.665 00:39:01 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:48.665 00:39:01 -- setup/common.sh@32 -- # continue 00:04:48.665 00:39:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.665 00:39:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.665 00:39:01 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:48.665 00:39:01 -- setup/common.sh@32 -- # continue 00:04:48.665 00:39:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.665 00:39:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.665 00:39:01 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:48.665 00:39:01 -- setup/common.sh@32 -- # continue 00:04:48.665 00:39:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.665 00:39:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.665 00:39:01 -- 
setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:48.665 00:39:01 -- setup/common.sh@32 -- # continue 00:04:48.665 00:39:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.665 00:39:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.665 00:39:01 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:48.665 00:39:01 -- setup/common.sh@32 -- # continue 00:04:48.665 00:39:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.665 00:39:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.665 00:39:01 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:48.665 00:39:01 -- setup/common.sh@32 -- # continue 00:04:48.665 00:39:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.665 00:39:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.665 00:39:01 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:48.665 00:39:01 -- setup/common.sh@32 -- # continue 00:04:48.665 00:39:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.665 00:39:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.665 00:39:01 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:48.665 00:39:01 -- setup/common.sh@32 -- # continue 00:04:48.665 00:39:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.665 00:39:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.665 00:39:01 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:48.665 00:39:01 -- setup/common.sh@32 -- # continue 00:04:48.665 00:39:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.665 00:39:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.665 00:39:01 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:48.665 00:39:01 -- setup/common.sh@32 -- # continue 00:04:48.665 00:39:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.665 00:39:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.665 00:39:01 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:48.665 00:39:01 -- setup/common.sh@32 -- # continue 00:04:48.665 00:39:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.665 00:39:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.665 00:39:01 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:48.665 00:39:01 -- setup/common.sh@32 -- # continue 00:04:48.665 00:39:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.665 00:39:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.665 00:39:01 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:48.665 00:39:01 -- setup/common.sh@32 -- # continue 00:04:48.665 00:39:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.665 00:39:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.665 00:39:01 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:48.665 00:39:01 -- setup/common.sh@32 -- # continue 00:04:48.666 00:39:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.666 00:39:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.666 00:39:01 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:48.666 00:39:01 -- setup/common.sh@32 -- # continue 00:04:48.666 00:39:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.666 00:39:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.666 00:39:01 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:48.666 00:39:01 -- setup/common.sh@32 -- # continue 00:04:48.666 00:39:01 -- setup/common.sh@31 -- # IFS=': ' 
00:04:48.666 00:39:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.666 00:39:01 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:48.666 00:39:01 -- setup/common.sh@32 -- # continue 00:04:48.666 00:39:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.666 00:39:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.666 00:39:01 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:48.666 00:39:01 -- setup/common.sh@32 -- # continue 00:04:48.666 00:39:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.666 00:39:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.666 00:39:01 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:48.666 00:39:01 -- setup/common.sh@32 -- # continue 00:04:48.666 00:39:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.666 00:39:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.666 00:39:01 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:48.666 00:39:01 -- setup/common.sh@32 -- # continue 00:04:48.666 00:39:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.666 00:39:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.666 00:39:01 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:48.666 00:39:01 -- setup/common.sh@32 -- # continue 00:04:48.666 00:39:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.666 00:39:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.666 00:39:01 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:48.666 00:39:01 -- setup/common.sh@32 -- # continue 00:04:48.666 00:39:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.666 00:39:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.666 00:39:01 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:48.666 00:39:01 -- setup/common.sh@32 -- # continue 00:04:48.666 00:39:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.666 00:39:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.666 00:39:01 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:48.666 00:39:01 -- setup/common.sh@32 -- # continue 00:04:48.666 00:39:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.666 00:39:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.666 00:39:01 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:48.666 00:39:01 -- setup/common.sh@32 -- # continue 00:04:48.666 00:39:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.666 00:39:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.666 00:39:01 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:48.666 00:39:01 -- setup/common.sh@32 -- # continue 00:04:48.666 00:39:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.666 00:39:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.666 00:39:01 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:48.666 00:39:01 -- setup/common.sh@32 -- # continue 00:04:48.666 00:39:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.666 00:39:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.666 00:39:01 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:48.666 00:39:01 -- setup/common.sh@32 -- # continue 00:04:48.666 00:39:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.666 00:39:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.666 00:39:01 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:48.666 00:39:01 -- 
setup/common.sh@32 -- # continue 00:04:48.666 00:39:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.666 00:39:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.666 00:39:01 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:48.666 00:39:01 -- setup/common.sh@32 -- # continue 00:04:48.666 00:39:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.666 00:39:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.666 00:39:01 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:48.666 00:39:01 -- setup/common.sh@32 -- # continue 00:04:48.666 00:39:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.666 00:39:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.666 00:39:01 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:48.666 00:39:01 -- setup/common.sh@32 -- # continue 00:04:48.666 00:39:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.666 00:39:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.666 00:39:01 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:48.666 00:39:01 -- setup/common.sh@32 -- # continue 00:04:48.666 00:39:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.666 00:39:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.666 00:39:01 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:48.666 00:39:01 -- setup/common.sh@32 -- # continue 00:04:48.666 00:39:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.666 00:39:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.666 00:39:01 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:48.666 00:39:01 -- setup/common.sh@32 -- # continue 00:04:48.666 00:39:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.666 00:39:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.666 00:39:01 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:48.666 00:39:01 -- setup/common.sh@32 -- # continue 00:04:48.666 00:39:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.666 00:39:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.666 00:39:01 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:48.666 00:39:01 -- setup/common.sh@32 -- # continue 00:04:48.666 00:39:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.666 00:39:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.666 00:39:01 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:48.666 00:39:01 -- setup/common.sh@32 -- # continue 00:04:48.666 00:39:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.666 00:39:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.666 00:39:01 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:48.666 00:39:01 -- setup/common.sh@32 -- # continue 00:04:48.666 00:39:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.666 00:39:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.666 00:39:01 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:48.666 00:39:01 -- setup/common.sh@32 -- # continue 00:04:48.666 00:39:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.666 00:39:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.666 00:39:01 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:48.667 00:39:01 -- setup/common.sh@32 -- # continue 00:04:48.667 00:39:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.667 00:39:01 -- setup/common.sh@31 -- # read -r var val _ 
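The HugePages_Rsvd pass that finishes just below is the last of the per-field reads for this check; with anon, surplus, reserved and total in hand, the trace then asserts the accounting (hugepages.sh@97-110). Reduced to its arithmetic it amounts to the following sketch (the simplified control flow is an assumption; the commented values are the ones visible in this run):

    # Simplified restatement of the hugepage accounting check; reuses the
    # get_meminfo sketch shown earlier.
    nr_hugepages=1024                          # requested by the even_2G_alloc test
    anon=$(get_meminfo AnonHugePages)          # 0
    surp=$(get_meminfo HugePages_Surp)         # 0
    resv=$(get_meminfo HugePages_Rsvd)         # 0
    total=$(get_meminfo HugePages_Total)       # 1024
    (( total == nr_hugepages + surp + resv )) || echo 'hugepage accounting mismatch' >&2
    (( total == nr_hugepages )) && echo "nr_hugepages=$nr_hugepages"

Both comparisons hold here (no surplus or reserved pages), which is why the trace moves straight on to the per-node checks.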
00:04:48.667 00:39:01 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:48.667 00:39:01 -- setup/common.sh@32 -- # continue 00:04:48.667 00:39:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.667 00:39:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.667 00:39:01 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:48.667 00:39:01 -- setup/common.sh@32 -- # continue 00:04:48.667 00:39:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.667 00:39:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.667 00:39:01 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:48.667 00:39:01 -- setup/common.sh@32 -- # continue 00:04:48.667 00:39:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.667 00:39:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.667 00:39:01 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:48.667 00:39:01 -- setup/common.sh@32 -- # continue 00:04:48.667 00:39:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.667 00:39:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.667 00:39:01 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:48.667 00:39:01 -- setup/common.sh@32 -- # continue 00:04:48.667 00:39:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.667 00:39:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.667 00:39:01 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:48.667 00:39:01 -- setup/common.sh@32 -- # continue 00:04:48.667 00:39:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.667 00:39:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.667 00:39:01 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:48.667 00:39:01 -- setup/common.sh@32 -- # continue 00:04:48.667 00:39:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.667 00:39:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.667 00:39:01 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:48.667 00:39:01 -- setup/common.sh@33 -- # echo 0 00:04:48.667 00:39:01 -- setup/common.sh@33 -- # return 0 00:04:48.667 00:39:01 -- setup/hugepages.sh@100 -- # resv=0 00:04:48.667 00:39:01 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:04:48.667 nr_hugepages=1024 00:04:48.667 00:39:01 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:48.667 resv_hugepages=0 00:04:48.667 00:39:01 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:48.667 surplus_hugepages=0 00:04:48.667 00:39:01 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:48.667 anon_hugepages=0 00:04:48.667 00:39:01 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:48.667 00:39:01 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:04:48.667 00:39:01 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:48.667 00:39:01 -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:48.667 00:39:01 -- setup/common.sh@18 -- # local node= 00:04:48.667 00:39:01 -- setup/common.sh@19 -- # local var val 00:04:48.667 00:39:01 -- setup/common.sh@20 -- # local mem_f mem 00:04:48.667 00:39:01 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:48.667 00:39:01 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:48.667 00:39:01 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:48.667 00:39:01 -- setup/common.sh@28 -- # mapfile -t mem 00:04:48.667 00:39:01 -- setup/common.sh@29 -- # 
mem=("${mem[@]#Node +([0-9]) }") 00:04:48.667 00:39:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.667 00:39:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.667 00:39:01 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239120 kB' 'MemFree: 6523424 kB' 'MemAvailable: 9452940 kB' 'Buffers: 2684 kB' 'Cached: 3130152 kB' 'SwapCached: 0 kB' 'Active: 497368 kB' 'Inactive: 2753180 kB' 'Active(anon): 128200 kB' 'Inactive(anon): 0 kB' 'Active(file): 369168 kB' 'Inactive(file): 2753180 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 328 kB' 'Writeback: 0 kB' 'AnonPages: 119324 kB' 'Mapped: 50948 kB' 'Shmem: 10488 kB' 'KReclaimable: 88228 kB' 'Slab: 190628 kB' 'SReclaimable: 88228 kB' 'SUnreclaim: 102400 kB' 'KernelStack: 6816 kB' 'PageTables: 4336 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459588 kB' 'Committed_AS: 320412 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55512 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 196460 kB' 'DirectMap2M: 6094848 kB' 'DirectMap1G: 8388608 kB' 00:04:48.667 00:39:01 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:48.667 00:39:01 -- setup/common.sh@32 -- # continue 00:04:48.667 00:39:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.667 00:39:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.667 00:39:01 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:48.667 00:39:01 -- setup/common.sh@32 -- # continue 00:04:48.667 00:39:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.667 00:39:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.667 00:39:01 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:48.667 00:39:01 -- setup/common.sh@32 -- # continue 00:04:48.667 00:39:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.667 00:39:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.667 00:39:01 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:48.667 00:39:01 -- setup/common.sh@32 -- # continue 00:04:48.667 00:39:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.667 00:39:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.667 00:39:01 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:48.667 00:39:01 -- setup/common.sh@32 -- # continue 00:04:48.667 00:39:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.667 00:39:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.667 00:39:01 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:48.667 00:39:01 -- setup/common.sh@32 -- # continue 00:04:48.667 00:39:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.667 00:39:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.667 00:39:01 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:48.667 00:39:01 -- setup/common.sh@32 -- # continue 00:04:48.667 00:39:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.667 00:39:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.667 00:39:01 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:48.667 
00:39:01 -- setup/common.sh@32 -- # continue 00:04:48.667 00:39:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.667 00:39:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.667 00:39:01 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:48.667 00:39:01 -- setup/common.sh@32 -- # continue 00:04:48.667 00:39:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.668 00:39:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.668 00:39:01 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:48.668 00:39:01 -- setup/common.sh@32 -- # continue 00:04:48.668 00:39:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.668 00:39:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.668 00:39:01 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:48.668 00:39:01 -- setup/common.sh@32 -- # continue 00:04:48.668 00:39:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.668 00:39:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.668 00:39:01 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:48.668 00:39:01 -- setup/common.sh@32 -- # continue 00:04:48.668 00:39:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.668 00:39:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.668 00:39:01 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:48.668 00:39:01 -- setup/common.sh@32 -- # continue 00:04:48.668 00:39:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.668 00:39:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.668 00:39:01 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:48.668 00:39:01 -- setup/common.sh@32 -- # continue 00:04:48.668 00:39:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.668 00:39:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.668 00:39:01 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:48.668 00:39:01 -- setup/common.sh@32 -- # continue 00:04:48.668 00:39:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.668 00:39:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.668 00:39:01 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:48.668 00:39:01 -- setup/common.sh@32 -- # continue 00:04:48.668 00:39:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.668 00:39:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.668 00:39:01 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:48.668 00:39:01 -- setup/common.sh@32 -- # continue 00:04:48.668 00:39:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.668 00:39:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.668 00:39:01 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:48.668 00:39:01 -- setup/common.sh@32 -- # continue 00:04:48.668 00:39:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.668 00:39:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.668 00:39:01 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:48.668 00:39:01 -- setup/common.sh@32 -- # continue 00:04:48.668 00:39:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.668 00:39:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.668 00:39:01 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:48.668 00:39:01 -- setup/common.sh@32 -- # continue 00:04:48.668 00:39:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.668 00:39:01 -- setup/common.sh@31 -- # read -r var 
val _ 00:04:48.668 00:39:01 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:48.668 00:39:01 -- setup/common.sh@32 -- # continue 00:04:48.668 00:39:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.668 00:39:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.668 00:39:01 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:48.668 00:39:01 -- setup/common.sh@32 -- # continue 00:04:48.668 00:39:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.668 00:39:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.668 00:39:01 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:48.668 00:39:01 -- setup/common.sh@32 -- # continue 00:04:48.668 00:39:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.668 00:39:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.668 00:39:01 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:48.668 00:39:01 -- setup/common.sh@32 -- # continue 00:04:48.668 00:39:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.668 00:39:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.668 00:39:01 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:48.668 00:39:01 -- setup/common.sh@32 -- # continue 00:04:48.668 00:39:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.668 00:39:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.668 00:39:01 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:48.668 00:39:01 -- setup/common.sh@32 -- # continue 00:04:48.668 00:39:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.668 00:39:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.668 00:39:01 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:48.668 00:39:01 -- setup/common.sh@32 -- # continue 00:04:48.668 00:39:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.668 00:39:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.668 00:39:01 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:48.668 00:39:01 -- setup/common.sh@32 -- # continue 00:04:48.668 00:39:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.668 00:39:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.668 00:39:01 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:48.668 00:39:01 -- setup/common.sh@32 -- # continue 00:04:48.668 00:39:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.668 00:39:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.668 00:39:01 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:48.668 00:39:01 -- setup/common.sh@32 -- # continue 00:04:48.668 00:39:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.668 00:39:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.669 00:39:01 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:48.669 00:39:01 -- setup/common.sh@32 -- # continue 00:04:48.669 00:39:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.669 00:39:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.669 00:39:01 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:48.669 00:39:01 -- setup/common.sh@32 -- # continue 00:04:48.669 00:39:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.669 00:39:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.669 00:39:01 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:48.669 00:39:01 -- setup/common.sh@32 -- # continue 
00:04:48.669 00:39:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.669 00:39:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.669 00:39:01 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:48.669 00:39:01 -- setup/common.sh@32 -- # continue 00:04:48.669 00:39:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.669 00:39:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.669 00:39:01 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:48.669 00:39:01 -- setup/common.sh@32 -- # continue 00:04:48.669 00:39:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.669 00:39:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.669 00:39:01 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:48.669 00:39:01 -- setup/common.sh@32 -- # continue 00:04:48.669 00:39:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.669 00:39:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.669 00:39:01 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:48.669 00:39:01 -- setup/common.sh@32 -- # continue 00:04:48.669 00:39:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.669 00:39:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.669 00:39:01 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:48.669 00:39:01 -- setup/common.sh@32 -- # continue 00:04:48.669 00:39:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.669 00:39:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.669 00:39:01 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:48.669 00:39:01 -- setup/common.sh@32 -- # continue 00:04:48.669 00:39:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.669 00:39:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.669 00:39:01 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:48.669 00:39:01 -- setup/common.sh@32 -- # continue 00:04:48.669 00:39:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.669 00:39:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.669 00:39:01 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:48.669 00:39:01 -- setup/common.sh@32 -- # continue 00:04:48.669 00:39:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.669 00:39:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.669 00:39:01 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:48.669 00:39:01 -- setup/common.sh@32 -- # continue 00:04:48.669 00:39:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.669 00:39:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.669 00:39:01 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:48.669 00:39:01 -- setup/common.sh@32 -- # continue 00:04:48.669 00:39:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.669 00:39:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.669 00:39:01 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:48.669 00:39:01 -- setup/common.sh@32 -- # continue 00:04:48.669 00:39:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.669 00:39:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.669 00:39:01 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:48.669 00:39:01 -- setup/common.sh@32 -- # continue 00:04:48.669 00:39:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.669 00:39:01 -- setup/common.sh@31 -- # read -r var val _ 
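Once the HugePages_Total pass below matches (echo 1024) and the sum check passes again, get_nodes (hugepages.sh@27-33 in the trace) walks the NUMA nodes under /sys/devices/system/node and records a per-node hugepage count; on this single-node VM that leaves nodes_sys[0]=1024 and no_nodes=1. The trace only shows the already-expanded value, so where the count is read from is an assumption in the rough equivalent below (the sysfs nr_hugepages file is a guess, and the extglob node+([0-9]) loop is replaced by a plain glob):

    # Hypothetical get_nodes-style enumeration; the per-node count source is assumed.
    declare -a nodes_sys
    no_nodes=0
    for node in /sys/devices/system/node/node[0-9]*; do
        [[ -d $node ]] || continue
        idx=${node##*node}                                     # "node0" -> "0"
        nodes_sys[idx]=$(< "$node/hugepages/hugepages-2048kB/nr_hugepages")
        no_nodes=$(( no_nodes + 1 ))
    done
    echo "no_nodes=$no_nodes nodes_sys[0]=${nodes_sys[0]}"     # 1 and 1024 on this VM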
00:04:48.669 00:39:01 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:48.669 00:39:01 -- setup/common.sh@32 -- # continue 00:04:48.669 00:39:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.669 00:39:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.669 00:39:01 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:48.669 00:39:01 -- setup/common.sh@32 -- # continue 00:04:48.669 00:39:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.669 00:39:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.669 00:39:01 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:48.669 00:39:01 -- setup/common.sh@32 -- # continue 00:04:48.669 00:39:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.669 00:39:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.669 00:39:01 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:48.669 00:39:01 -- setup/common.sh@33 -- # echo 1024 00:04:48.669 00:39:01 -- setup/common.sh@33 -- # return 0 00:04:48.669 00:39:01 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:48.669 00:39:01 -- setup/hugepages.sh@112 -- # get_nodes 00:04:48.669 00:39:01 -- setup/hugepages.sh@27 -- # local node 00:04:48.669 00:39:01 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:48.669 00:39:01 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:04:48.669 00:39:01 -- setup/hugepages.sh@32 -- # no_nodes=1 00:04:48.669 00:39:01 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:48.669 00:39:01 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:48.669 00:39:01 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:48.669 00:39:01 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:48.669 00:39:01 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:48.669 00:39:01 -- setup/common.sh@18 -- # local node=0 00:04:48.669 00:39:01 -- setup/common.sh@19 -- # local var val 00:04:48.669 00:39:01 -- setup/common.sh@20 -- # local mem_f mem 00:04:48.669 00:39:01 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:48.669 00:39:01 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:48.669 00:39:01 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:48.669 00:39:01 -- setup/common.sh@28 -- # mapfile -t mem 00:04:48.669 00:39:01 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:48.669 00:39:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.669 00:39:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.669 00:39:01 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239120 kB' 'MemFree: 6523440 kB' 'MemUsed: 5715680 kB' 'SwapCached: 0 kB' 'Active: 497376 kB' 'Inactive: 2753180 kB' 'Active(anon): 128208 kB' 'Inactive(anon): 0 kB' 'Active(file): 369168 kB' 'Inactive(file): 2753180 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 328 kB' 'Writeback: 0 kB' 'FilePages: 3132836 kB' 'Mapped: 50948 kB' 'AnonPages: 119324 kB' 'Shmem: 10488 kB' 'KernelStack: 6816 kB' 'PageTables: 4336 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 88228 kB' 'Slab: 190616 kB' 'SReclaimable: 88228 kB' 'SUnreclaim: 102388 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:04:48.670 00:39:01 -- 
setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.670 00:39:01 -- setup/common.sh@32 -- # continue 00:04:48.670 00:39:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.670 00:39:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.670 00:39:01 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.670 00:39:01 -- setup/common.sh@32 -- # continue 00:04:48.670 00:39:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.670 00:39:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.670 00:39:01 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.670 00:39:01 -- setup/common.sh@32 -- # continue 00:04:48.670 00:39:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.670 00:39:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.670 00:39:01 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.670 00:39:01 -- setup/common.sh@32 -- # continue 00:04:48.670 00:39:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.670 00:39:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.670 00:39:01 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.670 00:39:01 -- setup/common.sh@32 -- # continue 00:04:48.670 00:39:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.670 00:39:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.670 00:39:01 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.670 00:39:01 -- setup/common.sh@32 -- # continue 00:04:48.670 00:39:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.670 00:39:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.670 00:39:01 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.670 00:39:01 -- setup/common.sh@32 -- # continue 00:04:48.670 00:39:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.670 00:39:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.670 00:39:01 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.670 00:39:01 -- setup/common.sh@32 -- # continue 00:04:48.670 00:39:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.670 00:39:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.670 00:39:01 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.670 00:39:01 -- setup/common.sh@32 -- # continue 00:04:48.670 00:39:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.670 00:39:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.670 00:39:01 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.670 00:39:01 -- setup/common.sh@32 -- # continue 00:04:48.670 00:39:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.670 00:39:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.670 00:39:01 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.670 00:39:01 -- setup/common.sh@32 -- # continue 00:04:48.670 00:39:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.670 00:39:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.670 00:39:01 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.670 00:39:01 -- setup/common.sh@32 -- # continue 00:04:48.670 00:39:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.670 00:39:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.670 00:39:01 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.670 00:39:01 -- setup/common.sh@32 -- # continue 00:04:48.670 00:39:01 -- setup/common.sh@31 -- # IFS=': ' 
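This pass is the per-node variant of the same scan: mem_f switched to /sys/devices/system/node/node0/meminfo a little earlier, so HugePages_Surp is now node0's own figure, and it gets folded into the expectation before the "node0=1024 expecting 1024" line further down. An approximate sketch of that loop (a simplification of hugepages.sh@115-130, not its literal code; it reuses get_meminfo, resv and nodes_sys from the sketches above):

    # Per-node comparison sketched from the trace.
    nodes_test=( [0]=1024 )                                  # pages the test assigned to node0
    for node in "${!nodes_test[@]}"; do
        (( nodes_test[node] += resv ))                       # resv=0 in this run
        node_surp=$(get_meminfo HugePages_Surp "$node")      # node0's own meminfo -> 0
        (( nodes_test[node] += node_surp ))
        echo "node$node=${nodes_sys[node]} expecting ${nodes_test[node]}"
    done
    # Prints "node0=1024 expecting 1024"; the [[ 1024 == 1024 ]] check below then passes.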
00:04:48.670 00:39:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.670 00:39:01 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.670 00:39:01 -- setup/common.sh@32 -- # continue 00:04:48.670 00:39:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.670 00:39:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.670 00:39:01 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.670 00:39:01 -- setup/common.sh@32 -- # continue 00:04:48.670 00:39:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.670 00:39:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.670 00:39:01 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.670 00:39:01 -- setup/common.sh@32 -- # continue 00:04:48.670 00:39:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.670 00:39:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.670 00:39:01 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.670 00:39:01 -- setup/common.sh@32 -- # continue 00:04:48.670 00:39:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.670 00:39:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.670 00:39:01 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.670 00:39:01 -- setup/common.sh@32 -- # continue 00:04:48.670 00:39:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.670 00:39:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.670 00:39:01 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.670 00:39:01 -- setup/common.sh@32 -- # continue 00:04:48.670 00:39:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.670 00:39:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.670 00:39:01 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.670 00:39:01 -- setup/common.sh@32 -- # continue 00:04:48.670 00:39:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.670 00:39:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.670 00:39:01 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.670 00:39:01 -- setup/common.sh@32 -- # continue 00:04:48.670 00:39:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.670 00:39:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.670 00:39:01 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.670 00:39:01 -- setup/common.sh@32 -- # continue 00:04:48.670 00:39:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.670 00:39:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.670 00:39:01 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.670 00:39:01 -- setup/common.sh@32 -- # continue 00:04:48.670 00:39:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.670 00:39:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.670 00:39:01 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.670 00:39:01 -- setup/common.sh@32 -- # continue 00:04:48.670 00:39:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.670 00:39:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.670 00:39:01 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.670 00:39:01 -- setup/common.sh@32 -- # continue 00:04:48.670 00:39:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.670 00:39:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.670 00:39:01 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.670 00:39:01 -- 
setup/common.sh@32 -- # continue 00:04:48.670 00:39:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.670 00:39:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.670 00:39:01 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.670 00:39:01 -- setup/common.sh@32 -- # continue 00:04:48.670 00:39:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.670 00:39:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.671 00:39:01 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.671 00:39:01 -- setup/common.sh@32 -- # continue 00:04:48.671 00:39:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.671 00:39:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.671 00:39:01 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.671 00:39:01 -- setup/common.sh@32 -- # continue 00:04:48.671 00:39:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.671 00:39:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.671 00:39:01 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.671 00:39:01 -- setup/common.sh@32 -- # continue 00:04:48.671 00:39:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.671 00:39:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.671 00:39:01 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.671 00:39:01 -- setup/common.sh@32 -- # continue 00:04:48.671 00:39:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.671 00:39:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.671 00:39:01 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.671 00:39:01 -- setup/common.sh@32 -- # continue 00:04:48.671 00:39:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.671 00:39:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.671 00:39:01 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.671 00:39:01 -- setup/common.sh@32 -- # continue 00:04:48.671 00:39:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.671 00:39:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.671 00:39:01 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.671 00:39:01 -- setup/common.sh@32 -- # continue 00:04:48.671 00:39:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.671 00:39:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.671 00:39:01 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.671 00:39:01 -- setup/common.sh@32 -- # continue 00:04:48.671 00:39:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.671 00:39:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.671 00:39:01 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.671 00:39:01 -- setup/common.sh@32 -- # continue 00:04:48.671 00:39:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.671 00:39:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.671 00:39:01 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.671 00:39:01 -- setup/common.sh@33 -- # echo 0 00:04:48.671 00:39:01 -- setup/common.sh@33 -- # return 0 00:04:48.671 00:39:01 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:48.671 00:39:01 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:48.671 00:39:01 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:48.671 00:39:01 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:48.671 
00:39:01 -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:04:48.671 node0=1024 expecting 1024 00:04:48.671 00:39:01 -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:04:48.671 00:04:48.671 real 0m0.597s 00:04:48.671 user 0m0.299s 00:04:48.671 sys 0m0.302s 00:04:48.671 ************************************ 00:04:48.671 END TEST even_2G_alloc 00:04:48.671 ************************************ 00:04:48.671 00:39:01 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:04:48.671 00:39:01 -- common/autotest_common.sh@10 -- # set +x 00:04:48.671 00:39:01 -- setup/hugepages.sh@213 -- # run_test odd_alloc odd_alloc 00:04:48.671 00:39:01 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:48.671 00:39:01 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:48.671 00:39:01 -- common/autotest_common.sh@10 -- # set +x 00:04:48.671 ************************************ 00:04:48.671 START TEST odd_alloc 00:04:48.671 ************************************ 00:04:48.671 00:39:01 -- common/autotest_common.sh@1114 -- # odd_alloc 00:04:48.671 00:39:01 -- setup/hugepages.sh@159 -- # get_test_nr_hugepages 2098176 00:04:48.671 00:39:01 -- setup/hugepages.sh@49 -- # local size=2098176 00:04:48.671 00:39:01 -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:04:48.671 00:39:01 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:04:48.930 00:39:01 -- setup/hugepages.sh@57 -- # nr_hugepages=1025 00:04:48.930 00:39:01 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:04:48.930 00:39:01 -- setup/hugepages.sh@62 -- # user_nodes=() 00:04:48.930 00:39:01 -- setup/hugepages.sh@62 -- # local user_nodes 00:04:48.930 00:39:01 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1025 00:04:48.930 00:39:01 -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:04:48.930 00:39:01 -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:48.930 00:39:01 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:48.930 00:39:01 -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:04:48.930 00:39:01 -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:04:48.930 00:39:01 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:48.930 00:39:01 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=1025 00:04:48.930 00:39:01 -- setup/hugepages.sh@83 -- # : 0 00:04:48.930 00:39:01 -- setup/hugepages.sh@84 -- # : 0 00:04:48.930 00:39:01 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:48.930 00:39:01 -- setup/hugepages.sh@160 -- # HUGEMEM=2049 00:04:48.930 00:39:01 -- setup/hugepages.sh@160 -- # HUGE_EVEN_ALLOC=yes 00:04:48.930 00:39:01 -- setup/hugepages.sh@160 -- # setup output 00:04:48.930 00:39:01 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:48.930 00:39:01 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:04:49.193 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:49.193 0000:00:06.0 (1b36 0010): Already using the uio_pci_generic driver 00:04:49.193 0000:00:07.0 (1b36 0010): Already using the uio_pci_generic driver 00:04:49.193 00:39:01 -- setup/hugepages.sh@161 -- # verify_nr_hugepages 00:04:49.193 00:39:01 -- setup/hugepages.sh@89 -- # local node 00:04:49.193 00:39:01 -- setup/hugepages.sh@90 -- # local sorted_t 00:04:49.193 00:39:01 -- setup/hugepages.sh@91 -- # local sorted_s 00:04:49.193 00:39:01 -- setup/hugepages.sh@92 -- # local surp 00:04:49.193 00:39:01 -- setup/hugepages.sh@93 -- # local resv 00:04:49.193 00:39:01 -- setup/hugepages.sh@94 -- # local anon 00:04:49.193 00:39:01 -- 
setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:49.193 00:39:01 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:49.193 00:39:01 -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:49.193 00:39:01 -- setup/common.sh@18 -- # local node= 00:04:49.193 00:39:01 -- setup/common.sh@19 -- # local var val 00:04:49.193 00:39:01 -- setup/common.sh@20 -- # local mem_f mem 00:04:49.193 00:39:01 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:49.193 00:39:01 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:49.193 00:39:01 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:49.193 00:39:01 -- setup/common.sh@28 -- # mapfile -t mem 00:04:49.193 00:39:01 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:49.193 00:39:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.194 00:39:01 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239120 kB' 'MemFree: 6524852 kB' 'MemAvailable: 9454368 kB' 'Buffers: 2684 kB' 'Cached: 3130152 kB' 'SwapCached: 0 kB' 'Active: 497556 kB' 'Inactive: 2753180 kB' 'Active(anon): 128388 kB' 'Inactive(anon): 0 kB' 'Active(file): 369168 kB' 'Inactive(file): 2753180 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 328 kB' 'Writeback: 0 kB' 'AnonPages: 119348 kB' 'Mapped: 51016 kB' 'Shmem: 10488 kB' 'KReclaimable: 88228 kB' 'Slab: 190580 kB' 'SReclaimable: 88228 kB' 'SUnreclaim: 102352 kB' 'KernelStack: 6784 kB' 'PageTables: 4216 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13458564 kB' 'Committed_AS: 320412 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55560 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 196460 kB' 'DirectMap2M: 6094848 kB' 'DirectMap1G: 8388608 kB' 00:04:49.194 00:39:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.194 00:39:01 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:49.194 00:39:01 -- setup/common.sh@32 -- # continue 00:04:49.194 00:39:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.194 00:39:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.194 00:39:01 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:49.194 00:39:01 -- setup/common.sh@32 -- # continue 00:04:49.194 00:39:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.194 00:39:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.194 00:39:01 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:49.194 00:39:01 -- setup/common.sh@32 -- # continue 00:04:49.194 00:39:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.194 00:39:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.194 00:39:01 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:49.194 00:39:01 -- setup/common.sh@32 -- # continue 00:04:49.194 00:39:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.194 00:39:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.194 00:39:01 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:49.194 00:39:01 -- setup/common.sh@32 -- # continue 00:04:49.194 00:39:01 -- setup/common.sh@31 -- # IFS=': ' 
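The odd_alloc case above asks for 2098176 kB (HUGEMEM=2049), which get_test_nr_hugepages turns into an intentionally odd 1025 pages of the default 2048 kB size, and verify_nr_hugepages then reads the kernel's view back out of /proc/meminfo. Every long run of "[[ key == ... ]] / continue" entries in this trace is setup/common.sh's get_meminfo walking a snapshot of that file under xtrace until it hits the requested key. A condensed, hand-written sketch of that walk (get_meminfo_sketch is an illustrative name, not the verbatim SPDK helper, which snapshots the file into an array first via the printf/mapfile visible above):

    get_meminfo_sketch() {
        local get=$1 node=${2:-}               # key to look up, optional NUMA node
        local mem_f=/proc/meminfo
        if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        local line var val _
        while read -r line; do
            line=${line#Node [0-9]* }          # per-node files prefix each key with "Node N "
            IFS=': ' read -r var val _ <<< "$line"
            if [[ $var == "$get" ]]; then      # stop at the requested key
                echo "${val:-0}"
                return 0
            fi
        done < "$mem_f"
        echo 0
    }

On this box get_meminfo_sketch AnonHugePages prints 0, and get_meminfo_sketch HugePages_Surp 0 switches to the node0 file, which is the per-node variant that shows up near the end of this section.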
00:04:49.194 00:39:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.194 00:39:01 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:49.194 00:39:01 -- setup/common.sh@32 -- # continue 00:04:49.194 00:39:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.194 00:39:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.194 00:39:01 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:49.194 00:39:01 -- setup/common.sh@32 -- # continue 00:04:49.194 00:39:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.194 00:39:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.194 00:39:01 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:49.194 00:39:01 -- setup/common.sh@32 -- # continue 00:04:49.194 00:39:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.194 00:39:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.194 00:39:01 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:49.194 00:39:01 -- setup/common.sh@32 -- # continue 00:04:49.194 00:39:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.194 00:39:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.194 00:39:01 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:49.194 00:39:01 -- setup/common.sh@32 -- # continue 00:04:49.194 00:39:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.194 00:39:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.194 00:39:01 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:49.194 00:39:01 -- setup/common.sh@32 -- # continue 00:04:49.194 00:39:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.194 00:39:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.194 00:39:01 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:49.194 00:39:01 -- setup/common.sh@32 -- # continue 00:04:49.194 00:39:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.194 00:39:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.194 00:39:01 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:49.194 00:39:01 -- setup/common.sh@32 -- # continue 00:04:49.194 00:39:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.194 00:39:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.194 00:39:01 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:49.194 00:39:01 -- setup/common.sh@32 -- # continue 00:04:49.194 00:39:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.194 00:39:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.194 00:39:01 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:49.194 00:39:01 -- setup/common.sh@32 -- # continue 00:04:49.194 00:39:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.194 00:39:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.194 00:39:01 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:49.194 00:39:01 -- setup/common.sh@32 -- # continue 00:04:49.194 00:39:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.194 00:39:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.194 00:39:01 -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:49.194 00:39:01 -- setup/common.sh@32 -- # continue 00:04:49.194 00:39:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.194 00:39:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.194 00:39:01 -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:49.194 00:39:01 -- setup/common.sh@32 -- # 
continue 00:04:49.194 00:39:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.194 00:39:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.194 00:39:01 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:49.194 00:39:01 -- setup/common.sh@32 -- # continue 00:04:49.194 00:39:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.194 00:39:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.194 00:39:01 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:49.194 00:39:01 -- setup/common.sh@32 -- # continue 00:04:49.194 00:39:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.194 00:39:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.194 00:39:01 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:49.194 00:39:01 -- setup/common.sh@32 -- # continue 00:04:49.194 00:39:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.194 00:39:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.194 00:39:01 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:49.194 00:39:01 -- setup/common.sh@32 -- # continue 00:04:49.194 00:39:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.194 00:39:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.194 00:39:01 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:49.194 00:39:01 -- setup/common.sh@32 -- # continue 00:04:49.194 00:39:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.194 00:39:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.194 00:39:01 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:49.194 00:39:01 -- setup/common.sh@32 -- # continue 00:04:49.194 00:39:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.194 00:39:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.194 00:39:01 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:49.194 00:39:01 -- setup/common.sh@32 -- # continue 00:04:49.194 00:39:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.194 00:39:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.194 00:39:01 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:49.194 00:39:01 -- setup/common.sh@32 -- # continue 00:04:49.194 00:39:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.194 00:39:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.194 00:39:01 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:49.194 00:39:01 -- setup/common.sh@32 -- # continue 00:04:49.194 00:39:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.194 00:39:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.194 00:39:01 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:49.194 00:39:01 -- setup/common.sh@32 -- # continue 00:04:49.194 00:39:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.194 00:39:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.194 00:39:01 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:49.194 00:39:01 -- setup/common.sh@32 -- # continue 00:04:49.194 00:39:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.194 00:39:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.194 00:39:01 -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:49.194 00:39:01 -- setup/common.sh@32 -- # continue 00:04:49.194 00:39:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.194 00:39:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.194 00:39:01 -- setup/common.sh@32 -- # [[ NFS_Unstable == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:49.194 00:39:01 -- setup/common.sh@32 -- # continue 00:04:49.194 00:39:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.194 00:39:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.194 00:39:01 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:49.194 00:39:01 -- setup/common.sh@32 -- # continue 00:04:49.194 00:39:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.194 00:39:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.194 00:39:01 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:49.194 00:39:01 -- setup/common.sh@32 -- # continue 00:04:49.194 00:39:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.194 00:39:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.194 00:39:01 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:49.194 00:39:01 -- setup/common.sh@32 -- # continue 00:04:49.194 00:39:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.194 00:39:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.194 00:39:01 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:49.194 00:39:01 -- setup/common.sh@32 -- # continue 00:04:49.194 00:39:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.194 00:39:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.194 00:39:01 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:49.194 00:39:01 -- setup/common.sh@32 -- # continue 00:04:49.194 00:39:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.194 00:39:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.194 00:39:01 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:49.194 00:39:01 -- setup/common.sh@32 -- # continue 00:04:49.194 00:39:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.195 00:39:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.195 00:39:01 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:49.195 00:39:01 -- setup/common.sh@32 -- # continue 00:04:49.195 00:39:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.195 00:39:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.195 00:39:01 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:49.195 00:39:01 -- setup/common.sh@32 -- # continue 00:04:49.195 00:39:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.195 00:39:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.195 00:39:01 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:49.195 00:39:01 -- setup/common.sh@32 -- # continue 00:04:49.195 00:39:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.195 00:39:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.195 00:39:01 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:49.195 00:39:01 -- setup/common.sh@33 -- # echo 0 00:04:49.195 00:39:01 -- setup/common.sh@33 -- # return 0 00:04:49.195 00:39:01 -- setup/hugepages.sh@97 -- # anon=0 00:04:49.195 00:39:01 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:49.195 00:39:01 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:49.195 00:39:01 -- setup/common.sh@18 -- # local node= 00:04:49.195 00:39:01 -- setup/common.sh@19 -- # local var val 00:04:49.195 00:39:01 -- setup/common.sh@20 -- # local mem_f mem 00:04:49.195 00:39:01 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:49.195 00:39:01 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:49.195 00:39:01 -- 
setup/common.sh@25 -- # [[ -n '' ]] 00:04:49.195 00:39:01 -- setup/common.sh@28 -- # mapfile -t mem 00:04:49.195 00:39:01 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:49.195 00:39:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.195 00:39:01 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239120 kB' 'MemFree: 6524852 kB' 'MemAvailable: 9454368 kB' 'Buffers: 2684 kB' 'Cached: 3130152 kB' 'SwapCached: 0 kB' 'Active: 497492 kB' 'Inactive: 2753180 kB' 'Active(anon): 128324 kB' 'Inactive(anon): 0 kB' 'Active(file): 369168 kB' 'Inactive(file): 2753180 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 328 kB' 'Writeback: 0 kB' 'AnonPages: 119284 kB' 'Mapped: 50896 kB' 'Shmem: 10488 kB' 'KReclaimable: 88228 kB' 'Slab: 190588 kB' 'SReclaimable: 88228 kB' 'SUnreclaim: 102360 kB' 'KernelStack: 6808 kB' 'PageTables: 4388 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13458564 kB' 'Committed_AS: 320412 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55528 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 196460 kB' 'DirectMap2M: 6094848 kB' 'DirectMap1G: 8388608 kB' 00:04:49.195 00:39:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.195 00:39:01 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.195 00:39:01 -- setup/common.sh@32 -- # continue 00:04:49.195 00:39:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.195 00:39:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.195 00:39:01 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.195 00:39:01 -- setup/common.sh@32 -- # continue 00:04:49.195 00:39:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.195 00:39:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.195 00:39:01 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.195 00:39:01 -- setup/common.sh@32 -- # continue 00:04:49.195 00:39:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.195 00:39:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.195 00:39:01 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.195 00:39:01 -- setup/common.sh@32 -- # continue 00:04:49.195 00:39:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.195 00:39:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.195 00:39:01 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.195 00:39:01 -- setup/common.sh@32 -- # continue 00:04:49.195 00:39:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.195 00:39:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.195 00:39:01 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.195 00:39:01 -- setup/common.sh@32 -- # continue 00:04:49.195 00:39:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.195 00:39:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.195 00:39:01 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.195 00:39:01 -- setup/common.sh@32 -- # continue 00:04:49.195 00:39:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.195 00:39:01 -- setup/common.sh@31 
-- # read -r var val _ 00:04:49.195 00:39:01 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.195 00:39:01 -- setup/common.sh@32 -- # continue 00:04:49.195 00:39:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.195 00:39:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.195 00:39:01 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.195 00:39:01 -- setup/common.sh@32 -- # continue 00:04:49.195 00:39:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.195 00:39:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.195 00:39:01 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.195 00:39:01 -- setup/common.sh@32 -- # continue 00:04:49.195 00:39:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.195 00:39:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.195 00:39:01 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.195 00:39:01 -- setup/common.sh@32 -- # continue 00:04:49.195 00:39:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.195 00:39:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.195 00:39:01 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.195 00:39:01 -- setup/common.sh@32 -- # continue 00:04:49.195 00:39:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.195 00:39:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.195 00:39:01 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.195 00:39:01 -- setup/common.sh@32 -- # continue 00:04:49.195 00:39:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.195 00:39:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.195 00:39:01 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.195 00:39:01 -- setup/common.sh@32 -- # continue 00:04:49.195 00:39:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.195 00:39:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.195 00:39:01 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.195 00:39:01 -- setup/common.sh@32 -- # continue 00:04:49.195 00:39:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.195 00:39:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.195 00:39:01 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.195 00:39:01 -- setup/common.sh@32 -- # continue 00:04:49.195 00:39:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.195 00:39:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.195 00:39:01 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.195 00:39:01 -- setup/common.sh@32 -- # continue 00:04:49.195 00:39:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.195 00:39:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.195 00:39:01 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.195 00:39:01 -- setup/common.sh@32 -- # continue 00:04:49.195 00:39:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.195 00:39:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.195 00:39:01 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.195 00:39:01 -- setup/common.sh@32 -- # continue 00:04:49.195 00:39:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.195 00:39:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.195 00:39:01 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.195 00:39:01 -- setup/common.sh@32 -- # continue 
00:04:49.195 00:39:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.195 00:39:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.195 00:39:01 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.195 00:39:01 -- setup/common.sh@32 -- # continue 00:04:49.195 00:39:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.195 00:39:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.195 00:39:01 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.195 00:39:01 -- setup/common.sh@32 -- # continue 00:04:49.195 00:39:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.195 00:39:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.195 00:39:01 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.195 00:39:01 -- setup/common.sh@32 -- # continue 00:04:49.195 00:39:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.195 00:39:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.195 00:39:01 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.195 00:39:01 -- setup/common.sh@32 -- # continue 00:04:49.195 00:39:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.195 00:39:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.195 00:39:01 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.195 00:39:01 -- setup/common.sh@32 -- # continue 00:04:49.195 00:39:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.195 00:39:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.195 00:39:01 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.195 00:39:01 -- setup/common.sh@32 -- # continue 00:04:49.195 00:39:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.195 00:39:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.195 00:39:01 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.195 00:39:01 -- setup/common.sh@32 -- # continue 00:04:49.195 00:39:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.195 00:39:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.195 00:39:01 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.195 00:39:01 -- setup/common.sh@32 -- # continue 00:04:49.195 00:39:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.195 00:39:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.195 00:39:01 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.195 00:39:01 -- setup/common.sh@32 -- # continue 00:04:49.195 00:39:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.195 00:39:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.196 00:39:01 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.196 00:39:01 -- setup/common.sh@32 -- # continue 00:04:49.196 00:39:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.196 00:39:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.196 00:39:01 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.196 00:39:01 -- setup/common.sh@32 -- # continue 00:04:49.196 00:39:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.196 00:39:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.196 00:39:01 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.196 00:39:01 -- setup/common.sh@32 -- # continue 00:04:49.196 00:39:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.196 00:39:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.196 00:39:01 -- setup/common.sh@32 -- # [[ WritebackTmp == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.196 00:39:01 -- setup/common.sh@32 -- # continue 00:04:49.196 00:39:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.196 00:39:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.196 00:39:01 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.196 00:39:01 -- setup/common.sh@32 -- # continue 00:04:49.196 00:39:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.196 00:39:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.196 00:39:01 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.196 00:39:01 -- setup/common.sh@32 -- # continue 00:04:49.196 00:39:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.196 00:39:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.196 00:39:01 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.196 00:39:01 -- setup/common.sh@32 -- # continue 00:04:49.196 00:39:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.196 00:39:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.196 00:39:01 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.196 00:39:01 -- setup/common.sh@32 -- # continue 00:04:49.196 00:39:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.196 00:39:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.196 00:39:01 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.196 00:39:01 -- setup/common.sh@32 -- # continue 00:04:49.196 00:39:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.196 00:39:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.196 00:39:01 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.196 00:39:01 -- setup/common.sh@32 -- # continue 00:04:49.196 00:39:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.196 00:39:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.196 00:39:01 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.196 00:39:01 -- setup/common.sh@32 -- # continue 00:04:49.196 00:39:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.196 00:39:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.196 00:39:01 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.196 00:39:01 -- setup/common.sh@32 -- # continue 00:04:49.196 00:39:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.196 00:39:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.196 00:39:01 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.196 00:39:01 -- setup/common.sh@32 -- # continue 00:04:49.196 00:39:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.196 00:39:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.196 00:39:01 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.196 00:39:01 -- setup/common.sh@32 -- # continue 00:04:49.196 00:39:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.196 00:39:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.196 00:39:01 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.196 00:39:01 -- setup/common.sh@32 -- # continue 00:04:49.196 00:39:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.196 00:39:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.196 00:39:01 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.196 00:39:01 -- setup/common.sh@32 -- # continue 00:04:49.196 00:39:01 -- setup/common.sh@31 -- # IFS=': ' 
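The same scan repeats for HugePages_Surp here and for HugePages_Rsvd and HugePages_Total below; anon, surp and resv all come back 0 in this run while HugePages_Total comes back 1025, and those values feed the consistency check in setup/hugepages.sh. Reading the keys directly (read_key is an illustrative helper, not part of the scripts being traced), the check amounts to roughly:

    # Rough shape of the accounting check this trace is building up to.
    # The values in the comments are the ones read back in this run.
    read_key() { awk -v k="$1" -F': +' '$1 == k { print $2 + 0 }' /proc/meminfo; }
    nr_hugepages=1025                     # requested by the odd_alloc test
    anon=$(read_key AnonHugePages)        # 0
    surp=$(read_key HugePages_Surp)       # 0
    resv=$(read_key HugePages_Rsvd)       # 0
    total=$(read_key HugePages_Total)     # 1025
    (( total == nr_hugepages + surp + resv )) || echo 'hugepage accounting mismatch' >&2

The "(( 1025 == nr_hugepages + surp + resv ))" entry further down is exactly this check with the HugePages_Total value already substituted in.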
00:04:49.196 00:39:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.196 00:39:01 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.196 00:39:01 -- setup/common.sh@32 -- # continue 00:04:49.196 00:39:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.196 00:39:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.196 00:39:01 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.196 00:39:01 -- setup/common.sh@32 -- # continue 00:04:49.196 00:39:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.196 00:39:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.196 00:39:01 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.196 00:39:01 -- setup/common.sh@32 -- # continue 00:04:49.196 00:39:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.196 00:39:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.196 00:39:01 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.196 00:39:01 -- setup/common.sh@32 -- # continue 00:04:49.196 00:39:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.196 00:39:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.196 00:39:01 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.196 00:39:01 -- setup/common.sh@32 -- # continue 00:04:49.196 00:39:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.196 00:39:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.196 00:39:01 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.196 00:39:01 -- setup/common.sh@32 -- # continue 00:04:49.196 00:39:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.196 00:39:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.196 00:39:01 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.196 00:39:01 -- setup/common.sh@33 -- # echo 0 00:04:49.196 00:39:01 -- setup/common.sh@33 -- # return 0 00:04:49.196 00:39:01 -- setup/hugepages.sh@99 -- # surp=0 00:04:49.196 00:39:01 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:49.196 00:39:01 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:49.196 00:39:01 -- setup/common.sh@18 -- # local node= 00:04:49.196 00:39:01 -- setup/common.sh@19 -- # local var val 00:04:49.196 00:39:01 -- setup/common.sh@20 -- # local mem_f mem 00:04:49.196 00:39:01 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:49.196 00:39:01 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:49.196 00:39:01 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:49.196 00:39:01 -- setup/common.sh@28 -- # mapfile -t mem 00:04:49.196 00:39:01 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:49.196 00:39:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.196 00:39:01 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239120 kB' 'MemFree: 6524852 kB' 'MemAvailable: 9454368 kB' 'Buffers: 2684 kB' 'Cached: 3130152 kB' 'SwapCached: 0 kB' 'Active: 497516 kB' 'Inactive: 2753180 kB' 'Active(anon): 128348 kB' 'Inactive(anon): 0 kB' 'Active(file): 369168 kB' 'Inactive(file): 2753180 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 328 kB' 'Writeback: 0 kB' 'AnonPages: 119304 kB' 'Mapped: 50896 kB' 'Shmem: 10488 kB' 'KReclaimable: 88228 kB' 'Slab: 190588 kB' 'SReclaimable: 88228 kB' 'SUnreclaim: 102360 kB' 'KernelStack: 6808 kB' 'PageTables: 4388 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 
kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13458564 kB' 'Committed_AS: 320412 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55544 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 196460 kB' 'DirectMap2M: 6094848 kB' 'DirectMap1G: 8388608 kB' 00:04:49.196 00:39:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.196 00:39:01 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:49.196 00:39:01 -- setup/common.sh@32 -- # continue 00:04:49.196 00:39:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.196 00:39:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.196 00:39:01 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:49.196 00:39:01 -- setup/common.sh@32 -- # continue 00:04:49.196 00:39:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.196 00:39:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.196 00:39:01 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:49.196 00:39:01 -- setup/common.sh@32 -- # continue 00:04:49.196 00:39:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.196 00:39:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.196 00:39:01 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:49.196 00:39:01 -- setup/common.sh@32 -- # continue 00:04:49.196 00:39:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.196 00:39:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.196 00:39:01 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:49.196 00:39:01 -- setup/common.sh@32 -- # continue 00:04:49.196 00:39:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.196 00:39:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.196 00:39:01 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:49.196 00:39:01 -- setup/common.sh@32 -- # continue 00:04:49.196 00:39:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.196 00:39:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.196 00:39:01 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:49.196 00:39:01 -- setup/common.sh@32 -- # continue 00:04:49.196 00:39:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.196 00:39:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.196 00:39:01 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:49.196 00:39:01 -- setup/common.sh@32 -- # continue 00:04:49.196 00:39:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.196 00:39:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.196 00:39:01 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:49.196 00:39:01 -- setup/common.sh@32 -- # continue 00:04:49.196 00:39:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.196 00:39:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.196 00:39:01 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:49.196 00:39:01 -- setup/common.sh@32 -- # continue 00:04:49.196 00:39:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.197 00:39:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.197 00:39:01 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:49.197 
00:39:01 -- setup/common.sh@32 -- # continue 00:04:49.197 00:39:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.197 00:39:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.197 00:39:01 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:49.197 00:39:01 -- setup/common.sh@32 -- # continue 00:04:49.197 00:39:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.197 00:39:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.197 00:39:01 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:49.197 00:39:01 -- setup/common.sh@32 -- # continue 00:04:49.197 00:39:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.197 00:39:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.197 00:39:01 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:49.197 00:39:01 -- setup/common.sh@32 -- # continue 00:04:49.197 00:39:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.197 00:39:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.197 00:39:01 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:49.197 00:39:01 -- setup/common.sh@32 -- # continue 00:04:49.197 00:39:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.197 00:39:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.197 00:39:01 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:49.197 00:39:01 -- setup/common.sh@32 -- # continue 00:04:49.197 00:39:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.197 00:39:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.197 00:39:01 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:49.197 00:39:01 -- setup/common.sh@32 -- # continue 00:04:49.197 00:39:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.197 00:39:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.197 00:39:01 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:49.197 00:39:01 -- setup/common.sh@32 -- # continue 00:04:49.197 00:39:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.197 00:39:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.197 00:39:01 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:49.197 00:39:01 -- setup/common.sh@32 -- # continue 00:04:49.197 00:39:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.197 00:39:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.197 00:39:01 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:49.197 00:39:01 -- setup/common.sh@32 -- # continue 00:04:49.197 00:39:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.197 00:39:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.197 00:39:01 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:49.197 00:39:01 -- setup/common.sh@32 -- # continue 00:04:49.197 00:39:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.197 00:39:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.197 00:39:01 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:49.197 00:39:01 -- setup/common.sh@32 -- # continue 00:04:49.197 00:39:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.197 00:39:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.197 00:39:01 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:49.197 00:39:01 -- setup/common.sh@32 -- # continue 00:04:49.197 00:39:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.197 00:39:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.197 00:39:01 -- 
setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:49.197 00:39:01 -- setup/common.sh@32 -- # continue 00:04:49.197 00:39:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.197 00:39:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.197 00:39:01 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:49.197 00:39:01 -- setup/common.sh@32 -- # continue 00:04:49.197 00:39:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.197 00:39:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.197 00:39:01 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:49.197 00:39:01 -- setup/common.sh@32 -- # continue 00:04:49.197 00:39:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.197 00:39:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.197 00:39:01 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:49.197 00:39:01 -- setup/common.sh@32 -- # continue 00:04:49.197 00:39:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.197 00:39:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.197 00:39:01 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:49.197 00:39:01 -- setup/common.sh@32 -- # continue 00:04:49.197 00:39:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.197 00:39:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.197 00:39:01 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:49.197 00:39:01 -- setup/common.sh@32 -- # continue 00:04:49.197 00:39:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.197 00:39:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.197 00:39:01 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:49.197 00:39:01 -- setup/common.sh@32 -- # continue 00:04:49.197 00:39:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.197 00:39:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.197 00:39:01 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:49.197 00:39:01 -- setup/common.sh@32 -- # continue 00:04:49.197 00:39:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.197 00:39:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.197 00:39:01 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:49.197 00:39:01 -- setup/common.sh@32 -- # continue 00:04:49.197 00:39:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.197 00:39:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.197 00:39:01 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:49.197 00:39:01 -- setup/common.sh@32 -- # continue 00:04:49.197 00:39:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.197 00:39:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.197 00:39:01 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:49.197 00:39:01 -- setup/common.sh@32 -- # continue 00:04:49.197 00:39:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.197 00:39:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.197 00:39:01 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:49.197 00:39:01 -- setup/common.sh@32 -- # continue 00:04:49.197 00:39:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.197 00:39:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.197 00:39:01 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:49.197 00:39:01 -- setup/common.sh@32 -- # continue 00:04:49.197 00:39:01 -- setup/common.sh@31 
-- # IFS=': ' 00:04:49.197 00:39:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.197 00:39:01 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:49.197 00:39:01 -- setup/common.sh@32 -- # continue 00:04:49.197 00:39:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.197 00:39:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.197 00:39:01 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:49.197 00:39:01 -- setup/common.sh@32 -- # continue 00:04:49.197 00:39:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.197 00:39:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.197 00:39:01 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:49.197 00:39:01 -- setup/common.sh@32 -- # continue 00:04:49.197 00:39:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.197 00:39:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.197 00:39:01 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:49.197 00:39:01 -- setup/common.sh@32 -- # continue 00:04:49.197 00:39:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.197 00:39:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.197 00:39:01 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:49.197 00:39:01 -- setup/common.sh@32 -- # continue 00:04:49.197 00:39:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.197 00:39:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.197 00:39:01 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:49.197 00:39:01 -- setup/common.sh@32 -- # continue 00:04:49.197 00:39:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.197 00:39:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.197 00:39:01 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:49.197 00:39:01 -- setup/common.sh@32 -- # continue 00:04:49.197 00:39:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.197 00:39:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.197 00:39:01 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:49.197 00:39:01 -- setup/common.sh@32 -- # continue 00:04:49.197 00:39:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.197 00:39:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.197 00:39:01 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:49.197 00:39:01 -- setup/common.sh@32 -- # continue 00:04:49.197 00:39:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.197 00:39:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.197 00:39:01 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:49.197 00:39:01 -- setup/common.sh@32 -- # continue 00:04:49.197 00:39:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.197 00:39:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.197 00:39:01 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:49.197 00:39:01 -- setup/common.sh@32 -- # continue 00:04:49.197 00:39:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.197 00:39:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.197 00:39:01 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:49.197 00:39:01 -- setup/common.sh@32 -- # continue 00:04:49.197 00:39:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.197 00:39:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.197 00:39:01 -- setup/common.sh@32 -- # [[ HugePages_Total == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:49.197 00:39:01 -- setup/common.sh@32 -- # continue 00:04:49.197 00:39:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.197 00:39:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.197 00:39:01 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:49.197 00:39:01 -- setup/common.sh@32 -- # continue 00:04:49.197 00:39:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.197 00:39:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.197 00:39:01 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:49.198 00:39:01 -- setup/common.sh@33 -- # echo 0 00:04:49.198 00:39:01 -- setup/common.sh@33 -- # return 0 00:04:49.198 00:39:01 -- setup/hugepages.sh@100 -- # resv=0 00:04:49.198 nr_hugepages=1025 00:04:49.198 00:39:01 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1025 00:04:49.198 resv_hugepages=0 00:04:49.198 00:39:01 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:49.198 surplus_hugepages=0 00:04:49.198 anon_hugepages=0 00:04:49.198 00:39:01 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:49.198 00:39:01 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:49.198 00:39:01 -- setup/hugepages.sh@107 -- # (( 1025 == nr_hugepages + surp + resv )) 00:04:49.198 00:39:01 -- setup/hugepages.sh@109 -- # (( 1025 == nr_hugepages )) 00:04:49.198 00:39:01 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:49.198 00:39:01 -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:49.198 00:39:01 -- setup/common.sh@18 -- # local node= 00:04:49.198 00:39:01 -- setup/common.sh@19 -- # local var val 00:04:49.198 00:39:01 -- setup/common.sh@20 -- # local mem_f mem 00:04:49.198 00:39:01 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:49.198 00:39:01 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:49.198 00:39:01 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:49.198 00:39:01 -- setup/common.sh@28 -- # mapfile -t mem 00:04:49.198 00:39:01 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:49.198 00:39:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.198 00:39:01 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239120 kB' 'MemFree: 6524852 kB' 'MemAvailable: 9454368 kB' 'Buffers: 2684 kB' 'Cached: 3130152 kB' 'SwapCached: 0 kB' 'Active: 497556 kB' 'Inactive: 2753180 kB' 'Active(anon): 128388 kB' 'Inactive(anon): 0 kB' 'Active(file): 369168 kB' 'Inactive(file): 2753180 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 328 kB' 'Writeback: 0 kB' 'AnonPages: 119380 kB' 'Mapped: 50896 kB' 'Shmem: 10488 kB' 'KReclaimable: 88228 kB' 'Slab: 190588 kB' 'SReclaimable: 88228 kB' 'SUnreclaim: 102360 kB' 'KernelStack: 6824 kB' 'PageTables: 4436 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13458564 kB' 'Committed_AS: 320412 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55544 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 196460 kB' 'DirectMap2M: 6094848 kB' 'DirectMap1G: 8388608 kB' 00:04:49.198 00:39:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.198 
00:39:01 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:49.198 00:39:01 -- setup/common.sh@32 -- # continue 00:04:49.198 00:39:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.198 00:39:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.198 00:39:01 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:49.198 00:39:01 -- setup/common.sh@32 -- # continue 00:04:49.198 00:39:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.198 00:39:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.198 00:39:01 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:49.198 00:39:01 -- setup/common.sh@32 -- # continue 00:04:49.198 00:39:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.198 00:39:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.198 00:39:01 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:49.198 00:39:01 -- setup/common.sh@32 -- # continue 00:04:49.198 00:39:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.198 00:39:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.198 00:39:01 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:49.198 00:39:01 -- setup/common.sh@32 -- # continue 00:04:49.198 00:39:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.198 00:39:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.198 00:39:01 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:49.198 00:39:01 -- setup/common.sh@32 -- # continue 00:04:49.198 00:39:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.198 00:39:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.198 00:39:01 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:49.198 00:39:01 -- setup/common.sh@32 -- # continue 00:04:49.198 00:39:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.198 00:39:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.198 00:39:01 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:49.198 00:39:01 -- setup/common.sh@32 -- # continue 00:04:49.198 00:39:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.198 00:39:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.198 00:39:01 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:49.198 00:39:01 -- setup/common.sh@32 -- # continue 00:04:49.198 00:39:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.198 00:39:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.198 00:39:01 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:49.198 00:39:01 -- setup/common.sh@32 -- # continue 00:04:49.198 00:39:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.198 00:39:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.198 00:39:01 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:49.198 00:39:01 -- setup/common.sh@32 -- # continue 00:04:49.198 00:39:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.456 00:39:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.456 00:39:01 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:49.456 00:39:01 -- setup/common.sh@32 -- # continue 00:04:49.456 00:39:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.456 00:39:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.456 00:39:01 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:49.456 00:39:01 -- setup/common.sh@32 -- # continue 00:04:49.456 
00:39:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.456 00:39:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.456 00:39:01 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:49.456 00:39:01 -- setup/common.sh@32 -- # continue 00:04:49.456 00:39:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.456 00:39:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.456 00:39:01 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:49.456 00:39:01 -- setup/common.sh@32 -- # continue 00:04:49.456 00:39:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.456 00:39:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.456 00:39:01 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:49.456 00:39:01 -- setup/common.sh@32 -- # continue 00:04:49.456 00:39:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.456 00:39:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.456 00:39:01 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:49.456 00:39:01 -- setup/common.sh@32 -- # continue 00:04:49.456 00:39:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.456 00:39:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.456 00:39:01 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:49.456 00:39:01 -- setup/common.sh@32 -- # continue 00:04:49.456 00:39:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.456 00:39:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.456 00:39:01 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:49.456 00:39:01 -- setup/common.sh@32 -- # continue 00:04:49.456 00:39:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.456 00:39:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.456 00:39:01 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:49.456 00:39:01 -- setup/common.sh@32 -- # continue 00:04:49.456 00:39:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.456 00:39:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.456 00:39:01 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:49.457 00:39:01 -- setup/common.sh@32 -- # continue 00:04:49.457 00:39:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.457 00:39:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.457 00:39:01 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:49.457 00:39:01 -- setup/common.sh@32 -- # continue 00:04:49.457 00:39:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.457 00:39:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.457 00:39:01 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:49.457 00:39:01 -- setup/common.sh@32 -- # continue 00:04:49.457 00:39:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.457 00:39:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.457 00:39:01 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:49.457 00:39:01 -- setup/common.sh@32 -- # continue 00:04:49.457 00:39:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.457 00:39:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.457 00:39:01 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:49.457 00:39:01 -- setup/common.sh@32 -- # continue 00:04:49.457 00:39:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.457 00:39:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.457 00:39:01 -- setup/common.sh@32 -- # [[ SReclaimable == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:49.457 00:39:01 -- setup/common.sh@32 -- # continue 00:04:49.457 00:39:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.457 00:39:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.457 00:39:01 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:49.457 00:39:01 -- setup/common.sh@32 -- # continue 00:04:49.457 00:39:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.457 00:39:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.457 00:39:01 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:49.457 00:39:01 -- setup/common.sh@32 -- # continue 00:04:49.457 00:39:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.457 00:39:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.457 00:39:01 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:49.457 00:39:01 -- setup/common.sh@32 -- # continue 00:04:49.457 00:39:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.457 00:39:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.457 00:39:01 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:49.457 00:39:01 -- setup/common.sh@32 -- # continue 00:04:49.457 00:39:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.457 00:39:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.457 00:39:01 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:49.457 00:39:01 -- setup/common.sh@32 -- # continue 00:04:49.457 00:39:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.457 00:39:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.457 00:39:01 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:49.457 00:39:01 -- setup/common.sh@32 -- # continue 00:04:49.457 00:39:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.457 00:39:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.457 00:39:01 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:49.457 00:39:01 -- setup/common.sh@32 -- # continue 00:04:49.457 00:39:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.457 00:39:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.457 00:39:01 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:49.457 00:39:01 -- setup/common.sh@32 -- # continue 00:04:49.457 00:39:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.457 00:39:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.457 00:39:01 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:49.457 00:39:01 -- setup/common.sh@32 -- # continue 00:04:49.457 00:39:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.457 00:39:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.457 00:39:01 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:49.457 00:39:01 -- setup/common.sh@32 -- # continue 00:04:49.457 00:39:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.457 00:39:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.457 00:39:01 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:49.457 00:39:01 -- setup/common.sh@32 -- # continue 00:04:49.457 00:39:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.457 00:39:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.457 00:39:01 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:49.457 00:39:01 -- setup/common.sh@32 -- # continue 00:04:49.457 00:39:01 -- setup/common.sh@31 -- # 
IFS=': ' 00:04:49.457 00:39:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.457 00:39:01 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:49.457 00:39:01 -- setup/common.sh@32 -- # continue 00:04:49.457 00:39:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.457 00:39:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.457 00:39:01 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:49.457 00:39:01 -- setup/common.sh@32 -- # continue 00:04:49.457 00:39:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.457 00:39:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.457 00:39:01 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:49.457 00:39:01 -- setup/common.sh@32 -- # continue 00:04:49.457 00:39:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.457 00:39:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.457 00:39:01 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:49.457 00:39:01 -- setup/common.sh@32 -- # continue 00:04:49.457 00:39:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.457 00:39:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.457 00:39:01 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:49.457 00:39:01 -- setup/common.sh@32 -- # continue 00:04:49.457 00:39:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.457 00:39:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.457 00:39:01 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:49.457 00:39:01 -- setup/common.sh@32 -- # continue 00:04:49.457 00:39:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.457 00:39:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.457 00:39:01 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:49.457 00:39:01 -- setup/common.sh@32 -- # continue 00:04:49.457 00:39:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.457 00:39:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.457 00:39:01 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:49.457 00:39:01 -- setup/common.sh@32 -- # continue 00:04:49.457 00:39:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.457 00:39:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.457 00:39:01 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:49.457 00:39:01 -- setup/common.sh@32 -- # continue 00:04:49.457 00:39:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.457 00:39:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.457 00:39:01 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:49.457 00:39:01 -- setup/common.sh@32 -- # continue 00:04:49.457 00:39:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.457 00:39:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.457 00:39:01 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:49.457 00:39:01 -- setup/common.sh@33 -- # echo 1025 00:04:49.457 00:39:01 -- setup/common.sh@33 -- # return 0 00:04:49.457 00:39:01 -- setup/hugepages.sh@110 -- # (( 1025 == nr_hugepages + surp + resv )) 00:04:49.457 00:39:01 -- setup/hugepages.sh@112 -- # get_nodes 00:04:49.457 00:39:01 -- setup/hugepages.sh@27 -- # local node 00:04:49.457 00:39:01 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:49.457 00:39:01 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1025 
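[annotation] What the trace above is doing: get_meminfo walks the chosen meminfo file with IFS=': ', compares each key against the requested field, and echoes its value on the first match (1025 for HugePages_Total here); the caller then checks that the pool size equals nr_hugepages + surplus + reserved for the odd_alloc case. A minimal standalone sketch of that lookup, assuming a hypothetical helper name get_meminfo_field (the traced script's own function is get_meminfo and reads via mapfile):

# Sketch: look up one field from a meminfo-style file, as the traced loop does.
get_meminfo_field() {
    local get=$1 file=${2:-/proc/meminfo} var val _
    while IFS=': ' read -r var val _; do
        [[ $var == "$get" ]] && { echo "$val"; return 0; }
    done < "$file"
    return 1
}
total=$(get_meminfo_field HugePages_Total)   # 1025 in this run
# odd_alloc expects: total == nr_hugepages + surplus + reserved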
00:04:49.457 00:39:01 -- setup/hugepages.sh@32 -- # no_nodes=1 00:04:49.457 00:39:01 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:49.457 00:39:01 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:49.457 00:39:01 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:49.457 00:39:01 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:49.457 00:39:01 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:49.457 00:39:01 -- setup/common.sh@18 -- # local node=0 00:04:49.457 00:39:01 -- setup/common.sh@19 -- # local var val 00:04:49.457 00:39:01 -- setup/common.sh@20 -- # local mem_f mem 00:04:49.457 00:39:01 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:49.457 00:39:01 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:49.457 00:39:01 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:49.457 00:39:01 -- setup/common.sh@28 -- # mapfile -t mem 00:04:49.457 00:39:01 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:49.457 00:39:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.457 00:39:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.457 00:39:01 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239120 kB' 'MemFree: 6524852 kB' 'MemUsed: 5714268 kB' 'SwapCached: 0 kB' 'Active: 497316 kB' 'Inactive: 2753180 kB' 'Active(anon): 128148 kB' 'Inactive(anon): 0 kB' 'Active(file): 369168 kB' 'Inactive(file): 2753180 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 328 kB' 'Writeback: 0 kB' 'FilePages: 3132836 kB' 'Mapped: 50896 kB' 'AnonPages: 119112 kB' 'Shmem: 10488 kB' 'KernelStack: 6808 kB' 'PageTables: 4388 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 88228 kB' 'Slab: 190588 kB' 'SReclaimable: 88228 kB' 'SUnreclaim: 102360 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Surp: 0' 00:04:49.457 00:39:01 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.457 00:39:01 -- setup/common.sh@32 -- # continue 00:04:49.457 00:39:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.457 00:39:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.457 00:39:01 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.457 00:39:01 -- setup/common.sh@32 -- # continue 00:04:49.457 00:39:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.457 00:39:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.457 00:39:01 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.457 00:39:01 -- setup/common.sh@32 -- # continue 00:04:49.457 00:39:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.457 00:39:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.457 00:39:01 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.457 00:39:01 -- setup/common.sh@32 -- # continue 00:04:49.457 00:39:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.457 00:39:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.457 00:39:01 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.457 00:39:01 -- setup/common.sh@32 -- # continue 00:04:49.457 00:39:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.457 00:39:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.457 00:39:01 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.457 
00:39:01 -- setup/common.sh@32 -- # continue 00:04:49.457 00:39:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.457 00:39:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.457 00:39:01 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.457 00:39:01 -- setup/common.sh@32 -- # continue 00:04:49.457 00:39:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.457 00:39:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.457 00:39:01 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.457 00:39:01 -- setup/common.sh@32 -- # continue 00:04:49.457 00:39:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.457 00:39:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.457 00:39:01 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.457 00:39:01 -- setup/common.sh@32 -- # continue 00:04:49.457 00:39:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.457 00:39:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.457 00:39:01 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.457 00:39:01 -- setup/common.sh@32 -- # continue 00:04:49.457 00:39:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.457 00:39:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.457 00:39:01 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.457 00:39:01 -- setup/common.sh@32 -- # continue 00:04:49.457 00:39:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.457 00:39:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.457 00:39:01 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.457 00:39:01 -- setup/common.sh@32 -- # continue 00:04:49.457 00:39:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.457 00:39:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.457 00:39:01 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.457 00:39:01 -- setup/common.sh@32 -- # continue 00:04:49.457 00:39:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.457 00:39:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.457 00:39:01 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.457 00:39:01 -- setup/common.sh@32 -- # continue 00:04:49.457 00:39:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.457 00:39:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.457 00:39:01 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.457 00:39:01 -- setup/common.sh@32 -- # continue 00:04:49.457 00:39:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.457 00:39:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.457 00:39:01 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.457 00:39:01 -- setup/common.sh@32 -- # continue 00:04:49.457 00:39:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.457 00:39:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.457 00:39:01 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.457 00:39:01 -- setup/common.sh@32 -- # continue 00:04:49.457 00:39:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.457 00:39:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.457 00:39:01 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.457 00:39:01 -- setup/common.sh@32 -- # continue 00:04:49.457 00:39:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.457 00:39:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.457 
00:39:01 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.457 00:39:01 -- setup/common.sh@32 -- # continue 00:04:49.457 00:39:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.457 00:39:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.457 00:39:01 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.457 00:39:01 -- setup/common.sh@32 -- # continue 00:04:49.457 00:39:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.457 00:39:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.457 00:39:01 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.457 00:39:01 -- setup/common.sh@32 -- # continue 00:04:49.457 00:39:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.457 00:39:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.457 00:39:01 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.457 00:39:01 -- setup/common.sh@32 -- # continue 00:04:49.457 00:39:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.457 00:39:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.457 00:39:01 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.457 00:39:01 -- setup/common.sh@32 -- # continue 00:04:49.457 00:39:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.458 00:39:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.458 00:39:01 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.458 00:39:01 -- setup/common.sh@32 -- # continue 00:04:49.458 00:39:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.458 00:39:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.458 00:39:01 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.458 00:39:01 -- setup/common.sh@32 -- # continue 00:04:49.458 00:39:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.458 00:39:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.458 00:39:01 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.458 00:39:01 -- setup/common.sh@32 -- # continue 00:04:49.458 00:39:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.458 00:39:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.458 00:39:01 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.458 00:39:01 -- setup/common.sh@32 -- # continue 00:04:49.458 00:39:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.458 00:39:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.458 00:39:01 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.458 00:39:01 -- setup/common.sh@32 -- # continue 00:04:49.458 00:39:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.458 00:39:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.458 00:39:01 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.458 00:39:01 -- setup/common.sh@32 -- # continue 00:04:49.458 00:39:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.458 00:39:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.458 00:39:01 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.458 00:39:01 -- setup/common.sh@32 -- # continue 00:04:49.458 00:39:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.458 00:39:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.458 00:39:01 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.458 00:39:01 -- setup/common.sh@32 -- # continue 00:04:49.458 00:39:01 -- 
setup/common.sh@31 -- # IFS=': ' 00:04:49.458 00:39:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.458 00:39:01 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.458 00:39:01 -- setup/common.sh@32 -- # continue 00:04:49.458 00:39:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.458 00:39:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.458 00:39:01 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.458 00:39:01 -- setup/common.sh@32 -- # continue 00:04:49.458 00:39:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.458 00:39:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.458 00:39:01 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.458 00:39:01 -- setup/common.sh@32 -- # continue 00:04:49.458 00:39:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.458 00:39:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.458 00:39:01 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.458 00:39:01 -- setup/common.sh@32 -- # continue 00:04:49.458 00:39:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.458 00:39:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.458 00:39:01 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.458 00:39:01 -- setup/common.sh@32 -- # continue 00:04:49.458 00:39:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.458 00:39:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.458 00:39:01 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.458 00:39:01 -- setup/common.sh@33 -- # echo 0 00:04:49.458 00:39:01 -- setup/common.sh@33 -- # return 0 00:04:49.458 00:39:01 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:49.458 00:39:01 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:49.458 00:39:01 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:49.458 node0=1025 expecting 1025 00:04:49.458 ************************************ 00:04:49.458 END TEST odd_alloc 00:04:49.458 ************************************ 00:04:49.458 00:39:01 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:49.458 00:39:01 -- setup/hugepages.sh@128 -- # echo 'node0=1025 expecting 1025' 00:04:49.458 00:39:01 -- setup/hugepages.sh@130 -- # [[ 1025 == \1\0\2\5 ]] 00:04:49.458 00:04:49.458 real 0m0.588s 00:04:49.458 user 0m0.277s 00:04:49.458 sys 0m0.317s 00:04:49.458 00:39:01 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:04:49.458 00:39:01 -- common/autotest_common.sh@10 -- # set +x 00:04:49.458 00:39:01 -- setup/hugepages.sh@214 -- # run_test custom_alloc custom_alloc 00:04:49.458 00:39:01 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:49.458 00:39:01 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:49.458 00:39:01 -- common/autotest_common.sh@10 -- # set +x 00:04:49.458 ************************************ 00:04:49.458 START TEST custom_alloc 00:04:49.458 ************************************ 00:04:49.458 00:39:01 -- common/autotest_common.sh@1114 -- # custom_alloc 00:04:49.458 00:39:01 -- setup/hugepages.sh@167 -- # local IFS=, 00:04:49.458 00:39:01 -- setup/hugepages.sh@169 -- # local node 00:04:49.458 00:39:01 -- setup/hugepages.sh@170 -- # nodes_hp=() 00:04:49.458 00:39:01 -- setup/hugepages.sh@170 -- # local nodes_hp 00:04:49.458 00:39:01 -- setup/hugepages.sh@172 -- # local nr_hugepages=0 _nr_hugepages=0 00:04:49.458 00:39:01 -- setup/hugepages.sh@174 -- 
# get_test_nr_hugepages 1048576 00:04:49.458 00:39:01 -- setup/hugepages.sh@49 -- # local size=1048576 00:04:49.458 00:39:01 -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:04:49.458 00:39:01 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:04:49.458 00:39:01 -- setup/hugepages.sh@57 -- # nr_hugepages=512 00:04:49.458 00:39:01 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:04:49.458 00:39:01 -- setup/hugepages.sh@62 -- # user_nodes=() 00:04:49.458 00:39:01 -- setup/hugepages.sh@62 -- # local user_nodes 00:04:49.458 00:39:01 -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:04:49.458 00:39:01 -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:04:49.458 00:39:01 -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:49.458 00:39:01 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:49.458 00:39:01 -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:04:49.458 00:39:01 -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:04:49.458 00:39:01 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:49.458 00:39:01 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:04:49.458 00:39:01 -- setup/hugepages.sh@83 -- # : 0 00:04:49.458 00:39:01 -- setup/hugepages.sh@84 -- # : 0 00:04:49.458 00:39:01 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:49.458 00:39:01 -- setup/hugepages.sh@175 -- # nodes_hp[0]=512 00:04:49.458 00:39:01 -- setup/hugepages.sh@176 -- # (( 1 > 1 )) 00:04:49.458 00:39:01 -- setup/hugepages.sh@181 -- # for node in "${!nodes_hp[@]}" 00:04:49.458 00:39:01 -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}") 00:04:49.458 00:39:01 -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] )) 00:04:49.458 00:39:01 -- setup/hugepages.sh@186 -- # get_test_nr_hugepages_per_node 00:04:49.458 00:39:01 -- setup/hugepages.sh@62 -- # user_nodes=() 00:04:49.458 00:39:01 -- setup/hugepages.sh@62 -- # local user_nodes 00:04:49.458 00:39:01 -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:04:49.458 00:39:01 -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:04:49.458 00:39:01 -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:49.458 00:39:01 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:49.458 00:39:01 -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:04:49.458 00:39:01 -- setup/hugepages.sh@74 -- # (( 1 > 0 )) 00:04:49.458 00:39:01 -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:04:49.458 00:39:01 -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512 00:04:49.458 00:39:01 -- setup/hugepages.sh@78 -- # return 0 00:04:49.458 00:39:01 -- setup/hugepages.sh@187 -- # HUGENODE='nodes_hp[0]=512' 00:04:49.458 00:39:01 -- setup/hugepages.sh@187 -- # setup output 00:04:49.458 00:39:01 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:49.458 00:39:01 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:04:49.716 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:49.716 0000:00:06.0 (1b36 0010): Already using the uio_pci_generic driver 00:04:49.716 0000:00:07.0 (1b36 0010): Already using the uio_pci_generic driver 00:04:49.977 00:39:02 -- setup/hugepages.sh@188 -- # nr_hugepages=512 00:04:49.977 00:39:02 -- setup/hugepages.sh@188 -- # verify_nr_hugepages 00:04:49.977 00:39:02 -- setup/hugepages.sh@89 -- # local node 00:04:49.977 00:39:02 -- setup/hugepages.sh@90 -- # local sorted_t 00:04:49.977 00:39:02 -- setup/hugepages.sh@91 -- # local sorted_s 00:04:49.977 00:39:02 -- setup/hugepages.sh@92 -- # local surp 
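[annotation] The custom_alloc test converts the requested reservation of 1048576 kB (1 GiB) into a page count: with the runner's 2048 kB Hugepagesize that is 512 pages, all pinned to node0 via HUGENODE='nodes_hp[0]=512' before scripts/setup.sh is re-run. A rough sketch of that size-to-pages conversion; the awk lookup of Hugepagesize is an illustration, not the script's own code:

# Sketch: derive the hugepage count the way get_test_nr_hugepages does for 1048576 kB.
size_kb=1048576                                            # requested reservation (1 GiB)
hp_kb=$(awk '/^Hugepagesize:/ {print $2}' /proc/meminfo)   # 2048 on this runner
nr_hugepages=$(( size_kb / hp_kb ))                        # 512 pages
export HUGENODE="nodes_hp[0]=${nr_hugepages}"              # layout passed to scripts/setup.sh in this run
echo "requesting ${nr_hugepages} hugepages of ${hp_kb} kB on node0"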
00:04:49.977 00:39:02 -- setup/hugepages.sh@93 -- # local resv 00:04:49.977 00:39:02 -- setup/hugepages.sh@94 -- # local anon 00:04:49.977 00:39:02 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:49.977 00:39:02 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:49.977 00:39:02 -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:49.977 00:39:02 -- setup/common.sh@18 -- # local node= 00:04:49.977 00:39:02 -- setup/common.sh@19 -- # local var val 00:04:49.977 00:39:02 -- setup/common.sh@20 -- # local mem_f mem 00:04:49.977 00:39:02 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:49.977 00:39:02 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:49.977 00:39:02 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:49.977 00:39:02 -- setup/common.sh@28 -- # mapfile -t mem 00:04:49.977 00:39:02 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:49.977 00:39:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.977 00:39:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.977 00:39:02 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239120 kB' 'MemFree: 7575528 kB' 'MemAvailable: 10505044 kB' 'Buffers: 2684 kB' 'Cached: 3130152 kB' 'SwapCached: 0 kB' 'Active: 497732 kB' 'Inactive: 2753180 kB' 'Active(anon): 128564 kB' 'Inactive(anon): 0 kB' 'Active(file): 369168 kB' 'Inactive(file): 2753180 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'AnonPages: 119692 kB' 'Mapped: 51112 kB' 'Shmem: 10488 kB' 'KReclaimable: 88228 kB' 'Slab: 190552 kB' 'SReclaimable: 88228 kB' 'SUnreclaim: 102324 kB' 'KernelStack: 6840 kB' 'PageTables: 4500 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13983876 kB' 'Committed_AS: 320412 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55544 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 196460 kB' 'DirectMap2M: 6094848 kB' 'DirectMap1G: 8388608 kB' 00:04:49.977 00:39:02 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:49.977 00:39:02 -- setup/common.sh@32 -- # continue 00:04:49.977 00:39:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.977 00:39:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.977 00:39:02 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:49.977 00:39:02 -- setup/common.sh@32 -- # continue 00:04:49.977 00:39:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.977 00:39:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.977 00:39:02 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:49.977 00:39:02 -- setup/common.sh@32 -- # continue 00:04:49.977 00:39:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.977 00:39:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.977 00:39:02 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:49.977 00:39:02 -- setup/common.sh@32 -- # continue 00:04:49.977 00:39:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.977 00:39:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.977 00:39:02 -- setup/common.sh@32 -- # [[ Cached == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:49.977 00:39:02 -- setup/common.sh@32 -- # continue 00:04:49.977 00:39:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.977 00:39:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.977 00:39:02 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:49.977 00:39:02 -- setup/common.sh@32 -- # continue 00:04:49.977 00:39:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.977 00:39:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.977 00:39:02 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:49.977 00:39:02 -- setup/common.sh@32 -- # continue 00:04:49.977 00:39:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.977 00:39:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.977 00:39:02 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:49.977 00:39:02 -- setup/common.sh@32 -- # continue 00:04:49.977 00:39:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.977 00:39:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.977 00:39:02 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:49.977 00:39:02 -- setup/common.sh@32 -- # continue 00:04:49.977 00:39:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.977 00:39:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.977 00:39:02 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:49.977 00:39:02 -- setup/common.sh@32 -- # continue 00:04:49.977 00:39:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.977 00:39:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.977 00:39:02 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:49.977 00:39:02 -- setup/common.sh@32 -- # continue 00:04:49.977 00:39:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.977 00:39:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.977 00:39:02 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:49.977 00:39:02 -- setup/common.sh@32 -- # continue 00:04:49.977 00:39:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.977 00:39:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.977 00:39:02 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:49.977 00:39:02 -- setup/common.sh@32 -- # continue 00:04:49.977 00:39:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.977 00:39:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.977 00:39:02 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:49.977 00:39:02 -- setup/common.sh@32 -- # continue 00:04:49.977 00:39:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.977 00:39:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.977 00:39:02 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:49.977 00:39:02 -- setup/common.sh@32 -- # continue 00:04:49.977 00:39:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.977 00:39:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.977 00:39:02 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:49.977 00:39:02 -- setup/common.sh@32 -- # continue 00:04:49.977 00:39:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.977 00:39:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.977 00:39:02 -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:49.977 00:39:02 -- setup/common.sh@32 -- # continue 00:04:49.977 00:39:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.977 00:39:02 -- setup/common.sh@31 -- # read -r var val 
_ 00:04:49.977 00:39:02 -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:49.977 00:39:02 -- setup/common.sh@32 -- # continue 00:04:49.977 00:39:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.977 00:39:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.977 00:39:02 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:49.977 00:39:02 -- setup/common.sh@32 -- # continue 00:04:49.977 00:39:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.977 00:39:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.977 00:39:02 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:49.977 00:39:02 -- setup/common.sh@32 -- # continue 00:04:49.977 00:39:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.977 00:39:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.977 00:39:02 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:49.977 00:39:02 -- setup/common.sh@32 -- # continue 00:04:49.977 00:39:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.977 00:39:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.977 00:39:02 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:49.977 00:39:02 -- setup/common.sh@32 -- # continue 00:04:49.977 00:39:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.977 00:39:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.977 00:39:02 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:49.977 00:39:02 -- setup/common.sh@32 -- # continue 00:04:49.977 00:39:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.977 00:39:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.977 00:39:02 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:49.977 00:39:02 -- setup/common.sh@32 -- # continue 00:04:49.978 00:39:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.978 00:39:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.978 00:39:02 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:49.978 00:39:02 -- setup/common.sh@32 -- # continue 00:04:49.978 00:39:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.978 00:39:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.978 00:39:02 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:49.978 00:39:02 -- setup/common.sh@32 -- # continue 00:04:49.978 00:39:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.978 00:39:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.978 00:39:02 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:49.978 00:39:02 -- setup/common.sh@32 -- # continue 00:04:49.978 00:39:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.978 00:39:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.978 00:39:02 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:49.978 00:39:02 -- setup/common.sh@32 -- # continue 00:04:49.978 00:39:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.978 00:39:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.978 00:39:02 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:49.978 00:39:02 -- setup/common.sh@32 -- # continue 00:04:49.978 00:39:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.978 00:39:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.978 00:39:02 -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:49.978 00:39:02 -- setup/common.sh@32 -- # continue 00:04:49.978 00:39:02 -- setup/common.sh@31 -- # IFS=': ' 
00:04:49.978 00:39:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.978 00:39:02 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:49.978 00:39:02 -- setup/common.sh@32 -- # continue 00:04:49.978 00:39:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.978 00:39:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.978 00:39:02 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:49.978 00:39:02 -- setup/common.sh@32 -- # continue 00:04:49.978 00:39:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.978 00:39:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.978 00:39:02 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:49.978 00:39:02 -- setup/common.sh@32 -- # continue 00:04:49.978 00:39:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.978 00:39:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.978 00:39:02 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:49.978 00:39:02 -- setup/common.sh@32 -- # continue 00:04:49.978 00:39:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.978 00:39:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.978 00:39:02 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:49.978 00:39:02 -- setup/common.sh@32 -- # continue 00:04:49.978 00:39:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.978 00:39:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.978 00:39:02 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:49.978 00:39:02 -- setup/common.sh@32 -- # continue 00:04:49.978 00:39:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.978 00:39:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.978 00:39:02 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:49.978 00:39:02 -- setup/common.sh@32 -- # continue 00:04:49.978 00:39:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.978 00:39:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.978 00:39:02 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:49.978 00:39:02 -- setup/common.sh@32 -- # continue 00:04:49.978 00:39:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.978 00:39:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.978 00:39:02 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:49.978 00:39:02 -- setup/common.sh@32 -- # continue 00:04:49.978 00:39:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.978 00:39:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.978 00:39:02 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:49.978 00:39:02 -- setup/common.sh@32 -- # continue 00:04:49.978 00:39:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.978 00:39:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.978 00:39:02 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:49.978 00:39:02 -- setup/common.sh@33 -- # echo 0 00:04:49.978 00:39:02 -- setup/common.sh@33 -- # return 0 00:04:49.978 00:39:02 -- setup/hugepages.sh@97 -- # anon=0 00:04:49.978 00:39:02 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:49.978 00:39:02 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:49.978 00:39:02 -- setup/common.sh@18 -- # local node= 00:04:49.978 00:39:02 -- setup/common.sh@19 -- # local var val 00:04:49.978 00:39:02 -- setup/common.sh@20 -- # local mem_f mem 00:04:49.978 00:39:02 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 
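[annotation] verify_nr_hugepages first confirms transparent hugepages are not set to [never], then records AnonHugePages (0 kB in this run, stored as anon=0) before moving on to the surplus and reserved counters. A small standalone sketch of those two probes; the paths are standard Linux sysfs/procfs locations, and the anon=0 fallback in the else branch is an assumption about the script's behaviour, not copied from setup/hugepages.sh:

# Sketch: the THP check and AnonHugePages probe traced above.
thp=$(cat /sys/kernel/mm/transparent_hugepage/enabled)         # e.g. "always [madvise] never"
if [[ $thp != *"[never]"* ]]; then
    anon=$(awk '/^AnonHugePages:/ {print $2}' /proc/meminfo)   # kB; 0 in this run
else
    anon=0                                                     # assumed fallback
fi
echo "anon=${anon}"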
00:04:49.978 00:39:02 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:49.978 00:39:02 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:49.978 00:39:02 -- setup/common.sh@28 -- # mapfile -t mem 00:04:49.978 00:39:02 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:49.978 00:39:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.978 00:39:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.978 00:39:02 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239120 kB' 'MemFree: 7575968 kB' 'MemAvailable: 10505484 kB' 'Buffers: 2684 kB' 'Cached: 3130152 kB' 'SwapCached: 0 kB' 'Active: 497764 kB' 'Inactive: 2753180 kB' 'Active(anon): 128596 kB' 'Inactive(anon): 0 kB' 'Active(file): 369168 kB' 'Inactive(file): 2753180 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'AnonPages: 119672 kB' 'Mapped: 51112 kB' 'Shmem: 10488 kB' 'KReclaimable: 88228 kB' 'Slab: 190560 kB' 'SReclaimable: 88228 kB' 'SUnreclaim: 102332 kB' 'KernelStack: 6872 kB' 'PageTables: 4604 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13983876 kB' 'Committed_AS: 320412 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55528 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 196460 kB' 'DirectMap2M: 6094848 kB' 'DirectMap1G: 8388608 kB' 00:04:49.978 00:39:02 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.978 00:39:02 -- setup/common.sh@32 -- # continue 00:04:49.978 00:39:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.978 00:39:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.978 00:39:02 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.978 00:39:02 -- setup/common.sh@32 -- # continue 00:04:49.978 00:39:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.978 00:39:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.978 00:39:02 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.978 00:39:02 -- setup/common.sh@32 -- # continue 00:04:49.978 00:39:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.978 00:39:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.978 00:39:02 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.978 00:39:02 -- setup/common.sh@32 -- # continue 00:04:49.978 00:39:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.978 00:39:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.978 00:39:02 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.978 00:39:02 -- setup/common.sh@32 -- # continue 00:04:49.978 00:39:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.978 00:39:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.978 00:39:02 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.978 00:39:02 -- setup/common.sh@32 -- # continue 00:04:49.978 00:39:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.978 00:39:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.978 00:39:02 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.978 00:39:02 -- 
setup/common.sh@32 -- # continue 00:04:49.978 00:39:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.978 00:39:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.978 00:39:02 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.978 00:39:02 -- setup/common.sh@32 -- # continue 00:04:49.978 00:39:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.978 00:39:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.978 00:39:02 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.978 00:39:02 -- setup/common.sh@32 -- # continue 00:04:49.978 00:39:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.978 00:39:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.978 00:39:02 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.978 00:39:02 -- setup/common.sh@32 -- # continue 00:04:49.978 00:39:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.978 00:39:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.978 00:39:02 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.978 00:39:02 -- setup/common.sh@32 -- # continue 00:04:49.978 00:39:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.978 00:39:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.978 00:39:02 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.978 00:39:02 -- setup/common.sh@32 -- # continue 00:04:49.978 00:39:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.978 00:39:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.978 00:39:02 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.978 00:39:02 -- setup/common.sh@32 -- # continue 00:04:49.978 00:39:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.978 00:39:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.978 00:39:02 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.978 00:39:02 -- setup/common.sh@32 -- # continue 00:04:49.978 00:39:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.978 00:39:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.978 00:39:02 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.978 00:39:02 -- setup/common.sh@32 -- # continue 00:04:49.978 00:39:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.978 00:39:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.979 00:39:02 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.979 00:39:02 -- setup/common.sh@32 -- # continue 00:04:49.979 00:39:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.979 00:39:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.979 00:39:02 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.979 00:39:02 -- setup/common.sh@32 -- # continue 00:04:49.979 00:39:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.979 00:39:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.979 00:39:02 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.979 00:39:02 -- setup/common.sh@32 -- # continue 00:04:49.979 00:39:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.979 00:39:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.979 00:39:02 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.979 00:39:02 -- setup/common.sh@32 -- # continue 00:04:49.979 00:39:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.979 00:39:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.979 00:39:02 -- 
setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.979 00:39:02 -- setup/common.sh@32 -- # continue 00:04:49.979 00:39:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.979 00:39:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.979 00:39:02 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.979 00:39:02 -- setup/common.sh@32 -- # continue 00:04:49.979 00:39:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.979 00:39:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.979 00:39:02 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.979 00:39:02 -- setup/common.sh@32 -- # continue 00:04:49.979 00:39:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.979 00:39:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.979 00:39:02 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.979 00:39:02 -- setup/common.sh@32 -- # continue 00:04:49.979 00:39:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.979 00:39:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.979 00:39:02 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.979 00:39:02 -- setup/common.sh@32 -- # continue 00:04:49.979 00:39:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.979 00:39:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.979 00:39:02 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.979 00:39:02 -- setup/common.sh@32 -- # continue 00:04:49.979 00:39:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.979 00:39:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.979 00:39:02 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.979 00:39:02 -- setup/common.sh@32 -- # continue 00:04:49.979 00:39:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.979 00:39:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.979 00:39:02 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.979 00:39:02 -- setup/common.sh@32 -- # continue 00:04:49.979 00:39:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.979 00:39:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.979 00:39:02 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.979 00:39:02 -- setup/common.sh@32 -- # continue 00:04:49.979 00:39:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.979 00:39:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.979 00:39:02 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.979 00:39:02 -- setup/common.sh@32 -- # continue 00:04:49.979 00:39:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.979 00:39:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.979 00:39:02 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.979 00:39:02 -- setup/common.sh@32 -- # continue 00:04:49.979 00:39:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.979 00:39:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.979 00:39:02 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.979 00:39:02 -- setup/common.sh@32 -- # continue 00:04:49.979 00:39:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.979 00:39:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.979 00:39:02 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.979 00:39:02 -- setup/common.sh@32 -- # continue 00:04:49.979 00:39:02 -- setup/common.sh@31 -- # IFS=': ' 
00:04:49.979 00:39:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.979 00:39:02 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.979 00:39:02 -- setup/common.sh@32 -- # continue 00:04:49.979 00:39:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.979 00:39:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.979 00:39:02 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.979 00:39:02 -- setup/common.sh@32 -- # continue 00:04:49.979 00:39:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.979 00:39:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.979 00:39:02 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.979 00:39:02 -- setup/common.sh@32 -- # continue 00:04:49.979 00:39:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.979 00:39:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.979 00:39:02 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.979 00:39:02 -- setup/common.sh@32 -- # continue 00:04:49.979 00:39:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.979 00:39:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.979 00:39:02 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.979 00:39:02 -- setup/common.sh@32 -- # continue 00:04:49.979 00:39:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.979 00:39:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.979 00:39:02 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.979 00:39:02 -- setup/common.sh@32 -- # continue 00:04:49.979 00:39:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.979 00:39:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.979 00:39:02 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.979 00:39:02 -- setup/common.sh@32 -- # continue 00:04:49.979 00:39:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.979 00:39:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.979 00:39:02 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.979 00:39:02 -- setup/common.sh@32 -- # continue 00:04:49.979 00:39:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.979 00:39:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.979 00:39:02 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.979 00:39:02 -- setup/common.sh@32 -- # continue 00:04:49.979 00:39:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.979 00:39:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.979 00:39:02 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.979 00:39:02 -- setup/common.sh@32 -- # continue 00:04:49.979 00:39:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.979 00:39:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.979 00:39:02 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.979 00:39:02 -- setup/common.sh@32 -- # continue 00:04:49.979 00:39:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.979 00:39:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.979 00:39:02 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.979 00:39:02 -- setup/common.sh@32 -- # continue 00:04:49.979 00:39:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.979 00:39:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.979 00:39:02 -- setup/common.sh@32 -- # [[ FilePmdMapped == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.979 00:39:02 -- setup/common.sh@32 -- # continue 00:04:49.979 00:39:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.979 00:39:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.979 00:39:02 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.979 00:39:02 -- setup/common.sh@32 -- # continue 00:04:49.979 00:39:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.979 00:39:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.979 00:39:02 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.979 00:39:02 -- setup/common.sh@32 -- # continue 00:04:49.979 00:39:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.979 00:39:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.979 00:39:02 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.979 00:39:02 -- setup/common.sh@32 -- # continue 00:04:49.979 00:39:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.979 00:39:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.979 00:39:02 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.979 00:39:02 -- setup/common.sh@32 -- # continue 00:04:49.979 00:39:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.979 00:39:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.979 00:39:02 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.979 00:39:02 -- setup/common.sh@32 -- # continue 00:04:49.979 00:39:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.979 00:39:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.979 00:39:02 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.979 00:39:02 -- setup/common.sh@32 -- # continue 00:04:49.979 00:39:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.979 00:39:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.979 00:39:02 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.979 00:39:02 -- setup/common.sh@33 -- # echo 0 00:04:49.979 00:39:02 -- setup/common.sh@33 -- # return 0 00:04:49.979 00:39:02 -- setup/hugepages.sh@99 -- # surp=0 00:04:49.979 00:39:02 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:49.979 00:39:02 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:49.979 00:39:02 -- setup/common.sh@18 -- # local node= 00:04:49.979 00:39:02 -- setup/common.sh@19 -- # local var val 00:04:49.979 00:39:02 -- setup/common.sh@20 -- # local mem_f mem 00:04:49.979 00:39:02 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:49.979 00:39:02 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:49.979 00:39:02 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:49.979 00:39:02 -- setup/common.sh@28 -- # mapfile -t mem 00:04:49.979 00:39:02 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:49.979 00:39:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.979 00:39:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.980 00:39:02 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239120 kB' 'MemFree: 7576212 kB' 'MemAvailable: 10505728 kB' 'Buffers: 2684 kB' 'Cached: 3130152 kB' 'SwapCached: 0 kB' 'Active: 497324 kB' 'Inactive: 2753180 kB' 'Active(anon): 128156 kB' 'Inactive(anon): 0 kB' 'Active(file): 369168 kB' 'Inactive(file): 2753180 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'AnonPages: 119236 kB' 'Mapped: 
50948 kB' 'Shmem: 10488 kB' 'KReclaimable: 88228 kB' 'Slab: 190568 kB' 'SReclaimable: 88228 kB' 'SUnreclaim: 102340 kB' 'KernelStack: 6800 kB' 'PageTables: 4288 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13983876 kB' 'Committed_AS: 320412 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55512 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 196460 kB' 'DirectMap2M: 6094848 kB' 'DirectMap1G: 8388608 kB' 00:04:49.980 00:39:02 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:49.980 00:39:02 -- setup/common.sh@32 -- # continue 00:04:49.980 00:39:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.980 00:39:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.980 00:39:02 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:49.980 00:39:02 -- setup/common.sh@32 -- # continue 00:04:49.980 00:39:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.980 00:39:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.980 00:39:02 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:49.980 00:39:02 -- setup/common.sh@32 -- # continue 00:04:49.980 00:39:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.980 00:39:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.980 00:39:02 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:49.980 00:39:02 -- setup/common.sh@32 -- # continue 00:04:49.980 00:39:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.980 00:39:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.980 00:39:02 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:49.980 00:39:02 -- setup/common.sh@32 -- # continue 00:04:49.980 00:39:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.980 00:39:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.980 00:39:02 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:49.980 00:39:02 -- setup/common.sh@32 -- # continue 00:04:49.980 00:39:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.980 00:39:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.980 00:39:02 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:49.980 00:39:02 -- setup/common.sh@32 -- # continue 00:04:49.980 00:39:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.980 00:39:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.980 00:39:02 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:49.980 00:39:02 -- setup/common.sh@32 -- # continue 00:04:49.980 00:39:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.980 00:39:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.980 00:39:02 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:49.980 00:39:02 -- setup/common.sh@32 -- # continue 00:04:49.980 00:39:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.980 00:39:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.980 00:39:02 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:49.980 00:39:02 -- setup/common.sh@32 -- # continue 00:04:49.980 00:39:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.980 00:39:02 -- 
setup/common.sh@31 -- # read -r var val _ 00:04:49.980 00:39:02 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:49.980 00:39:02 -- setup/common.sh@32 -- # continue 00:04:49.980 00:39:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.980 00:39:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.980 00:39:02 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:49.980 00:39:02 -- setup/common.sh@32 -- # continue 00:04:49.980 00:39:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.980 00:39:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.980 00:39:02 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:49.980 00:39:02 -- setup/common.sh@32 -- # continue 00:04:49.980 00:39:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.980 00:39:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.980 00:39:02 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:49.980 00:39:02 -- setup/common.sh@32 -- # continue 00:04:49.980 00:39:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.980 00:39:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.980 00:39:02 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:49.980 00:39:02 -- setup/common.sh@32 -- # continue 00:04:49.980 00:39:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.980 00:39:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.980 00:39:02 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:49.980 00:39:02 -- setup/common.sh@32 -- # continue 00:04:49.980 00:39:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.980 00:39:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.980 00:39:02 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:49.980 00:39:02 -- setup/common.sh@32 -- # continue 00:04:49.980 00:39:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.980 00:39:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.980 00:39:02 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:49.980 00:39:02 -- setup/common.sh@32 -- # continue 00:04:49.980 00:39:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.980 00:39:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.980 00:39:02 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:49.980 00:39:02 -- setup/common.sh@32 -- # continue 00:04:49.980 00:39:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.980 00:39:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.980 00:39:02 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:49.980 00:39:02 -- setup/common.sh@32 -- # continue 00:04:49.980 00:39:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.980 00:39:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.980 00:39:02 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:49.980 00:39:02 -- setup/common.sh@32 -- # continue 00:04:49.980 00:39:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.980 00:39:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.980 00:39:02 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:49.980 00:39:02 -- setup/common.sh@32 -- # continue 00:04:49.980 00:39:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.980 00:39:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.980 00:39:02 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:49.980 00:39:02 -- setup/common.sh@32 -- # continue 
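[annotation] The same meminfo walk is repeated for HugePages_Surp and HugePages_Rsvd: surplus pages are those allocated beyond nr_hugepages under overcommit, reserved pages are promised to mappings but not yet faulted in, and with both at 0 in this run HugePages_Total alone has to match the requested 512. A compact way to pull all three counters in one pass, shown only as an illustration (the traced script deliberately re-walks the file per field):

# Sketch: grab the hugepage counters in one awk pass instead of one walk per field.
read -r total surp rsvd < <(awk '
    /^HugePages_Total:/ {t=$2} /^HugePages_Surp:/ {s=$2} /^HugePages_Rsvd:/ {r=$2}
    END {print t, s, r}' /proc/meminfo)
echo "HugePages_Total=${total} surplus=${surp} reserved=${rsvd}"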
00:04:49.980 00:39:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.980 00:39:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.980 00:39:02 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:49.980 00:39:02 -- setup/common.sh@32 -- # continue 00:04:49.980 00:39:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.980 00:39:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.980 00:39:02 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:49.980 00:39:02 -- setup/common.sh@32 -- # continue 00:04:49.980 00:39:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.980 00:39:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.980 00:39:02 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:49.980 00:39:02 -- setup/common.sh@32 -- # continue 00:04:49.980 00:39:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.980 00:39:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.980 00:39:02 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:49.980 00:39:02 -- setup/common.sh@32 -- # continue 00:04:49.980 00:39:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.980 00:39:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.980 00:39:02 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:49.980 00:39:02 -- setup/common.sh@32 -- # continue 00:04:49.980 00:39:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.980 00:39:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.980 00:39:02 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:49.980 00:39:02 -- setup/common.sh@32 -- # continue 00:04:49.980 00:39:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.980 00:39:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.980 00:39:02 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:49.980 00:39:02 -- setup/common.sh@32 -- # continue 00:04:49.980 00:39:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.980 00:39:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.980 00:39:02 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:49.980 00:39:02 -- setup/common.sh@32 -- # continue 00:04:49.980 00:39:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.980 00:39:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.980 00:39:02 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:49.980 00:39:02 -- setup/common.sh@32 -- # continue 00:04:49.980 00:39:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.980 00:39:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.980 00:39:02 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:49.980 00:39:02 -- setup/common.sh@32 -- # continue 00:04:49.980 00:39:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.980 00:39:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.980 00:39:02 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:49.980 00:39:02 -- setup/common.sh@32 -- # continue 00:04:49.980 00:39:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.980 00:39:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.980 00:39:02 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:49.980 00:39:02 -- setup/common.sh@32 -- # continue 00:04:49.980 00:39:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.980 00:39:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.980 00:39:02 -- setup/common.sh@32 -- # [[ 
VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:49.980 00:39:02 -- setup/common.sh@32 -- # continue 00:04:49.980 00:39:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.980 00:39:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.980 00:39:02 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:49.980 00:39:02 -- setup/common.sh@32 -- # continue 00:04:49.980 00:39:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.980 00:39:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.981 00:39:02 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:49.981 00:39:02 -- setup/common.sh@32 -- # continue 00:04:49.981 00:39:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.981 00:39:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.981 00:39:02 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:49.981 00:39:02 -- setup/common.sh@32 -- # continue 00:04:49.981 00:39:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.981 00:39:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.981 00:39:02 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:49.981 00:39:02 -- setup/common.sh@32 -- # continue 00:04:49.981 00:39:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.981 00:39:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.981 00:39:02 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:49.981 00:39:02 -- setup/common.sh@32 -- # continue 00:04:49.981 00:39:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.981 00:39:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.981 00:39:02 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:49.981 00:39:02 -- setup/common.sh@32 -- # continue 00:04:49.981 00:39:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.981 00:39:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.981 00:39:02 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:49.981 00:39:02 -- setup/common.sh@32 -- # continue 00:04:49.981 00:39:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.981 00:39:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.981 00:39:02 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:49.981 00:39:02 -- setup/common.sh@32 -- # continue 00:04:49.981 00:39:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.981 00:39:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.981 00:39:02 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:49.981 00:39:02 -- setup/common.sh@32 -- # continue 00:04:49.981 00:39:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.981 00:39:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.981 00:39:02 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:49.981 00:39:02 -- setup/common.sh@32 -- # continue 00:04:49.981 00:39:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.981 00:39:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.981 00:39:02 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:49.981 00:39:02 -- setup/common.sh@32 -- # continue 00:04:49.981 00:39:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.981 00:39:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.981 00:39:02 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:49.981 00:39:02 -- setup/common.sh@32 -- # continue 00:04:49.981 00:39:02 -- setup/common.sh@31 -- # IFS=': ' 
00:04:49.981 00:39:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.981 00:39:02 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:49.981 00:39:02 -- setup/common.sh@32 -- # continue 00:04:49.981 00:39:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.981 00:39:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.981 00:39:02 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:49.981 00:39:02 -- setup/common.sh@32 -- # continue 00:04:49.981 00:39:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.981 00:39:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.981 00:39:02 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:49.981 00:39:02 -- setup/common.sh@33 -- # echo 0 00:04:49.981 00:39:02 -- setup/common.sh@33 -- # return 0 00:04:49.981 00:39:02 -- setup/hugepages.sh@100 -- # resv=0 00:04:49.981 nr_hugepages=512 00:04:49.981 00:39:02 -- setup/hugepages.sh@102 -- # echo nr_hugepages=512 00:04:49.981 resv_hugepages=0 00:04:49.981 00:39:02 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:49.981 surplus_hugepages=0 00:04:49.981 00:39:02 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:49.981 anon_hugepages=0 00:04:49.981 00:39:02 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:49.981 00:39:02 -- setup/hugepages.sh@107 -- # (( 512 == nr_hugepages + surp + resv )) 00:04:49.981 00:39:02 -- setup/hugepages.sh@109 -- # (( 512 == nr_hugepages )) 00:04:49.981 00:39:02 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:49.981 00:39:02 -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:49.981 00:39:02 -- setup/common.sh@18 -- # local node= 00:04:49.981 00:39:02 -- setup/common.sh@19 -- # local var val 00:04:49.981 00:39:02 -- setup/common.sh@20 -- # local mem_f mem 00:04:49.981 00:39:02 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:49.981 00:39:02 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:49.981 00:39:02 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:49.981 00:39:02 -- setup/common.sh@28 -- # mapfile -t mem 00:04:49.981 00:39:02 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:49.981 00:39:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.981 00:39:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.981 00:39:02 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239120 kB' 'MemFree: 7576212 kB' 'MemAvailable: 10505728 kB' 'Buffers: 2684 kB' 'Cached: 3130152 kB' 'SwapCached: 0 kB' 'Active: 497556 kB' 'Inactive: 2753180 kB' 'Active(anon): 128388 kB' 'Inactive(anon): 0 kB' 'Active(file): 369168 kB' 'Inactive(file): 2753180 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'AnonPages: 119472 kB' 'Mapped: 50948 kB' 'Shmem: 10488 kB' 'KReclaimable: 88228 kB' 'Slab: 190572 kB' 'SReclaimable: 88228 kB' 'SUnreclaim: 102344 kB' 'KernelStack: 6832 kB' 'PageTables: 4384 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13983876 kB' 'Committed_AS: 320044 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55544 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 
'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 196460 kB' 'DirectMap2M: 6094848 kB' 'DirectMap1G: 8388608 kB' 00:04:49.981 00:39:02 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:49.981 00:39:02 -- setup/common.sh@32 -- # continue 00:04:49.981 00:39:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.981 00:39:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.981 00:39:02 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:49.981 00:39:02 -- setup/common.sh@32 -- # continue 00:04:49.981 00:39:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.981 00:39:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.981 00:39:02 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:49.981 00:39:02 -- setup/common.sh@32 -- # continue 00:04:49.981 00:39:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.981 00:39:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.981 00:39:02 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:49.981 00:39:02 -- setup/common.sh@32 -- # continue 00:04:49.981 00:39:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.981 00:39:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.981 00:39:02 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:49.981 00:39:02 -- setup/common.sh@32 -- # continue 00:04:49.981 00:39:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.981 00:39:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.981 00:39:02 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:49.981 00:39:02 -- setup/common.sh@32 -- # continue 00:04:49.981 00:39:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.981 00:39:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.981 00:39:02 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:49.981 00:39:02 -- setup/common.sh@32 -- # continue 00:04:49.981 00:39:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.981 00:39:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.981 00:39:02 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:49.981 00:39:02 -- setup/common.sh@32 -- # continue 00:04:49.981 00:39:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.981 00:39:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.981 00:39:02 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:49.981 00:39:02 -- setup/common.sh@32 -- # continue 00:04:49.981 00:39:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.981 00:39:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.981 00:39:02 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:49.981 00:39:02 -- setup/common.sh@32 -- # continue 00:04:49.981 00:39:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.981 00:39:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.981 00:39:02 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:49.981 00:39:02 -- setup/common.sh@32 -- # continue 00:04:49.981 00:39:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.981 00:39:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.981 00:39:02 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:49.981 00:39:02 -- setup/common.sh@32 -- # continue 00:04:49.981 00:39:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.981 00:39:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.982 00:39:02 -- 
setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:49.982 00:39:02 -- setup/common.sh@32 -- # continue 00:04:49.982 00:39:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.982 00:39:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.982 00:39:02 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:49.982 00:39:02 -- setup/common.sh@32 -- # continue 00:04:49.982 00:39:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.982 00:39:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.982 00:39:02 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:49.982 00:39:02 -- setup/common.sh@32 -- # continue 00:04:49.982 00:39:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.982 00:39:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.982 00:39:02 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:49.982 00:39:02 -- setup/common.sh@32 -- # continue 00:04:49.982 00:39:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.982 00:39:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.982 00:39:02 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:49.982 00:39:02 -- setup/common.sh@32 -- # continue 00:04:49.982 00:39:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.982 00:39:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.982 00:39:02 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:49.982 00:39:02 -- setup/common.sh@32 -- # continue 00:04:49.982 00:39:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.982 00:39:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.982 00:39:02 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:49.982 00:39:02 -- setup/common.sh@32 -- # continue 00:04:49.982 00:39:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.982 00:39:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.982 00:39:02 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:49.982 00:39:02 -- setup/common.sh@32 -- # continue 00:04:49.982 00:39:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.982 00:39:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.982 00:39:02 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:49.982 00:39:02 -- setup/common.sh@32 -- # continue 00:04:49.982 00:39:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.982 00:39:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.982 00:39:02 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:49.982 00:39:02 -- setup/common.sh@32 -- # continue 00:04:49.982 00:39:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.982 00:39:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.982 00:39:02 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:49.982 00:39:02 -- setup/common.sh@32 -- # continue 00:04:49.982 00:39:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.982 00:39:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.982 00:39:02 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:49.982 00:39:02 -- setup/common.sh@32 -- # continue 00:04:49.982 00:39:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.982 00:39:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.982 00:39:02 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:49.982 00:39:02 -- setup/common.sh@32 -- # continue 00:04:49.982 00:39:02 -- setup/common.sh@31 -- # IFS=': ' 
00:04:49.982 00:39:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.982 00:39:02 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:49.982 00:39:02 -- setup/common.sh@32 -- # continue 00:04:49.982 00:39:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.982 00:39:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.982 00:39:02 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:49.982 00:39:02 -- setup/common.sh@32 -- # continue 00:04:49.982 00:39:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.982 00:39:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.982 00:39:02 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:49.982 00:39:02 -- setup/common.sh@32 -- # continue 00:04:49.982 00:39:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.982 00:39:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.982 00:39:02 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:49.982 00:39:02 -- setup/common.sh@32 -- # continue 00:04:49.982 00:39:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.982 00:39:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.982 00:39:02 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:49.982 00:39:02 -- setup/common.sh@32 -- # continue 00:04:49.982 00:39:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.982 00:39:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.982 00:39:02 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:49.982 00:39:02 -- setup/common.sh@32 -- # continue 00:04:49.982 00:39:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.982 00:39:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.982 00:39:02 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:49.982 00:39:02 -- setup/common.sh@32 -- # continue 00:04:49.982 00:39:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.982 00:39:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.982 00:39:02 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:49.982 00:39:02 -- setup/common.sh@32 -- # continue 00:04:49.982 00:39:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.982 00:39:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.982 00:39:02 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:49.982 00:39:02 -- setup/common.sh@32 -- # continue 00:04:49.982 00:39:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.982 00:39:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.982 00:39:02 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:49.982 00:39:02 -- setup/common.sh@32 -- # continue 00:04:49.982 00:39:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.982 00:39:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.982 00:39:02 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:49.982 00:39:02 -- setup/common.sh@32 -- # continue 00:04:49.982 00:39:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.982 00:39:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.982 00:39:02 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:49.982 00:39:02 -- setup/common.sh@32 -- # continue 00:04:49.982 00:39:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.982 00:39:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.982 00:39:02 -- setup/common.sh@32 -- # [[ VmallocChunk == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:49.982 00:39:02 -- setup/common.sh@32 -- # continue 00:04:49.982 00:39:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.982 00:39:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.982 00:39:02 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:49.982 00:39:02 -- setup/common.sh@32 -- # continue 00:04:49.982 00:39:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.982 00:39:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.982 00:39:02 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:49.982 00:39:02 -- setup/common.sh@32 -- # continue 00:04:49.982 00:39:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.982 00:39:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.982 00:39:02 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:49.982 00:39:02 -- setup/common.sh@32 -- # continue 00:04:49.982 00:39:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.982 00:39:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.982 00:39:02 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:49.982 00:39:02 -- setup/common.sh@32 -- # continue 00:04:49.982 00:39:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.982 00:39:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.982 00:39:02 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:49.982 00:39:02 -- setup/common.sh@32 -- # continue 00:04:49.982 00:39:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.982 00:39:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.982 00:39:02 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:49.982 00:39:02 -- setup/common.sh@32 -- # continue 00:04:49.982 00:39:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.982 00:39:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.982 00:39:02 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:49.982 00:39:02 -- setup/common.sh@32 -- # continue 00:04:49.982 00:39:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.982 00:39:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.982 00:39:02 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:49.982 00:39:02 -- setup/common.sh@32 -- # continue 00:04:49.982 00:39:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.982 00:39:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.982 00:39:02 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:49.982 00:39:02 -- setup/common.sh@32 -- # continue 00:04:49.982 00:39:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.982 00:39:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.982 00:39:02 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:49.982 00:39:02 -- setup/common.sh@32 -- # continue 00:04:49.982 00:39:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.982 00:39:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.982 00:39:02 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:49.982 00:39:02 -- setup/common.sh@33 -- # echo 512 00:04:49.982 00:39:02 -- setup/common.sh@33 -- # return 0 00:04:49.982 00:39:02 -- setup/hugepages.sh@110 -- # (( 512 == nr_hugepages + surp + resv )) 00:04:49.982 00:39:02 -- setup/hugepages.sh@112 -- # get_nodes 00:04:49.982 00:39:02 -- setup/hugepages.sh@27 -- # local node 00:04:49.982 00:39:02 -- setup/hugepages.sh@29 -- # 
for node in /sys/devices/system/node/node+([0-9]) 00:04:49.982 00:39:02 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:04:49.982 00:39:02 -- setup/hugepages.sh@32 -- # no_nodes=1 00:04:49.982 00:39:02 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:49.982 00:39:02 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:49.982 00:39:02 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:49.982 00:39:02 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:49.982 00:39:02 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:49.982 00:39:02 -- setup/common.sh@18 -- # local node=0 00:04:49.982 00:39:02 -- setup/common.sh@19 -- # local var val 00:04:49.982 00:39:02 -- setup/common.sh@20 -- # local mem_f mem 00:04:49.983 00:39:02 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:49.983 00:39:02 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:49.983 00:39:02 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:49.983 00:39:02 -- setup/common.sh@28 -- # mapfile -t mem 00:04:49.983 00:39:02 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:49.983 00:39:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.983 00:39:02 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239120 kB' 'MemFree: 7576212 kB' 'MemUsed: 4662908 kB' 'SwapCached: 0 kB' 'Active: 496968 kB' 'Inactive: 2753180 kB' 'Active(anon): 127800 kB' 'Inactive(anon): 0 kB' 'Active(file): 369168 kB' 'Inactive(file): 2753180 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'FilePages: 3132836 kB' 'Mapped: 50948 kB' 'AnonPages: 118928 kB' 'Shmem: 10488 kB' 'KernelStack: 6816 kB' 'PageTables: 4336 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 88228 kB' 'Slab: 190564 kB' 'SReclaimable: 88228 kB' 'SUnreclaim: 102336 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:04:49.983 00:39:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.983 00:39:02 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.983 00:39:02 -- setup/common.sh@32 -- # continue 00:04:49.983 00:39:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.983 00:39:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.983 00:39:02 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.983 00:39:02 -- setup/common.sh@32 -- # continue 00:04:49.983 00:39:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.983 00:39:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.983 00:39:02 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.983 00:39:02 -- setup/common.sh@32 -- # continue 00:04:49.983 00:39:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.983 00:39:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.983 00:39:02 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.983 00:39:02 -- setup/common.sh@32 -- # continue 00:04:49.983 00:39:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.983 00:39:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.983 00:39:02 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.983 00:39:02 -- setup/common.sh@32 -- # continue 00:04:49.983 00:39:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.983 00:39:02 -- setup/common.sh@31 -- # 
read -r var val _ 00:04:49.983 00:39:02 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.983 00:39:02 -- setup/common.sh@32 -- # continue 00:04:49.983 00:39:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.983 00:39:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.983 00:39:02 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.983 00:39:02 -- setup/common.sh@32 -- # continue 00:04:49.983 00:39:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.983 00:39:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.983 00:39:02 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.983 00:39:02 -- setup/common.sh@32 -- # continue 00:04:49.983 00:39:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.983 00:39:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.983 00:39:02 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.983 00:39:02 -- setup/common.sh@32 -- # continue 00:04:49.983 00:39:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.983 00:39:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.983 00:39:02 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.983 00:39:02 -- setup/common.sh@32 -- # continue 00:04:49.983 00:39:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.983 00:39:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.983 00:39:02 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.983 00:39:02 -- setup/common.sh@32 -- # continue 00:04:49.983 00:39:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.983 00:39:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.983 00:39:02 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.983 00:39:02 -- setup/common.sh@32 -- # continue 00:04:49.983 00:39:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.983 00:39:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.983 00:39:02 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.983 00:39:02 -- setup/common.sh@32 -- # continue 00:04:49.983 00:39:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.983 00:39:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.983 00:39:02 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.983 00:39:02 -- setup/common.sh@32 -- # continue 00:04:49.983 00:39:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.983 00:39:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.983 00:39:02 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.983 00:39:02 -- setup/common.sh@32 -- # continue 00:04:49.983 00:39:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.983 00:39:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.983 00:39:02 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.983 00:39:02 -- setup/common.sh@32 -- # continue 00:04:49.983 00:39:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.983 00:39:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.983 00:39:02 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.983 00:39:02 -- setup/common.sh@32 -- # continue 00:04:49.983 00:39:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.983 00:39:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.983 00:39:02 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.983 00:39:02 -- setup/common.sh@32 -- # continue 00:04:49.983 
00:39:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.983 00:39:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.983 00:39:02 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.983 00:39:02 -- setup/common.sh@32 -- # continue 00:04:49.983 00:39:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.983 00:39:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.983 00:39:02 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.983 00:39:02 -- setup/common.sh@32 -- # continue 00:04:49.983 00:39:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.983 00:39:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.983 00:39:02 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.983 00:39:02 -- setup/common.sh@32 -- # continue 00:04:49.983 00:39:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.983 00:39:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.983 00:39:02 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.983 00:39:02 -- setup/common.sh@32 -- # continue 00:04:49.983 00:39:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.983 00:39:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.983 00:39:02 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.983 00:39:02 -- setup/common.sh@32 -- # continue 00:04:49.983 00:39:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.983 00:39:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.983 00:39:02 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.983 00:39:02 -- setup/common.sh@32 -- # continue 00:04:49.983 00:39:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.983 00:39:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.983 00:39:02 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.983 00:39:02 -- setup/common.sh@32 -- # continue 00:04:49.983 00:39:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.983 00:39:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.983 00:39:02 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.983 00:39:02 -- setup/common.sh@32 -- # continue 00:04:49.983 00:39:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.983 00:39:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.983 00:39:02 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.983 00:39:02 -- setup/common.sh@32 -- # continue 00:04:49.983 00:39:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.983 00:39:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.983 00:39:02 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.983 00:39:02 -- setup/common.sh@32 -- # continue 00:04:49.983 00:39:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.983 00:39:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.983 00:39:02 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.983 00:39:02 -- setup/common.sh@32 -- # continue 00:04:49.983 00:39:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.983 00:39:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.983 00:39:02 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.983 00:39:02 -- setup/common.sh@32 -- # continue 00:04:49.983 00:39:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.983 00:39:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.983 00:39:02 -- setup/common.sh@32 -- # [[ 
ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.983 00:39:02 -- setup/common.sh@32 -- # continue 00:04:49.983 00:39:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.983 00:39:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.983 00:39:02 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.983 00:39:02 -- setup/common.sh@32 -- # continue 00:04:49.983 00:39:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.983 00:39:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.983 00:39:02 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.983 00:39:02 -- setup/common.sh@32 -- # continue 00:04:49.983 00:39:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.983 00:39:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.983 00:39:02 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.983 00:39:02 -- setup/common.sh@32 -- # continue 00:04:49.983 00:39:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.983 00:39:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.983 00:39:02 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.983 00:39:02 -- setup/common.sh@32 -- # continue 00:04:49.983 00:39:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.983 00:39:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.983 00:39:02 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.983 00:39:02 -- setup/common.sh@32 -- # continue 00:04:49.984 00:39:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.984 00:39:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.984 00:39:02 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.984 00:39:02 -- setup/common.sh@33 -- # echo 0 00:04:49.984 00:39:02 -- setup/common.sh@33 -- # return 0 00:04:49.984 00:39:02 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:49.984 00:39:02 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:49.984 00:39:02 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:49.984 00:39:02 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:49.984 node0=512 expecting 512 00:04:49.984 00:39:02 -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:04:49.984 00:39:02 -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]] 00:04:49.984 00:04:49.984 real 0m0.581s 00:04:49.984 user 0m0.294s 00:04:49.984 sys 0m0.323s 00:04:49.984 00:39:02 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:04:49.984 00:39:02 -- common/autotest_common.sh@10 -- # set +x 00:04:49.984 ************************************ 00:04:49.984 END TEST custom_alloc 00:04:49.984 ************************************ 00:04:49.984 00:39:02 -- setup/hugepages.sh@215 -- # run_test no_shrink_alloc no_shrink_alloc 00:04:49.984 00:39:02 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:49.984 00:39:02 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:49.984 00:39:02 -- common/autotest_common.sh@10 -- # set +x 00:04:49.984 ************************************ 00:04:49.984 START TEST no_shrink_alloc 00:04:49.984 ************************************ 00:04:49.984 00:39:02 -- common/autotest_common.sh@1114 -- # no_shrink_alloc 00:04:49.984 00:39:02 -- setup/hugepages.sh@195 -- # get_test_nr_hugepages 2097152 0 00:04:49.984 00:39:02 -- setup/hugepages.sh@49 -- # local size=2097152 00:04:49.984 00:39:02 -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:04:49.984 00:39:02 -- 
setup/hugepages.sh@51 -- # shift 00:04:49.984 00:39:02 -- setup/hugepages.sh@52 -- # node_ids=('0') 00:04:49.984 00:39:02 -- setup/hugepages.sh@52 -- # local node_ids 00:04:49.984 00:39:02 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:04:49.984 00:39:02 -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:04:49.984 00:39:02 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:04:49.984 00:39:02 -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:04:49.984 00:39:02 -- setup/hugepages.sh@62 -- # local user_nodes 00:04:49.984 00:39:02 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:04:49.984 00:39:02 -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:04:49.984 00:39:02 -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:49.984 00:39:02 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:49.984 00:39:02 -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:04:49.984 00:39:02 -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:04:49.984 00:39:02 -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:04:49.984 00:39:02 -- setup/hugepages.sh@73 -- # return 0 00:04:49.984 00:39:02 -- setup/hugepages.sh@198 -- # setup output 00:04:49.984 00:39:02 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:49.984 00:39:02 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:04:50.554 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:50.554 0000:00:06.0 (1b36 0010): Already using the uio_pci_generic driver 00:04:50.554 0000:00:07.0 (1b36 0010): Already using the uio_pci_generic driver 00:04:50.554 00:39:02 -- setup/hugepages.sh@199 -- # verify_nr_hugepages 00:04:50.554 00:39:02 -- setup/hugepages.sh@89 -- # local node 00:04:50.554 00:39:02 -- setup/hugepages.sh@90 -- # local sorted_t 00:04:50.554 00:39:02 -- setup/hugepages.sh@91 -- # local sorted_s 00:04:50.554 00:39:02 -- setup/hugepages.sh@92 -- # local surp 00:04:50.554 00:39:02 -- setup/hugepages.sh@93 -- # local resv 00:04:50.554 00:39:02 -- setup/hugepages.sh@94 -- # local anon 00:04:50.554 00:39:02 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:50.554 00:39:02 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:50.554 00:39:02 -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:50.554 00:39:02 -- setup/common.sh@18 -- # local node= 00:04:50.554 00:39:02 -- setup/common.sh@19 -- # local var val 00:04:50.554 00:39:02 -- setup/common.sh@20 -- # local mem_f mem 00:04:50.554 00:39:02 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:50.554 00:39:02 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:50.554 00:39:02 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:50.554 00:39:02 -- setup/common.sh@28 -- # mapfile -t mem 00:04:50.554 00:39:02 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:50.554 00:39:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:50.554 00:39:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:50.554 00:39:02 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239120 kB' 'MemFree: 6522336 kB' 'MemAvailable: 9451852 kB' 'Buffers: 2684 kB' 'Cached: 3130152 kB' 'SwapCached: 0 kB' 'Active: 497984 kB' 'Inactive: 2753180 kB' 'Active(anon): 128816 kB' 'Inactive(anon): 0 kB' 'Active(file): 369168 kB' 'Inactive(file): 2753180 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'AnonPages: 119916 kB' 
'Mapped: 51120 kB' 'Shmem: 10488 kB' 'KReclaimable: 88228 kB' 'Slab: 190556 kB' 'SReclaimable: 88228 kB' 'SUnreclaim: 102328 kB' 'KernelStack: 6808 kB' 'PageTables: 4432 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459588 kB' 'Committed_AS: 320612 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55560 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 196460 kB' 'DirectMap2M: 6094848 kB' 'DirectMap1G: 8388608 kB' 00:04:50.554 00:39:02 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:50.554 00:39:02 -- setup/common.sh@32 -- # continue 00:04:50.554 00:39:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:50.554 00:39:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:50.554 00:39:02 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:50.554 00:39:02 -- setup/common.sh@32 -- # continue 00:04:50.554 00:39:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:50.554 00:39:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:50.554 00:39:02 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:50.554 00:39:02 -- setup/common.sh@32 -- # continue 00:04:50.554 00:39:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:50.554 00:39:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:50.554 00:39:02 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:50.554 00:39:02 -- setup/common.sh@32 -- # continue 00:04:50.554 00:39:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:50.554 00:39:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:50.554 00:39:02 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:50.554 00:39:02 -- setup/common.sh@32 -- # continue 00:04:50.554 00:39:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:50.554 00:39:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:50.554 00:39:02 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:50.554 00:39:02 -- setup/common.sh@32 -- # continue 00:04:50.554 00:39:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:50.554 00:39:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:50.554 00:39:02 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:50.554 00:39:02 -- setup/common.sh@32 -- # continue 00:04:50.554 00:39:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:50.554 00:39:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:50.554 00:39:02 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:50.554 00:39:02 -- setup/common.sh@32 -- # continue 00:04:50.554 00:39:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:50.554 00:39:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:50.554 00:39:02 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:50.554 00:39:02 -- setup/common.sh@32 -- # continue 00:04:50.554 00:39:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:50.554 00:39:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:50.554 00:39:02 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:50.554 00:39:02 -- setup/common.sh@32 -- # continue 00:04:50.554 00:39:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:50.554 00:39:02 -- 
setup/common.sh@31 -- # read -r var val _ 00:04:50.554 00:39:02 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:50.554 00:39:02 -- setup/common.sh@32 -- # continue 00:04:50.554 00:39:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:50.554 00:39:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:50.554 00:39:02 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:50.554 00:39:02 -- setup/common.sh@32 -- # continue 00:04:50.554 00:39:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:50.554 00:39:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:50.554 00:39:02 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:50.555 00:39:02 -- setup/common.sh@32 -- # continue 00:04:50.555 00:39:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:50.555 00:39:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:50.555 00:39:02 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:50.555 00:39:02 -- setup/common.sh@32 -- # continue 00:04:50.555 00:39:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:50.555 00:39:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:50.555 00:39:02 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:50.555 00:39:02 -- setup/common.sh@32 -- # continue 00:04:50.555 00:39:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:50.555 00:39:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:50.555 00:39:02 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:50.555 00:39:02 -- setup/common.sh@32 -- # continue 00:04:50.555 00:39:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:50.555 00:39:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:50.555 00:39:02 -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:50.555 00:39:02 -- setup/common.sh@32 -- # continue 00:04:50.555 00:39:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:50.555 00:39:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:50.555 00:39:02 -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:50.555 00:39:02 -- setup/common.sh@32 -- # continue 00:04:50.555 00:39:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:50.555 00:39:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:50.555 00:39:02 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:50.555 00:39:02 -- setup/common.sh@32 -- # continue 00:04:50.555 00:39:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:50.555 00:39:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:50.555 00:39:02 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:50.555 00:39:02 -- setup/common.sh@32 -- # continue 00:04:50.555 00:39:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:50.555 00:39:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:50.555 00:39:02 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:50.555 00:39:02 -- setup/common.sh@32 -- # continue 00:04:50.555 00:39:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:50.555 00:39:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:50.555 00:39:02 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:50.555 00:39:02 -- setup/common.sh@32 -- # continue 00:04:50.555 00:39:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:50.555 00:39:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:50.555 00:39:02 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:50.555 00:39:02 -- setup/common.sh@32 -- # continue 00:04:50.555 00:39:02 -- 
setup/common.sh@31 -- # IFS=': ' 00:04:50.555 00:39:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:50.555 00:39:02 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:50.555 00:39:02 -- setup/common.sh@32 -- # continue 00:04:50.555 00:39:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:50.555 00:39:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:50.555 00:39:02 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:50.555 00:39:02 -- setup/common.sh@32 -- # continue 00:04:50.555 00:39:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:50.555 00:39:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:50.555 00:39:02 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:50.555 00:39:02 -- setup/common.sh@32 -- # continue 00:04:50.555 00:39:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:50.555 00:39:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:50.555 00:39:02 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:50.555 00:39:02 -- setup/common.sh@32 -- # continue 00:04:50.555 00:39:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:50.555 00:39:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:50.555 00:39:02 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:50.555 00:39:02 -- setup/common.sh@32 -- # continue 00:04:50.555 00:39:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:50.555 00:39:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:50.555 00:39:02 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:50.555 00:39:02 -- setup/common.sh@32 -- # continue 00:04:50.555 00:39:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:50.555 00:39:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:50.555 00:39:02 -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:50.555 00:39:02 -- setup/common.sh@32 -- # continue 00:04:50.555 00:39:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:50.555 00:39:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:50.555 00:39:02 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:50.555 00:39:02 -- setup/common.sh@32 -- # continue 00:04:50.555 00:39:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:50.555 00:39:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:50.555 00:39:02 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:50.555 00:39:02 -- setup/common.sh@32 -- # continue 00:04:50.555 00:39:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:50.555 00:39:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:50.555 00:39:02 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:50.555 00:39:02 -- setup/common.sh@32 -- # continue 00:04:50.555 00:39:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:50.555 00:39:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:50.555 00:39:02 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:50.555 00:39:02 -- setup/common.sh@32 -- # continue 00:04:50.555 00:39:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:50.555 00:39:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:50.555 00:39:02 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:50.555 00:39:02 -- setup/common.sh@32 -- # continue 00:04:50.555 00:39:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:50.555 00:39:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:50.555 00:39:02 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 
00:04:50.555 00:39:02 -- setup/common.sh@32 -- # continue 00:04:50.555 00:39:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:50.555 00:39:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:50.555 00:39:02 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:50.555 00:39:02 -- setup/common.sh@32 -- # continue 00:04:50.555 00:39:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:50.555 00:39:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:50.555 00:39:02 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:50.555 00:39:02 -- setup/common.sh@32 -- # continue 00:04:50.555 00:39:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:50.555 00:39:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:50.555 00:39:02 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:50.555 00:39:02 -- setup/common.sh@32 -- # continue 00:04:50.555 00:39:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:50.555 00:39:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:50.555 00:39:02 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:50.555 00:39:02 -- setup/common.sh@32 -- # continue 00:04:50.555 00:39:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:50.555 00:39:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:50.555 00:39:02 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:50.555 00:39:02 -- setup/common.sh@33 -- # echo 0 00:04:50.555 00:39:02 -- setup/common.sh@33 -- # return 0 00:04:50.555 00:39:02 -- setup/hugepages.sh@97 -- # anon=0 00:04:50.555 00:39:02 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:50.555 00:39:02 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:50.555 00:39:02 -- setup/common.sh@18 -- # local node= 00:04:50.555 00:39:02 -- setup/common.sh@19 -- # local var val 00:04:50.555 00:39:02 -- setup/common.sh@20 -- # local mem_f mem 00:04:50.555 00:39:02 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:50.555 00:39:02 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:50.555 00:39:02 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:50.555 00:39:02 -- setup/common.sh@28 -- # mapfile -t mem 00:04:50.555 00:39:02 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:50.555 00:39:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:50.555 00:39:02 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239120 kB' 'MemFree: 6522336 kB' 'MemAvailable: 9451852 kB' 'Buffers: 2684 kB' 'Cached: 3130152 kB' 'SwapCached: 0 kB' 'Active: 497608 kB' 'Inactive: 2753180 kB' 'Active(anon): 128440 kB' 'Inactive(anon): 0 kB' 'Active(file): 369168 kB' 'Inactive(file): 2753180 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'AnonPages: 119544 kB' 'Mapped: 51112 kB' 'Shmem: 10488 kB' 'KReclaimable: 88228 kB' 'Slab: 190560 kB' 'SReclaimable: 88228 kB' 'SUnreclaim: 102332 kB' 'KernelStack: 6776 kB' 'PageTables: 4336 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459588 kB' 'Committed_AS: 320612 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55528 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 
'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 196460 kB' 'DirectMap2M: 6094848 kB' 'DirectMap1G: 8388608 kB' 00:04:50.555 00:39:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:50.555 00:39:02 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:50.555 00:39:02 -- setup/common.sh@32 -- # continue 00:04:50.555 00:39:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:50.555 00:39:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:50.555 00:39:02 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:50.555 00:39:02 -- setup/common.sh@32 -- # continue 00:04:50.555 00:39:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:50.555 00:39:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:50.555 00:39:02 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:50.555 00:39:02 -- setup/common.sh@32 -- # continue 00:04:50.555 00:39:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:50.555 00:39:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:50.555 00:39:02 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:50.555 00:39:02 -- setup/common.sh@32 -- # continue 00:04:50.555 00:39:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:50.555 00:39:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:50.555 00:39:02 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:50.555 00:39:02 -- setup/common.sh@32 -- # continue 00:04:50.555 00:39:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:50.556 00:39:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:50.556 00:39:02 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:50.556 00:39:02 -- setup/common.sh@32 -- # continue 00:04:50.556 00:39:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:50.556 00:39:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:50.556 00:39:02 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:50.556 00:39:02 -- setup/common.sh@32 -- # continue 00:04:50.556 00:39:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:50.556 00:39:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:50.556 00:39:02 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:50.556 00:39:02 -- setup/common.sh@32 -- # continue 00:04:50.556 00:39:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:50.556 00:39:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:50.556 00:39:02 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:50.556 00:39:02 -- setup/common.sh@32 -- # continue 00:04:50.556 00:39:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:50.556 00:39:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:50.556 00:39:02 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:50.556 00:39:02 -- setup/common.sh@32 -- # continue 00:04:50.556 00:39:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:50.556 00:39:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:50.556 00:39:02 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:50.556 00:39:02 -- setup/common.sh@32 -- # continue 00:04:50.556 00:39:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:50.556 00:39:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:50.556 00:39:02 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:50.556 00:39:02 -- setup/common.sh@32 -- # continue 00:04:50.556 00:39:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:50.556 00:39:02 -- setup/common.sh@31 -- # read -r var 
val _ 00:04:50.556 00:39:02 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:50.556 00:39:02 -- setup/common.sh@32 -- # continue 00:04:50.556 00:39:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:50.556 00:39:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:50.556 00:39:02 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:50.556 00:39:02 -- setup/common.sh@32 -- # continue 00:04:50.556 00:39:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:50.556 00:39:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:50.556 00:39:02 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:50.556 00:39:02 -- setup/common.sh@32 -- # continue 00:04:50.556 00:39:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:50.556 00:39:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:50.556 00:39:02 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:50.556 00:39:02 -- setup/common.sh@32 -- # continue 00:04:50.556 00:39:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:50.556 00:39:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:50.556 00:39:02 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:50.556 00:39:02 -- setup/common.sh@32 -- # continue 00:04:50.556 00:39:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:50.556 00:39:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:50.556 00:39:02 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:50.556 00:39:02 -- setup/common.sh@32 -- # continue 00:04:50.556 00:39:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:50.556 00:39:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:50.556 00:39:02 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:50.556 00:39:02 -- setup/common.sh@32 -- # continue 00:04:50.556 00:39:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:50.556 00:39:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:50.556 00:39:02 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:50.556 00:39:02 -- setup/common.sh@32 -- # continue 00:04:50.556 00:39:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:50.556 00:39:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:50.556 00:39:02 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:50.556 00:39:02 -- setup/common.sh@32 -- # continue 00:04:50.556 00:39:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:50.556 00:39:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:50.556 00:39:02 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:50.556 00:39:02 -- setup/common.sh@32 -- # continue 00:04:50.556 00:39:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:50.556 00:39:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:50.556 00:39:02 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:50.556 00:39:02 -- setup/common.sh@32 -- # continue 00:04:50.556 00:39:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:50.556 00:39:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:50.556 00:39:02 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:50.556 00:39:02 -- setup/common.sh@32 -- # continue 00:04:50.556 00:39:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:50.556 00:39:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:50.556 00:39:02 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:50.556 00:39:02 -- setup/common.sh@32 -- # continue 00:04:50.556 00:39:02 -- setup/common.sh@31 -- # 
IFS=': ' 00:04:50.556 00:39:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:50.556 00:39:02 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:50.556 00:39:02 -- setup/common.sh@32 -- # continue 00:04:50.556 00:39:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:50.556 00:39:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:50.556 00:39:02 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:50.556 00:39:02 -- setup/common.sh@32 -- # continue 00:04:50.556 00:39:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:50.556 00:39:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:50.556 00:39:02 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:50.556 00:39:02 -- setup/common.sh@32 -- # continue 00:04:50.556 00:39:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:50.556 00:39:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:50.556 00:39:02 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:50.556 00:39:02 -- setup/common.sh@32 -- # continue 00:04:50.556 00:39:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:50.556 00:39:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:50.556 00:39:02 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:50.556 00:39:02 -- setup/common.sh@32 -- # continue 00:04:50.556 00:39:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:50.556 00:39:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:50.556 00:39:02 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:50.556 00:39:02 -- setup/common.sh@32 -- # continue 00:04:50.556 00:39:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:50.556 00:39:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:50.556 00:39:02 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:50.556 00:39:02 -- setup/common.sh@32 -- # continue 00:04:50.556 00:39:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:50.556 00:39:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:50.556 00:39:02 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:50.556 00:39:02 -- setup/common.sh@32 -- # continue 00:04:50.556 00:39:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:50.556 00:39:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:50.556 00:39:02 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:50.556 00:39:02 -- setup/common.sh@32 -- # continue 00:04:50.556 00:39:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:50.556 00:39:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:50.556 00:39:02 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:50.556 00:39:02 -- setup/common.sh@32 -- # continue 00:04:50.556 00:39:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:50.556 00:39:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:50.556 00:39:02 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:50.556 00:39:02 -- setup/common.sh@32 -- # continue 00:04:50.556 00:39:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:50.556 00:39:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:50.556 00:39:02 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:50.556 00:39:02 -- setup/common.sh@32 -- # continue 00:04:50.556 00:39:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:50.556 00:39:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:50.556 00:39:02 -- setup/common.sh@32 -- # [[ VmallocChunk == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:50.556 00:39:02 -- setup/common.sh@32 -- # continue 00:04:50.556 00:39:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:50.556 00:39:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:50.556 00:39:02 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:50.556 00:39:02 -- setup/common.sh@32 -- # continue 00:04:50.556 00:39:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:50.556 00:39:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:50.556 00:39:02 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:50.556 00:39:02 -- setup/common.sh@32 -- # continue 00:04:50.556 00:39:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:50.556 00:39:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:50.556 00:39:02 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:50.556 00:39:02 -- setup/common.sh@32 -- # continue 00:04:50.556 00:39:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:50.556 00:39:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:50.556 00:39:02 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:50.556 00:39:02 -- setup/common.sh@32 -- # continue 00:04:50.556 00:39:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:50.556 00:39:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:50.556 00:39:02 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:50.556 00:39:02 -- setup/common.sh@32 -- # continue 00:04:50.556 00:39:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:50.556 00:39:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:50.556 00:39:02 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:50.556 00:39:02 -- setup/common.sh@32 -- # continue 00:04:50.556 00:39:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:50.556 00:39:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:50.556 00:39:02 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:50.556 00:39:02 -- setup/common.sh@32 -- # continue 00:04:50.556 00:39:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:50.556 00:39:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:50.556 00:39:02 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:50.556 00:39:02 -- setup/common.sh@32 -- # continue 00:04:50.557 00:39:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:50.557 00:39:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:50.557 00:39:02 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:50.557 00:39:02 -- setup/common.sh@32 -- # continue 00:04:50.557 00:39:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:50.557 00:39:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:50.557 00:39:02 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:50.557 00:39:02 -- setup/common.sh@32 -- # continue 00:04:50.557 00:39:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:50.557 00:39:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:50.557 00:39:02 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:50.557 00:39:02 -- setup/common.sh@32 -- # continue 00:04:50.557 00:39:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:50.557 00:39:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:50.557 00:39:02 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:50.557 00:39:02 -- setup/common.sh@32 -- # continue 00:04:50.557 00:39:02 -- setup/common.sh@31 -- # IFS=': ' 
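The continue/read cycles above and below are the same /proc/meminfo field lookup replayed once per requested key (AnonHugePages, HugePages_Surp, HugePages_Rsvd, HugePages_Total): the script reads the meminfo file into an array, strips any "Node N " prefix, and walks the "field: value" pairs until the requested field matches. Below is a minimal bash sketch of that lookup, reconstructed only from what the xtrace shows; the name get_meminfo_sketch is invented for illustration and this is not the actual setup/common.sh implementation.

    #!/usr/bin/env bash
    shopt -s extglob

    # Print the value of one meminfo field, system-wide or for one NUMA node.
    get_meminfo_sketch() {
        local get=$1 node=${2:-}
        local mem_f=/proc/meminfo
        # Per-node stats live under /sys; fall back to /proc when no node is given.
        if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        local mem var val _
        mapfile -t mem < "$mem_f"
        mem=("${mem[@]#Node +([0-9]) }")   # per-node lines start with "Node N "
        while IFS=': ' read -r var val _; do
            if [[ $var == "$get" ]]; then
                echo "${val:-0}"           # fields with no numeric value report 0
                return 0
            fi
        done < <(printf '%s\n' "${mem[@]}")
        echo 0
    }

    # e.g. get_meminfo_sketch HugePages_Total    -> 1024 on this runner
    #      get_meminfo_sketch HugePages_Surp 0   -> 0 for NUMA node 0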
00:04:50.557 00:39:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:50.557 00:39:02 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:50.557 00:39:02 -- setup/common.sh@32 -- # continue 00:04:50.557 00:39:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:50.557 00:39:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:50.557 00:39:02 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:50.557 00:39:02 -- setup/common.sh@33 -- # echo 0 00:04:50.557 00:39:02 -- setup/common.sh@33 -- # return 0 00:04:50.557 00:39:02 -- setup/hugepages.sh@99 -- # surp=0 00:04:50.557 00:39:02 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:50.557 00:39:02 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:50.557 00:39:02 -- setup/common.sh@18 -- # local node= 00:04:50.557 00:39:02 -- setup/common.sh@19 -- # local var val 00:04:50.557 00:39:02 -- setup/common.sh@20 -- # local mem_f mem 00:04:50.557 00:39:02 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:50.557 00:39:02 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:50.557 00:39:02 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:50.557 00:39:02 -- setup/common.sh@28 -- # mapfile -t mem 00:04:50.557 00:39:02 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:50.557 00:39:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:50.557 00:39:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:50.557 00:39:02 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239120 kB' 'MemFree: 6522336 kB' 'MemAvailable: 9451852 kB' 'Buffers: 2684 kB' 'Cached: 3130152 kB' 'SwapCached: 0 kB' 'Active: 497548 kB' 'Inactive: 2753180 kB' 'Active(anon): 128380 kB' 'Inactive(anon): 0 kB' 'Active(file): 369168 kB' 'Inactive(file): 2753180 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'AnonPages: 119444 kB' 'Mapped: 51000 kB' 'Shmem: 10488 kB' 'KReclaimable: 88228 kB' 'Slab: 190568 kB' 'SReclaimable: 88228 kB' 'SUnreclaim: 102340 kB' 'KernelStack: 6768 kB' 'PageTables: 4192 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459588 kB' 'Committed_AS: 320612 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55528 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 196460 kB' 'DirectMap2M: 6094848 kB' 'DirectMap1G: 8388608 kB' 00:04:50.557 00:39:02 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:50.557 00:39:02 -- setup/common.sh@32 -- # continue 00:04:50.557 00:39:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:50.557 00:39:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:50.557 00:39:02 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:50.557 00:39:02 -- setup/common.sh@32 -- # continue 00:04:50.557 00:39:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:50.557 00:39:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:50.557 00:39:02 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:50.557 00:39:02 -- setup/common.sh@32 -- # continue 00:04:50.557 00:39:02 -- setup/common.sh@31 -- # IFS=': 
' 00:04:50.557 00:39:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:50.557 00:39:02 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:50.557 00:39:02 -- setup/common.sh@32 -- # continue 00:04:50.557 00:39:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:50.557 00:39:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:50.557 00:39:02 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:50.557 00:39:02 -- setup/common.sh@32 -- # continue 00:04:50.557 00:39:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:50.557 00:39:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:50.557 00:39:02 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:50.557 00:39:02 -- setup/common.sh@32 -- # continue 00:04:50.557 00:39:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:50.557 00:39:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:50.557 00:39:02 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:50.557 00:39:02 -- setup/common.sh@32 -- # continue 00:04:50.557 00:39:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:50.557 00:39:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:50.557 00:39:02 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:50.557 00:39:02 -- setup/common.sh@32 -- # continue 00:04:50.557 00:39:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:50.557 00:39:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:50.557 00:39:02 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:50.557 00:39:02 -- setup/common.sh@32 -- # continue 00:04:50.557 00:39:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:50.557 00:39:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:50.557 00:39:02 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:50.557 00:39:02 -- setup/common.sh@32 -- # continue 00:04:50.557 00:39:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:50.557 00:39:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:50.557 00:39:02 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:50.557 00:39:02 -- setup/common.sh@32 -- # continue 00:04:50.557 00:39:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:50.557 00:39:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:50.557 00:39:02 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:50.557 00:39:02 -- setup/common.sh@32 -- # continue 00:04:50.557 00:39:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:50.557 00:39:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:50.557 00:39:02 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:50.557 00:39:02 -- setup/common.sh@32 -- # continue 00:04:50.557 00:39:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:50.557 00:39:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:50.557 00:39:02 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:50.557 00:39:02 -- setup/common.sh@32 -- # continue 00:04:50.557 00:39:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:50.557 00:39:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:50.557 00:39:02 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:50.557 00:39:02 -- setup/common.sh@32 -- # continue 00:04:50.557 00:39:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:50.557 00:39:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:50.557 00:39:02 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:50.557 00:39:02 
-- setup/common.sh@32 -- # continue 00:04:50.557 00:39:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:50.557 00:39:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:50.557 00:39:02 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:50.557 00:39:02 -- setup/common.sh@32 -- # continue 00:04:50.557 00:39:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:50.557 00:39:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:50.557 00:39:02 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:50.557 00:39:02 -- setup/common.sh@32 -- # continue 00:04:50.557 00:39:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:50.557 00:39:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:50.557 00:39:02 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:50.557 00:39:02 -- setup/common.sh@32 -- # continue 00:04:50.557 00:39:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:50.557 00:39:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:50.557 00:39:02 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:50.557 00:39:02 -- setup/common.sh@32 -- # continue 00:04:50.557 00:39:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:50.557 00:39:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:50.557 00:39:02 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:50.557 00:39:02 -- setup/common.sh@32 -- # continue 00:04:50.557 00:39:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:50.557 00:39:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:50.557 00:39:02 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:50.557 00:39:02 -- setup/common.sh@32 -- # continue 00:04:50.557 00:39:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:50.557 00:39:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:50.557 00:39:02 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:50.557 00:39:02 -- setup/common.sh@32 -- # continue 00:04:50.557 00:39:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:50.557 00:39:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:50.557 00:39:02 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:50.557 00:39:02 -- setup/common.sh@32 -- # continue 00:04:50.557 00:39:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:50.557 00:39:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:50.557 00:39:02 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:50.557 00:39:02 -- setup/common.sh@32 -- # continue 00:04:50.557 00:39:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:50.557 00:39:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:50.557 00:39:02 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:50.557 00:39:02 -- setup/common.sh@32 -- # continue 00:04:50.557 00:39:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:50.557 00:39:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:50.557 00:39:02 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:50.557 00:39:02 -- setup/common.sh@32 -- # continue 00:04:50.557 00:39:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:50.558 00:39:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:50.558 00:39:02 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:50.558 00:39:02 -- setup/common.sh@32 -- # continue 00:04:50.558 00:39:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:50.558 00:39:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:50.558 00:39:02 -- setup/common.sh@32 
-- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:50.558 00:39:02 -- setup/common.sh@32 -- # continue 00:04:50.558 00:39:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:50.558 00:39:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:50.558 00:39:02 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:50.558 00:39:02 -- setup/common.sh@32 -- # continue 00:04:50.558 00:39:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:50.558 00:39:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:50.558 00:39:02 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:50.558 00:39:02 -- setup/common.sh@32 -- # continue 00:04:50.558 00:39:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:50.558 00:39:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:50.558 00:39:02 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:50.558 00:39:02 -- setup/common.sh@32 -- # continue 00:04:50.558 00:39:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:50.558 00:39:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:50.558 00:39:02 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:50.558 00:39:02 -- setup/common.sh@32 -- # continue 00:04:50.558 00:39:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:50.558 00:39:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:50.558 00:39:02 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:50.558 00:39:02 -- setup/common.sh@32 -- # continue 00:04:50.558 00:39:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:50.558 00:39:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:50.558 00:39:02 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:50.558 00:39:02 -- setup/common.sh@32 -- # continue 00:04:50.558 00:39:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:50.558 00:39:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:50.558 00:39:02 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:50.558 00:39:02 -- setup/common.sh@32 -- # continue 00:04:50.558 00:39:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:50.558 00:39:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:50.558 00:39:02 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:50.558 00:39:02 -- setup/common.sh@32 -- # continue 00:04:50.558 00:39:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:50.558 00:39:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:50.558 00:39:02 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:50.558 00:39:02 -- setup/common.sh@32 -- # continue 00:04:50.558 00:39:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:50.558 00:39:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:50.558 00:39:02 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:50.558 00:39:02 -- setup/common.sh@32 -- # continue 00:04:50.558 00:39:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:50.558 00:39:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:50.558 00:39:02 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:50.558 00:39:02 -- setup/common.sh@32 -- # continue 00:04:50.558 00:39:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:50.558 00:39:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:50.558 00:39:02 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:50.558 00:39:02 -- setup/common.sh@32 -- # continue 00:04:50.558 00:39:02 -- setup/common.sh@31 -- # 
IFS=': ' 00:04:50.558 00:39:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:50.558 00:39:02 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:50.558 00:39:02 -- setup/common.sh@32 -- # continue 00:04:50.558 00:39:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:50.558 00:39:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:50.558 00:39:02 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:50.558 00:39:02 -- setup/common.sh@32 -- # continue 00:04:50.558 00:39:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:50.558 00:39:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:50.558 00:39:02 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:50.558 00:39:02 -- setup/common.sh@32 -- # continue 00:04:50.558 00:39:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:50.558 00:39:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:50.558 00:39:02 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:50.558 00:39:02 -- setup/common.sh@32 -- # continue 00:04:50.558 00:39:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:50.558 00:39:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:50.558 00:39:02 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:50.558 00:39:02 -- setup/common.sh@32 -- # continue 00:04:50.558 00:39:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:50.558 00:39:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:50.558 00:39:02 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:50.558 00:39:02 -- setup/common.sh@32 -- # continue 00:04:50.558 00:39:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:50.558 00:39:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:50.558 00:39:02 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:50.558 00:39:02 -- setup/common.sh@32 -- # continue 00:04:50.558 00:39:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:50.558 00:39:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:50.558 00:39:02 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:50.558 00:39:02 -- setup/common.sh@32 -- # continue 00:04:50.558 00:39:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:50.558 00:39:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:50.558 00:39:02 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:50.558 00:39:02 -- setup/common.sh@32 -- # continue 00:04:50.558 00:39:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:50.558 00:39:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:50.558 00:39:02 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:50.558 00:39:02 -- setup/common.sh@33 -- # echo 0 00:04:50.558 00:39:02 -- setup/common.sh@33 -- # return 0 00:04:50.558 00:39:02 -- setup/hugepages.sh@100 -- # resv=0 00:04:50.558 00:39:02 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:04:50.558 nr_hugepages=1024 00:04:50.558 resv_hugepages=0 00:04:50.558 00:39:02 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:50.558 surplus_hugepages=0 00:04:50.558 00:39:02 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:50.558 anon_hugepages=0 00:04:50.558 00:39:02 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:50.558 00:39:02 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:50.558 00:39:02 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:04:50.558 00:39:02 -- setup/hugepages.sh@110 -- # 
get_meminfo HugePages_Total 00:04:50.558 00:39:02 -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:50.558 00:39:02 -- setup/common.sh@18 -- # local node= 00:04:50.558 00:39:02 -- setup/common.sh@19 -- # local var val 00:04:50.558 00:39:02 -- setup/common.sh@20 -- # local mem_f mem 00:04:50.558 00:39:02 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:50.558 00:39:02 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:50.558 00:39:02 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:50.558 00:39:02 -- setup/common.sh@28 -- # mapfile -t mem 00:04:50.558 00:39:02 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:50.558 00:39:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:50.558 00:39:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:50.558 00:39:02 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239120 kB' 'MemFree: 6522336 kB' 'MemAvailable: 9451852 kB' 'Buffers: 2684 kB' 'Cached: 3130152 kB' 'SwapCached: 0 kB' 'Active: 497492 kB' 'Inactive: 2753180 kB' 'Active(anon): 128324 kB' 'Inactive(anon): 0 kB' 'Active(file): 369168 kB' 'Inactive(file): 2753180 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'AnonPages: 119524 kB' 'Mapped: 51988 kB' 'Shmem: 10488 kB' 'KReclaimable: 88228 kB' 'Slab: 190576 kB' 'SReclaimable: 88228 kB' 'SUnreclaim: 102348 kB' 'KernelStack: 6864 kB' 'PageTables: 4472 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459588 kB' 'Committed_AS: 323048 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55512 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 196460 kB' 'DirectMap2M: 6094848 kB' 'DirectMap1G: 8388608 kB' 00:04:50.558 00:39:02 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:50.558 00:39:02 -- setup/common.sh@32 -- # continue 00:04:50.558 00:39:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:50.558 00:39:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:50.558 00:39:02 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:50.558 00:39:02 -- setup/common.sh@32 -- # continue 00:04:50.558 00:39:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:50.558 00:39:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:50.558 00:39:02 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:50.558 00:39:02 -- setup/common.sh@32 -- # continue 00:04:50.558 00:39:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:50.558 00:39:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:50.558 00:39:02 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:50.558 00:39:02 -- setup/common.sh@32 -- # continue 00:04:50.558 00:39:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:50.558 00:39:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:50.558 00:39:02 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:50.558 00:39:02 -- setup/common.sh@32 -- # continue 00:04:50.558 00:39:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:50.558 00:39:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:50.558 00:39:02 -- setup/common.sh@32 
-- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:50.558 00:39:02 -- setup/common.sh@32 -- # continue 00:04:50.558 00:39:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:50.558 00:39:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:50.558 00:39:02 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:50.558 00:39:02 -- setup/common.sh@32 -- # continue 00:04:50.558 00:39:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:50.559 00:39:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:50.559 00:39:02 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:50.559 00:39:02 -- setup/common.sh@32 -- # continue 00:04:50.559 00:39:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:50.559 00:39:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:50.559 00:39:02 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:50.559 00:39:02 -- setup/common.sh@32 -- # continue 00:04:50.559 00:39:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:50.559 00:39:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:50.559 00:39:02 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:50.559 00:39:02 -- setup/common.sh@32 -- # continue 00:04:50.559 00:39:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:50.559 00:39:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:50.559 00:39:02 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:50.559 00:39:02 -- setup/common.sh@32 -- # continue 00:04:50.559 00:39:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:50.559 00:39:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:50.559 00:39:02 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:50.559 00:39:02 -- setup/common.sh@32 -- # continue 00:04:50.559 00:39:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:50.559 00:39:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:50.559 00:39:02 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:50.559 00:39:02 -- setup/common.sh@32 -- # continue 00:04:50.559 00:39:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:50.559 00:39:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:50.559 00:39:02 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:50.559 00:39:02 -- setup/common.sh@32 -- # continue 00:04:50.559 00:39:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:50.559 00:39:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:50.559 00:39:02 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:50.559 00:39:02 -- setup/common.sh@32 -- # continue 00:04:50.559 00:39:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:50.559 00:39:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:50.559 00:39:02 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:50.559 00:39:02 -- setup/common.sh@32 -- # continue 00:04:50.559 00:39:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:50.559 00:39:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:50.559 00:39:02 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:50.559 00:39:02 -- setup/common.sh@32 -- # continue 00:04:50.559 00:39:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:50.559 00:39:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:50.559 00:39:02 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:50.559 00:39:02 -- setup/common.sh@32 -- # continue 00:04:50.559 00:39:02 -- setup/common.sh@31 -- # 
IFS=': ' 00:04:50.559 00:39:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:50.559 00:39:02 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:50.559 00:39:02 -- setup/common.sh@32 -- # continue 00:04:50.559 00:39:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:50.559 00:39:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:50.559 00:39:02 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:50.559 00:39:02 -- setup/common.sh@32 -- # continue 00:04:50.559 00:39:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:50.559 00:39:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:50.559 00:39:02 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:50.559 00:39:02 -- setup/common.sh@32 -- # continue 00:04:50.559 00:39:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:50.559 00:39:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:50.559 00:39:02 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:50.559 00:39:02 -- setup/common.sh@32 -- # continue 00:04:50.559 00:39:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:50.559 00:39:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:50.559 00:39:02 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:50.559 00:39:02 -- setup/common.sh@32 -- # continue 00:04:50.559 00:39:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:50.559 00:39:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:50.559 00:39:02 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:50.559 00:39:02 -- setup/common.sh@32 -- # continue 00:04:50.559 00:39:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:50.559 00:39:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:50.559 00:39:02 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:50.559 00:39:02 -- setup/common.sh@32 -- # continue 00:04:50.559 00:39:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:50.559 00:39:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:50.559 00:39:02 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:50.559 00:39:02 -- setup/common.sh@32 -- # continue 00:04:50.559 00:39:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:50.559 00:39:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:50.559 00:39:02 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:50.559 00:39:02 -- setup/common.sh@32 -- # continue 00:04:50.559 00:39:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:50.559 00:39:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:50.559 00:39:02 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:50.559 00:39:02 -- setup/common.sh@32 -- # continue 00:04:50.559 00:39:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:50.559 00:39:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:50.559 00:39:02 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:50.559 00:39:02 -- setup/common.sh@32 -- # continue 00:04:50.559 00:39:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:50.559 00:39:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:50.559 00:39:02 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:50.559 00:39:02 -- setup/common.sh@32 -- # continue 00:04:50.559 00:39:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:50.559 00:39:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:50.559 00:39:02 -- setup/common.sh@32 -- # [[ NFS_Unstable == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:50.559 00:39:02 -- setup/common.sh@32 -- # continue 00:04:50.559 00:39:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:50.559 00:39:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:50.559 00:39:02 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:50.559 00:39:02 -- setup/common.sh@32 -- # continue 00:04:50.559 00:39:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:50.559 00:39:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:50.559 00:39:02 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:50.559 00:39:02 -- setup/common.sh@32 -- # continue 00:04:50.559 00:39:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:50.559 00:39:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:50.559 00:39:02 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:50.559 00:39:02 -- setup/common.sh@32 -- # continue 00:04:50.559 00:39:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:50.559 00:39:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:50.559 00:39:02 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:50.559 00:39:02 -- setup/common.sh@32 -- # continue 00:04:50.559 00:39:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:50.559 00:39:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:50.559 00:39:02 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:50.559 00:39:02 -- setup/common.sh@32 -- # continue 00:04:50.559 00:39:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:50.559 00:39:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:50.559 00:39:02 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:50.559 00:39:02 -- setup/common.sh@32 -- # continue 00:04:50.559 00:39:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:50.559 00:39:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:50.559 00:39:02 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:50.559 00:39:02 -- setup/common.sh@32 -- # continue 00:04:50.559 00:39:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:50.559 00:39:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:50.559 00:39:02 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:50.559 00:39:02 -- setup/common.sh@32 -- # continue 00:04:50.559 00:39:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:50.559 00:39:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:50.559 00:39:02 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:50.559 00:39:02 -- setup/common.sh@32 -- # continue 00:04:50.559 00:39:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:50.559 00:39:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:50.559 00:39:02 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:50.559 00:39:02 -- setup/common.sh@32 -- # continue 00:04:50.559 00:39:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:50.559 00:39:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:50.559 00:39:02 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:50.560 00:39:02 -- setup/common.sh@32 -- # continue 00:04:50.560 00:39:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:50.560 00:39:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:50.560 00:39:02 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:50.560 00:39:02 -- setup/common.sh@32 -- # continue 00:04:50.560 00:39:02 -- setup/common.sh@31 -- 
# IFS=': ' 00:04:50.560 00:39:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:50.560 00:39:02 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:50.560 00:39:02 -- setup/common.sh@32 -- # continue 00:04:50.560 00:39:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:50.560 00:39:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:50.560 00:39:02 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:50.560 00:39:02 -- setup/common.sh@32 -- # continue 00:04:50.560 00:39:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:50.560 00:39:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:50.560 00:39:02 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:50.560 00:39:02 -- setup/common.sh@32 -- # continue 00:04:50.560 00:39:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:50.560 00:39:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:50.560 00:39:02 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:50.560 00:39:02 -- setup/common.sh@32 -- # continue 00:04:50.560 00:39:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:50.560 00:39:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:50.560 00:39:02 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:50.560 00:39:02 -- setup/common.sh@32 -- # continue 00:04:50.560 00:39:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:50.560 00:39:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:50.560 00:39:02 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:50.560 00:39:02 -- setup/common.sh@33 -- # echo 1024 00:04:50.560 00:39:02 -- setup/common.sh@33 -- # return 0 00:04:50.560 00:39:02 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:50.560 00:39:02 -- setup/hugepages.sh@112 -- # get_nodes 00:04:50.560 00:39:02 -- setup/hugepages.sh@27 -- # local node 00:04:50.560 00:39:02 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:50.560 00:39:02 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:04:50.560 00:39:02 -- setup/hugepages.sh@32 -- # no_nodes=1 00:04:50.560 00:39:02 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:50.560 00:39:02 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:50.560 00:39:02 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:50.560 00:39:02 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:50.560 00:39:02 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:50.560 00:39:02 -- setup/common.sh@18 -- # local node=0 00:04:50.560 00:39:02 -- setup/common.sh@19 -- # local var val 00:04:50.560 00:39:02 -- setup/common.sh@20 -- # local mem_f mem 00:04:50.560 00:39:02 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:50.560 00:39:02 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:50.560 00:39:02 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:50.560 00:39:02 -- setup/common.sh@28 -- # mapfile -t mem 00:04:50.560 00:39:02 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:50.560 00:39:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:50.560 00:39:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:50.560 00:39:02 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239120 kB' 'MemFree: 6522588 kB' 'MemUsed: 5716532 kB' 'SwapCached: 0 kB' 'Active: 497668 kB' 'Inactive: 2753180 kB' 'Active(anon): 128500 kB' 'Inactive(anon): 0 kB' 
'Active(file): 369168 kB' 'Inactive(file): 2753180 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'FilePages: 3132836 kB' 'Mapped: 50948 kB' 'AnonPages: 119724 kB' 'Shmem: 10488 kB' 'KernelStack: 6832 kB' 'PageTables: 4380 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 88228 kB' 'Slab: 190568 kB' 'SReclaimable: 88228 kB' 'SUnreclaim: 102340 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:04:50.560 00:39:02 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:50.560 00:39:02 -- setup/common.sh@32 -- # continue 00:04:50.560 00:39:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:50.560 00:39:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:50.560 00:39:02 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:50.560 00:39:02 -- setup/common.sh@32 -- # continue 00:04:50.560 00:39:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:50.560 00:39:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:50.560 00:39:02 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:50.560 00:39:02 -- setup/common.sh@32 -- # continue 00:04:50.560 00:39:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:50.560 00:39:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:50.560 00:39:02 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:50.560 00:39:02 -- setup/common.sh@32 -- # continue 00:04:50.560 00:39:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:50.560 00:39:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:50.560 00:39:02 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:50.560 00:39:02 -- setup/common.sh@32 -- # continue 00:04:50.560 00:39:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:50.560 00:39:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:50.560 00:39:02 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:50.560 00:39:02 -- setup/common.sh@32 -- # continue 00:04:50.560 00:39:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:50.560 00:39:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:50.560 00:39:02 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:50.560 00:39:02 -- setup/common.sh@32 -- # continue 00:04:50.560 00:39:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:50.560 00:39:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:50.560 00:39:02 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:50.560 00:39:02 -- setup/common.sh@32 -- # continue 00:04:50.560 00:39:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:50.560 00:39:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:50.560 00:39:02 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:50.560 00:39:02 -- setup/common.sh@32 -- # continue 00:04:50.560 00:39:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:50.560 00:39:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:50.560 00:39:02 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:50.560 00:39:02 -- setup/common.sh@32 -- # continue 00:04:50.560 00:39:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:50.560 00:39:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:50.560 00:39:02 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:50.560 00:39:02 
-- setup/common.sh@32 -- # continue 00:04:50.560 00:39:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:50.560 00:39:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:50.560 00:39:02 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:50.560 00:39:02 -- setup/common.sh@32 -- # continue 00:04:50.560 00:39:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:50.560 00:39:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:50.560 00:39:02 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:50.560 00:39:02 -- setup/common.sh@32 -- # continue 00:04:50.560 00:39:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:50.560 00:39:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:50.560 00:39:02 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:50.560 00:39:02 -- setup/common.sh@32 -- # continue 00:04:50.560 00:39:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:50.560 00:39:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:50.560 00:39:02 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:50.560 00:39:02 -- setup/common.sh@32 -- # continue 00:04:50.560 00:39:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:50.560 00:39:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:50.560 00:39:02 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:50.560 00:39:02 -- setup/common.sh@32 -- # continue 00:04:50.560 00:39:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:50.560 00:39:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:50.560 00:39:02 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:50.560 00:39:02 -- setup/common.sh@32 -- # continue 00:04:50.560 00:39:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:50.560 00:39:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:50.560 00:39:02 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:50.560 00:39:02 -- setup/common.sh@32 -- # continue 00:04:50.560 00:39:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:50.560 00:39:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:50.560 00:39:02 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:50.560 00:39:02 -- setup/common.sh@32 -- # continue 00:04:50.560 00:39:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:50.560 00:39:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:50.560 00:39:02 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:50.560 00:39:02 -- setup/common.sh@32 -- # continue 00:04:50.560 00:39:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:50.560 00:39:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:50.560 00:39:02 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:50.560 00:39:02 -- setup/common.sh@32 -- # continue 00:04:50.560 00:39:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:50.560 00:39:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:50.560 00:39:02 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:50.560 00:39:02 -- setup/common.sh@32 -- # continue 00:04:50.560 00:39:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:50.560 00:39:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:50.560 00:39:02 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:50.560 00:39:02 -- setup/common.sh@32 -- # continue 00:04:50.560 00:39:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:50.560 00:39:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:50.560 00:39:02 -- 
setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:50.560 00:39:02 -- setup/common.sh@32 -- # continue 00:04:50.560 00:39:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:50.560 00:39:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:50.560 00:39:02 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:50.560 00:39:02 -- setup/common.sh@32 -- # continue 00:04:50.560 00:39:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:50.561 00:39:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:50.561 00:39:02 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:50.561 00:39:02 -- setup/common.sh@32 -- # continue 00:04:50.561 00:39:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:50.561 00:39:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:50.561 00:39:02 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:50.561 00:39:02 -- setup/common.sh@32 -- # continue 00:04:50.561 00:39:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:50.561 00:39:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:50.561 00:39:02 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:50.561 00:39:02 -- setup/common.sh@32 -- # continue 00:04:50.561 00:39:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:50.561 00:39:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:50.561 00:39:02 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:50.561 00:39:02 -- setup/common.sh@32 -- # continue 00:04:50.561 00:39:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:50.561 00:39:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:50.561 00:39:02 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:50.561 00:39:02 -- setup/common.sh@32 -- # continue 00:04:50.561 00:39:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:50.561 00:39:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:50.561 00:39:02 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:50.561 00:39:02 -- setup/common.sh@32 -- # continue 00:04:50.561 00:39:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:50.561 00:39:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:50.561 00:39:02 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:50.561 00:39:02 -- setup/common.sh@32 -- # continue 00:04:50.561 00:39:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:50.561 00:39:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:50.561 00:39:02 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:50.561 00:39:02 -- setup/common.sh@32 -- # continue 00:04:50.561 00:39:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:50.561 00:39:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:50.561 00:39:02 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:50.561 00:39:02 -- setup/common.sh@32 -- # continue 00:04:50.561 00:39:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:50.561 00:39:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:50.561 00:39:02 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:50.561 00:39:02 -- setup/common.sh@32 -- # continue 00:04:50.561 00:39:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:50.561 00:39:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:50.561 00:39:02 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:50.561 00:39:02 -- setup/common.sh@32 -- # continue 00:04:50.561 00:39:02 -- 
setup/common.sh@31 -- # IFS=': ' 00:04:50.561 00:39:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:50.561 00:39:02 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:50.561 00:39:02 -- setup/common.sh@33 -- # echo 0 00:04:50.561 00:39:02 -- setup/common.sh@33 -- # return 0 00:04:50.561 00:39:02 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:50.561 00:39:02 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:50.561 00:39:02 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:50.561 00:39:02 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:50.561 node0=1024 expecting 1024 00:04:50.561 00:39:02 -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:04:50.561 00:39:02 -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:04:50.561 00:39:02 -- setup/hugepages.sh@202 -- # CLEAR_HUGE=no 00:04:50.561 00:39:02 -- setup/hugepages.sh@202 -- # NRHUGE=512 00:04:50.561 00:39:02 -- setup/hugepages.sh@202 -- # setup output 00:04:50.561 00:39:02 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:50.561 00:39:02 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:04:50.819 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:51.081 0000:00:06.0 (1b36 0010): Already using the uio_pci_generic driver 00:04:51.081 0000:00:07.0 (1b36 0010): Already using the uio_pci_generic driver 00:04:51.081 INFO: Requested 512 hugepages but 1024 already allocated on node0 00:04:51.081 00:39:03 -- setup/hugepages.sh@204 -- # verify_nr_hugepages 00:04:51.081 00:39:03 -- setup/hugepages.sh@89 -- # local node 00:04:51.081 00:39:03 -- setup/hugepages.sh@90 -- # local sorted_t 00:04:51.081 00:39:03 -- setup/hugepages.sh@91 -- # local sorted_s 00:04:51.081 00:39:03 -- setup/hugepages.sh@92 -- # local surp 00:04:51.081 00:39:03 -- setup/hugepages.sh@93 -- # local resv 00:04:51.081 00:39:03 -- setup/hugepages.sh@94 -- # local anon 00:04:51.081 00:39:03 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:51.081 00:39:03 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:51.081 00:39:03 -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:51.081 00:39:03 -- setup/common.sh@18 -- # local node= 00:04:51.081 00:39:03 -- setup/common.sh@19 -- # local var val 00:04:51.081 00:39:03 -- setup/common.sh@20 -- # local mem_f mem 00:04:51.081 00:39:03 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:51.081 00:39:03 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:51.081 00:39:03 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:51.081 00:39:03 -- setup/common.sh@28 -- # mapfile -t mem 00:04:51.081 00:39:03 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:51.081 00:39:03 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.081 00:39:03 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.082 00:39:03 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239120 kB' 'MemFree: 6524832 kB' 'MemAvailable: 9454348 kB' 'Buffers: 2684 kB' 'Cached: 3130152 kB' 'SwapCached: 0 kB' 'Active: 497788 kB' 'Inactive: 2753180 kB' 'Active(anon): 128620 kB' 'Inactive(anon): 0 kB' 'Active(file): 369168 kB' 'Inactive(file): 2753180 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'AnonPages: 119476 kB' 'Mapped: 51060 kB' 'Shmem: 10488 kB' 'KReclaimable: 88228 kB' 'Slab: 
190584 kB' 'SReclaimable: 88228 kB' 'SUnreclaim: 102356 kB' 'KernelStack: 6872 kB' 'PageTables: 4368 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459588 kB' 'Committed_AS: 322556 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55592 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 196460 kB' 'DirectMap2M: 6094848 kB' 'DirectMap1G: 8388608 kB' 00:04:51.082 00:39:03 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:51.082 00:39:03 -- setup/common.sh@32 -- # continue 00:04:51.082 00:39:03 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.082 00:39:03 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.082 00:39:03 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:51.082 00:39:03 -- setup/common.sh@32 -- # continue 00:04:51.082 00:39:03 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.082 00:39:03 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.082 00:39:03 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:51.082 00:39:03 -- setup/common.sh@32 -- # continue 00:04:51.082 00:39:03 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.082 00:39:03 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.082 00:39:03 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:51.082 00:39:03 -- setup/common.sh@32 -- # continue 00:04:51.082 00:39:03 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.082 00:39:03 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.082 00:39:03 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:51.082 00:39:03 -- setup/common.sh@32 -- # continue 00:04:51.082 00:39:03 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.082 00:39:03 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.082 00:39:03 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:51.082 00:39:03 -- setup/common.sh@32 -- # continue 00:04:51.082 00:39:03 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.082 00:39:03 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.082 00:39:03 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:51.082 00:39:03 -- setup/common.sh@32 -- # continue 00:04:51.082 00:39:03 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.082 00:39:03 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.082 00:39:03 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:51.082 00:39:03 -- setup/common.sh@32 -- # continue 00:04:51.082 00:39:03 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.082 00:39:03 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.082 00:39:03 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:51.082 00:39:03 -- setup/common.sh@32 -- # continue 00:04:51.082 00:39:03 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.082 00:39:03 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.082 00:39:03 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:51.082 00:39:03 -- setup/common.sh@32 -- # continue 00:04:51.082 00:39:03 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.082 00:39:03 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.082 00:39:03 -- setup/common.sh@32 
-- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:51.082 00:39:03 -- setup/common.sh@32 -- # continue 00:04:51.082 00:39:03 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.082 00:39:03 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.082 00:39:03 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:51.082 00:39:03 -- setup/common.sh@32 -- # continue 00:04:51.082 00:39:03 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.082 00:39:03 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.082 00:39:03 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:51.082 00:39:03 -- setup/common.sh@32 -- # continue 00:04:51.082 00:39:03 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.082 00:39:03 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.082 00:39:03 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:51.082 00:39:03 -- setup/common.sh@32 -- # continue 00:04:51.082 00:39:03 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.082 00:39:03 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.082 00:39:03 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:51.082 00:39:03 -- setup/common.sh@32 -- # continue 00:04:51.082 00:39:03 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.082 00:39:03 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.082 00:39:03 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:51.082 00:39:03 -- setup/common.sh@32 -- # continue 00:04:51.082 00:39:03 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.082 00:39:03 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.082 00:39:03 -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:51.082 00:39:03 -- setup/common.sh@32 -- # continue 00:04:51.082 00:39:03 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.082 00:39:03 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.082 00:39:03 -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:51.082 00:39:03 -- setup/common.sh@32 -- # continue 00:04:51.082 00:39:03 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.082 00:39:03 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.082 00:39:03 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:51.082 00:39:03 -- setup/common.sh@32 -- # continue 00:04:51.082 00:39:03 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.082 00:39:03 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.082 00:39:03 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:51.082 00:39:03 -- setup/common.sh@32 -- # continue 00:04:51.082 00:39:03 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.082 00:39:03 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.082 00:39:03 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:51.082 00:39:03 -- setup/common.sh@32 -- # continue 00:04:51.082 00:39:03 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.082 00:39:03 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.082 00:39:03 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:51.082 00:39:03 -- setup/common.sh@32 -- # continue 00:04:51.082 00:39:03 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.082 00:39:03 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.082 00:39:03 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:51.082 00:39:03 -- setup/common.sh@32 -- # continue 00:04:51.082 00:39:03 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.082 00:39:03 -- setup/common.sh@31 -- # read -r var 
val _ 00:04:51.082 00:39:03 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:51.082 00:39:03 -- setup/common.sh@32 -- # continue 00:04:51.082 00:39:03 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.082 00:39:03 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.082 00:39:03 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:51.082 00:39:03 -- setup/common.sh@32 -- # continue 00:04:51.082 00:39:03 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.082 00:39:03 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.082 00:39:03 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:51.082 00:39:03 -- setup/common.sh@32 -- # continue 00:04:51.082 00:39:03 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.082 00:39:03 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.082 00:39:03 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:51.082 00:39:03 -- setup/common.sh@32 -- # continue 00:04:51.082 00:39:03 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.082 00:39:03 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.082 00:39:03 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:51.082 00:39:03 -- setup/common.sh@32 -- # continue 00:04:51.082 00:39:03 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.082 00:39:03 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.082 00:39:03 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:51.082 00:39:03 -- setup/common.sh@32 -- # continue 00:04:51.082 00:39:03 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.082 00:39:03 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.082 00:39:03 -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:51.082 00:39:03 -- setup/common.sh@32 -- # continue 00:04:51.082 00:39:03 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.082 00:39:03 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.082 00:39:03 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:51.082 00:39:03 -- setup/common.sh@32 -- # continue 00:04:51.082 00:39:03 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.082 00:39:03 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.082 00:39:03 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:51.082 00:39:03 -- setup/common.sh@32 -- # continue 00:04:51.082 00:39:03 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.082 00:39:03 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.082 00:39:03 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:51.082 00:39:03 -- setup/common.sh@32 -- # continue 00:04:51.082 00:39:03 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.082 00:39:03 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.082 00:39:03 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:51.082 00:39:03 -- setup/common.sh@32 -- # continue 00:04:51.082 00:39:03 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.082 00:39:03 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.082 00:39:03 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:51.082 00:39:03 -- setup/common.sh@32 -- # continue 00:04:51.082 00:39:03 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.082 00:39:03 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.082 00:39:03 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:51.082 00:39:03 -- setup/common.sh@32 -- # continue 00:04:51.082 00:39:03 -- 
setup/common.sh@31 -- # IFS=': ' 00:04:51.082 00:39:03 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.082 00:39:03 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:51.082 00:39:03 -- setup/common.sh@32 -- # continue 00:04:51.082 00:39:03 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.082 00:39:03 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.083 00:39:03 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:51.083 00:39:03 -- setup/common.sh@32 -- # continue 00:04:51.083 00:39:03 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.083 00:39:03 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.083 00:39:03 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:51.083 00:39:03 -- setup/common.sh@32 -- # continue 00:04:51.083 00:39:03 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.083 00:39:03 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.083 00:39:03 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:51.083 00:39:03 -- setup/common.sh@32 -- # continue 00:04:51.083 00:39:03 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.083 00:39:03 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.083 00:39:03 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:51.083 00:39:03 -- setup/common.sh@33 -- # echo 0 00:04:51.083 00:39:03 -- setup/common.sh@33 -- # return 0 00:04:51.083 00:39:03 -- setup/hugepages.sh@97 -- # anon=0 00:04:51.083 00:39:03 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:51.083 00:39:03 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:51.083 00:39:03 -- setup/common.sh@18 -- # local node= 00:04:51.083 00:39:03 -- setup/common.sh@19 -- # local var val 00:04:51.083 00:39:03 -- setup/common.sh@20 -- # local mem_f mem 00:04:51.083 00:39:03 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:51.083 00:39:03 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:51.083 00:39:03 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:51.083 00:39:03 -- setup/common.sh@28 -- # mapfile -t mem 00:04:51.083 00:39:03 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:51.083 00:39:03 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.083 00:39:03 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.083 00:39:03 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239120 kB' 'MemFree: 6525120 kB' 'MemAvailable: 9454636 kB' 'Buffers: 2684 kB' 'Cached: 3130152 kB' 'SwapCached: 0 kB' 'Active: 497820 kB' 'Inactive: 2753180 kB' 'Active(anon): 128652 kB' 'Inactive(anon): 0 kB' 'Active(file): 369168 kB' 'Inactive(file): 2753180 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'AnonPages: 119252 kB' 'Mapped: 50904 kB' 'Shmem: 10488 kB' 'KReclaimable: 88228 kB' 'Slab: 190584 kB' 'SReclaimable: 88228 kB' 'SUnreclaim: 102356 kB' 'KernelStack: 6816 kB' 'PageTables: 4344 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459588 kB' 'Committed_AS: 320612 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55512 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 
'Hugetlb: 2097152 kB' 'DirectMap4k: 196460 kB' 'DirectMap2M: 6094848 kB' 'DirectMap1G: 8388608 kB' 00:04:51.083 00:39:03 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.083 00:39:03 -- setup/common.sh@32 -- # continue 00:04:51.083 00:39:03 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.083 00:39:03 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.083 00:39:03 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.083 00:39:03 -- setup/common.sh@32 -- # continue 00:04:51.083 00:39:03 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.083 00:39:03 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.083 00:39:03 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.083 00:39:03 -- setup/common.sh@32 -- # continue 00:04:51.083 00:39:03 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.083 00:39:03 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.083 00:39:03 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.083 00:39:03 -- setup/common.sh@32 -- # continue 00:04:51.083 00:39:03 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.083 00:39:03 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.083 00:39:03 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.083 00:39:03 -- setup/common.sh@32 -- # continue 00:04:51.083 00:39:03 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.083 00:39:03 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.083 00:39:03 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.083 00:39:03 -- setup/common.sh@32 -- # continue 00:04:51.083 00:39:03 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.083 00:39:03 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.083 00:39:03 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.083 00:39:03 -- setup/common.sh@32 -- # continue 00:04:51.083 00:39:03 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.083 00:39:03 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.083 00:39:03 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.083 00:39:03 -- setup/common.sh@32 -- # continue 00:04:51.083 00:39:03 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.083 00:39:03 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.083 00:39:03 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.083 00:39:03 -- setup/common.sh@32 -- # continue 00:04:51.083 00:39:03 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.083 00:39:03 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.083 00:39:03 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.083 00:39:03 -- setup/common.sh@32 -- # continue 00:04:51.083 00:39:03 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.083 00:39:03 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.083 00:39:03 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.083 00:39:03 -- setup/common.sh@32 -- # continue 00:04:51.083 00:39:03 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.083 00:39:03 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.083 00:39:03 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.083 00:39:03 -- setup/common.sh@32 -- # continue 00:04:51.083 00:39:03 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.083 00:39:03 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.083 00:39:03 -- setup/common.sh@32 -- # [[ Unevictable == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.083 00:39:03 -- setup/common.sh@32 -- # continue 00:04:51.083 00:39:03 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.083 00:39:03 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.083 00:39:03 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.083 00:39:03 -- setup/common.sh@32 -- # continue 00:04:51.083 00:39:03 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.083 00:39:03 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.083 00:39:03 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.083 00:39:03 -- setup/common.sh@32 -- # continue 00:04:51.083 00:39:03 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.083 00:39:03 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.083 00:39:03 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.083 00:39:03 -- setup/common.sh@32 -- # continue 00:04:51.083 00:39:03 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.083 00:39:03 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.083 00:39:03 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.083 00:39:03 -- setup/common.sh@32 -- # continue 00:04:51.083 00:39:03 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.083 00:39:03 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.083 00:39:03 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.083 00:39:03 -- setup/common.sh@32 -- # continue 00:04:51.083 00:39:03 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.083 00:39:03 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.083 00:39:03 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.083 00:39:03 -- setup/common.sh@32 -- # continue 00:04:51.083 00:39:03 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.083 00:39:03 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.083 00:39:03 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.083 00:39:03 -- setup/common.sh@32 -- # continue 00:04:51.083 00:39:03 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.083 00:39:03 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.083 00:39:03 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.083 00:39:03 -- setup/common.sh@32 -- # continue 00:04:51.083 00:39:03 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.083 00:39:03 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.083 00:39:03 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.083 00:39:03 -- setup/common.sh@32 -- # continue 00:04:51.083 00:39:03 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.083 00:39:03 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.083 00:39:03 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.083 00:39:03 -- setup/common.sh@32 -- # continue 00:04:51.083 00:39:03 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.083 00:39:03 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.083 00:39:03 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.083 00:39:03 -- setup/common.sh@32 -- # continue 00:04:51.083 00:39:03 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.083 00:39:03 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.083 00:39:03 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.083 00:39:03 -- setup/common.sh@32 -- # continue 00:04:51.083 00:39:03 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.083 00:39:03 -- setup/common.sh@31 -- # read -r var val _ 
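At this point the trace is inside setup/common.sh's get_meminfo, scanning /proc/meminfo entry by entry for HugePages_Surp (the AnonHugePages lookup just above returned 0 the same way, and HugePages_Rsvd and HugePages_Total follow below). A minimal sketch of that lookup pattern, assuming only what the trace shows - the helper name get_meminfo_sketch, the fallback when a key is absent, and the exact option handling are illustrative, not the real setup/common.sh source:

get_meminfo_sketch() {
    local get=$1 node=${2:-}
    local mem_f=/proc/meminfo
    # With a node argument, read the per-node file instead (as the trace does
    # later for node0 via /sys/devices/system/node/node0/meminfo).
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    local line var val _
    while IFS= read -r line; do
        # Per-node meminfo lines carry a "Node <N> " prefix; drop it first.
        [[ -n $node ]] && line=${line#"Node $node "}
        # Split on ':' / whitespace, exactly like the IFS=': ' read in the log.
        IFS=': ' read -r var val _ <<< "$line"
        if [[ $var == "$get" ]]; then
            echo "${val:-0}"   # found: print the value (0 for HugePages_Surp here)
            return 0
        fi
    done < "$mem_f"
    echo 0   # assumption: fall back to 0 if the key never appears
}

For example, get_meminfo_sketch HugePages_Surp prints 0 in this run, and get_meminfo_sketch HugePages_Total 0 reads node0's meminfo the way the per-node pass further down does.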
00:04:51.083 00:39:03 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.083 00:39:03 -- setup/common.sh@32 -- # continue 00:04:51.083 00:39:03 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.083 00:39:03 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.083 00:39:03 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.083 00:39:03 -- setup/common.sh@32 -- # continue 00:04:51.083 00:39:03 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.083 00:39:03 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.083 00:39:03 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.083 00:39:03 -- setup/common.sh@32 -- # continue 00:04:51.083 00:39:03 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.083 00:39:03 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.083 00:39:03 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.083 00:39:03 -- setup/common.sh@32 -- # continue 00:04:51.083 00:39:03 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.083 00:39:03 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.084 00:39:03 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.084 00:39:03 -- setup/common.sh@32 -- # continue 00:04:51.084 00:39:03 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.084 00:39:03 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.084 00:39:03 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.084 00:39:03 -- setup/common.sh@32 -- # continue 00:04:51.084 00:39:03 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.084 00:39:03 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.084 00:39:03 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.084 00:39:03 -- setup/common.sh@32 -- # continue 00:04:51.084 00:39:03 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.084 00:39:03 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.084 00:39:03 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.084 00:39:03 -- setup/common.sh@32 -- # continue 00:04:51.084 00:39:03 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.084 00:39:03 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.084 00:39:03 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.084 00:39:03 -- setup/common.sh@32 -- # continue 00:04:51.084 00:39:03 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.084 00:39:03 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.084 00:39:03 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.084 00:39:03 -- setup/common.sh@32 -- # continue 00:04:51.084 00:39:03 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.084 00:39:03 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.084 00:39:03 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.084 00:39:03 -- setup/common.sh@32 -- # continue 00:04:51.084 00:39:03 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.084 00:39:03 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.084 00:39:03 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.084 00:39:03 -- setup/common.sh@32 -- # continue 00:04:51.084 00:39:03 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.084 00:39:03 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.084 00:39:03 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.084 00:39:03 -- setup/common.sh@32 -- # continue 00:04:51.084 
00:39:03 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.084 00:39:03 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.084 00:39:03 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.084 00:39:03 -- setup/common.sh@32 -- # continue 00:04:51.084 00:39:03 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.084 00:39:03 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.084 00:39:03 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.084 00:39:03 -- setup/common.sh@32 -- # continue 00:04:51.084 00:39:03 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.084 00:39:03 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.084 00:39:03 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.084 00:39:03 -- setup/common.sh@32 -- # continue 00:04:51.084 00:39:03 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.084 00:39:03 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.084 00:39:03 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.084 00:39:03 -- setup/common.sh@32 -- # continue 00:04:51.084 00:39:03 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.084 00:39:03 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.084 00:39:03 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.084 00:39:03 -- setup/common.sh@32 -- # continue 00:04:51.084 00:39:03 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.084 00:39:03 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.084 00:39:03 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.084 00:39:03 -- setup/common.sh@32 -- # continue 00:04:51.084 00:39:03 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.084 00:39:03 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.084 00:39:03 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.084 00:39:03 -- setup/common.sh@32 -- # continue 00:04:51.084 00:39:03 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.084 00:39:03 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.084 00:39:03 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.084 00:39:03 -- setup/common.sh@32 -- # continue 00:04:51.084 00:39:03 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.084 00:39:03 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.084 00:39:03 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.084 00:39:03 -- setup/common.sh@32 -- # continue 00:04:51.084 00:39:03 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.084 00:39:03 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.084 00:39:03 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.084 00:39:03 -- setup/common.sh@32 -- # continue 00:04:51.084 00:39:03 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.084 00:39:03 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.084 00:39:03 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.084 00:39:03 -- setup/common.sh@32 -- # continue 00:04:51.084 00:39:03 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.084 00:39:03 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.084 00:39:03 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.084 00:39:03 -- setup/common.sh@32 -- # continue 00:04:51.084 00:39:03 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.084 00:39:03 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.084 00:39:03 -- setup/common.sh@32 -- 
# [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.084 00:39:03 -- setup/common.sh@32 -- # continue 00:04:51.084 00:39:03 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.084 00:39:03 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.084 00:39:03 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.084 00:39:03 -- setup/common.sh@33 -- # echo 0 00:04:51.084 00:39:03 -- setup/common.sh@33 -- # return 0 00:04:51.084 00:39:03 -- setup/hugepages.sh@99 -- # surp=0 00:04:51.084 00:39:03 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:51.084 00:39:03 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:51.084 00:39:03 -- setup/common.sh@18 -- # local node= 00:04:51.084 00:39:03 -- setup/common.sh@19 -- # local var val 00:04:51.084 00:39:03 -- setup/common.sh@20 -- # local mem_f mem 00:04:51.084 00:39:03 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:51.084 00:39:03 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:51.084 00:39:03 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:51.084 00:39:03 -- setup/common.sh@28 -- # mapfile -t mem 00:04:51.084 00:39:03 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:51.084 00:39:03 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.084 00:39:03 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.084 00:39:03 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239120 kB' 'MemFree: 6525120 kB' 'MemAvailable: 9454640 kB' 'Buffers: 2684 kB' 'Cached: 3130156 kB' 'SwapCached: 0 kB' 'Active: 497556 kB' 'Inactive: 2753184 kB' 'Active(anon): 128388 kB' 'Inactive(anon): 0 kB' 'Active(file): 369168 kB' 'Inactive(file): 2753184 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'AnonPages: 119060 kB' 'Mapped: 50896 kB' 'Shmem: 10488 kB' 'KReclaimable: 88228 kB' 'Slab: 190580 kB' 'SReclaimable: 88228 kB' 'SUnreclaim: 102352 kB' 'KernelStack: 6832 kB' 'PageTables: 4380 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459588 kB' 'Committed_AS: 320612 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55512 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 196460 kB' 'DirectMap2M: 6094848 kB' 'DirectMap1G: 8388608 kB' 00:04:51.084 00:39:03 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:51.084 00:39:03 -- setup/common.sh@32 -- # continue 00:04:51.084 00:39:03 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.084 00:39:03 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.084 00:39:03 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:51.084 00:39:03 -- setup/common.sh@32 -- # continue 00:04:51.084 00:39:03 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.084 00:39:03 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.084 00:39:03 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:51.084 00:39:03 -- setup/common.sh@32 -- # continue 00:04:51.084 00:39:03 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.084 00:39:03 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.084 00:39:03 -- setup/common.sh@32 
-- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:51.084 00:39:03 -- setup/common.sh@32 -- # continue 00:04:51.084 00:39:03 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.084 00:39:03 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.084 00:39:03 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:51.084 00:39:03 -- setup/common.sh@32 -- # continue 00:04:51.084 00:39:03 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.084 00:39:03 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.084 00:39:03 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:51.084 00:39:03 -- setup/common.sh@32 -- # continue 00:04:51.084 00:39:03 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.084 00:39:03 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.084 00:39:03 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:51.084 00:39:03 -- setup/common.sh@32 -- # continue 00:04:51.084 00:39:03 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.084 00:39:03 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.084 00:39:03 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:51.084 00:39:03 -- setup/common.sh@32 -- # continue 00:04:51.084 00:39:03 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.084 00:39:03 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.084 00:39:03 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:51.084 00:39:03 -- setup/common.sh@32 -- # continue 00:04:51.084 00:39:03 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.084 00:39:03 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.084 00:39:03 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:51.084 00:39:03 -- setup/common.sh@32 -- # continue 00:04:51.085 00:39:03 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.085 00:39:03 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.085 00:39:03 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:51.085 00:39:03 -- setup/common.sh@32 -- # continue 00:04:51.085 00:39:03 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.085 00:39:03 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.085 00:39:03 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:51.085 00:39:03 -- setup/common.sh@32 -- # continue 00:04:51.085 00:39:03 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.085 00:39:03 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.085 00:39:03 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:51.085 00:39:03 -- setup/common.sh@32 -- # continue 00:04:51.085 00:39:03 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.085 00:39:03 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.085 00:39:03 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:51.085 00:39:03 -- setup/common.sh@32 -- # continue 00:04:51.085 00:39:03 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.085 00:39:03 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.085 00:39:03 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:51.085 00:39:03 -- setup/common.sh@32 -- # continue 00:04:51.085 00:39:03 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.085 00:39:03 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.085 00:39:03 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:51.085 00:39:03 -- setup/common.sh@32 -- # continue 00:04:51.085 00:39:03 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.085 
00:39:03 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.085 00:39:03 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:51.085 00:39:03 -- setup/common.sh@32 -- # continue 00:04:51.085 00:39:03 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.085 00:39:03 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.085 00:39:03 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:51.085 00:39:03 -- setup/common.sh@32 -- # continue 00:04:51.085 00:39:03 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.085 00:39:03 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.085 00:39:03 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:51.085 00:39:03 -- setup/common.sh@32 -- # continue 00:04:51.085 00:39:03 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.085 00:39:03 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.085 00:39:03 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:51.085 00:39:03 -- setup/common.sh@32 -- # continue 00:04:51.085 00:39:03 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.085 00:39:03 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.085 00:39:03 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:51.085 00:39:03 -- setup/common.sh@32 -- # continue 00:04:51.085 00:39:03 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.085 00:39:03 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.085 00:39:03 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:51.085 00:39:03 -- setup/common.sh@32 -- # continue 00:04:51.085 00:39:03 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.085 00:39:03 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.085 00:39:03 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:51.085 00:39:03 -- setup/common.sh@32 -- # continue 00:04:51.085 00:39:03 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.085 00:39:03 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.085 00:39:03 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:51.085 00:39:03 -- setup/common.sh@32 -- # continue 00:04:51.085 00:39:03 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.085 00:39:03 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.085 00:39:03 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:51.085 00:39:03 -- setup/common.sh@32 -- # continue 00:04:51.085 00:39:03 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.085 00:39:03 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.085 00:39:03 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:51.085 00:39:03 -- setup/common.sh@32 -- # continue 00:04:51.085 00:39:03 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.085 00:39:03 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.085 00:39:03 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:51.085 00:39:03 -- setup/common.sh@32 -- # continue 00:04:51.085 00:39:03 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.085 00:39:03 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.085 00:39:03 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:51.085 00:39:03 -- setup/common.sh@32 -- # continue 00:04:51.085 00:39:03 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.085 00:39:03 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.085 00:39:03 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:51.085 00:39:03 -- setup/common.sh@32 -- # 
continue 00:04:51.085 00:39:03 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.085 00:39:03 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.085 00:39:03 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:51.085 00:39:03 -- setup/common.sh@32 -- # continue 00:04:51.085 00:39:03 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.085 00:39:03 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.085 00:39:03 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:51.085 00:39:03 -- setup/common.sh@32 -- # continue 00:04:51.085 00:39:03 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.085 00:39:03 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.085 00:39:03 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:51.085 00:39:03 -- setup/common.sh@32 -- # continue 00:04:51.085 00:39:03 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.085 00:39:03 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.085 00:39:03 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:51.085 00:39:03 -- setup/common.sh@32 -- # continue 00:04:51.085 00:39:03 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.085 00:39:03 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.085 00:39:03 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:51.085 00:39:03 -- setup/common.sh@32 -- # continue 00:04:51.085 00:39:03 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.085 00:39:03 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.085 00:39:03 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:51.085 00:39:03 -- setup/common.sh@32 -- # continue 00:04:51.085 00:39:03 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.085 00:39:03 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.085 00:39:03 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:51.085 00:39:03 -- setup/common.sh@32 -- # continue 00:04:51.085 00:39:03 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.085 00:39:03 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.085 00:39:03 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:51.085 00:39:03 -- setup/common.sh@32 -- # continue 00:04:51.085 00:39:03 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.085 00:39:03 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.085 00:39:03 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:51.085 00:39:03 -- setup/common.sh@32 -- # continue 00:04:51.085 00:39:03 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.085 00:39:03 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.085 00:39:03 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:51.085 00:39:03 -- setup/common.sh@32 -- # continue 00:04:51.085 00:39:03 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.085 00:39:03 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.085 00:39:03 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:51.085 00:39:03 -- setup/common.sh@32 -- # continue 00:04:51.085 00:39:03 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.085 00:39:03 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.085 00:39:03 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:51.085 00:39:03 -- setup/common.sh@32 -- # continue 00:04:51.085 00:39:03 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.085 00:39:03 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.085 00:39:03 -- 
setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:51.085 00:39:03 -- setup/common.sh@32 -- # continue 00:04:51.085 00:39:03 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.085 00:39:03 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.085 00:39:03 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:51.085 00:39:03 -- setup/common.sh@32 -- # continue 00:04:51.085 00:39:03 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.085 00:39:03 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.085 00:39:03 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:51.085 00:39:03 -- setup/common.sh@32 -- # continue 00:04:51.085 00:39:03 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.085 00:39:03 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.085 00:39:03 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:51.085 00:39:03 -- setup/common.sh@32 -- # continue 00:04:51.085 00:39:03 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.085 00:39:03 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.085 00:39:03 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:51.085 00:39:03 -- setup/common.sh@32 -- # continue 00:04:51.085 00:39:03 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.085 00:39:03 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.085 00:39:03 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:51.085 00:39:03 -- setup/common.sh@32 -- # continue 00:04:51.085 00:39:03 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.085 00:39:03 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.085 00:39:03 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:51.085 00:39:03 -- setup/common.sh@32 -- # continue 00:04:51.085 00:39:03 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.085 00:39:03 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.085 00:39:03 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:51.085 00:39:03 -- setup/common.sh@32 -- # continue 00:04:51.085 00:39:03 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.085 00:39:03 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.085 00:39:03 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:51.085 00:39:03 -- setup/common.sh@32 -- # continue 00:04:51.085 00:39:03 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.085 00:39:03 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.085 00:39:03 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:51.086 00:39:03 -- setup/common.sh@33 -- # echo 0 00:04:51.086 00:39:03 -- setup/common.sh@33 -- # return 0 00:04:51.086 00:39:03 -- setup/hugepages.sh@100 -- # resv=0 00:04:51.086 nr_hugepages=1024 00:04:51.086 00:39:03 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:04:51.086 resv_hugepages=0 00:04:51.086 00:39:03 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:51.086 surplus_hugepages=0 00:04:51.086 00:39:03 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:51.086 anon_hugepages=0 00:04:51.086 00:39:03 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:51.086 00:39:03 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:51.086 00:39:03 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:04:51.086 00:39:03 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:51.086 00:39:03 -- setup/common.sh@17 -- # local 
get=HugePages_Total 00:04:51.086 00:39:03 -- setup/common.sh@18 -- # local node= 00:04:51.086 00:39:03 -- setup/common.sh@19 -- # local var val 00:04:51.086 00:39:03 -- setup/common.sh@20 -- # local mem_f mem 00:04:51.086 00:39:03 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:51.086 00:39:03 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:51.086 00:39:03 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:51.086 00:39:03 -- setup/common.sh@28 -- # mapfile -t mem 00:04:51.086 00:39:03 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:51.086 00:39:03 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.086 00:39:03 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239120 kB' 'MemFree: 6525120 kB' 'MemAvailable: 9454640 kB' 'Buffers: 2684 kB' 'Cached: 3130156 kB' 'SwapCached: 0 kB' 'Active: 497344 kB' 'Inactive: 2753184 kB' 'Active(anon): 128176 kB' 'Inactive(anon): 0 kB' 'Active(file): 369168 kB' 'Inactive(file): 2753184 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'AnonPages: 119384 kB' 'Mapped: 50948 kB' 'Shmem: 10488 kB' 'KReclaimable: 88228 kB' 'Slab: 190576 kB' 'SReclaimable: 88228 kB' 'SUnreclaim: 102348 kB' 'KernelStack: 6816 kB' 'PageTables: 4328 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459588 kB' 'Committed_AS: 320612 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55512 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 196460 kB' 'DirectMap2M: 6094848 kB' 'DirectMap1G: 8388608 kB' 00:04:51.086 00:39:03 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.086 00:39:03 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:51.086 00:39:03 -- setup/common.sh@32 -- # continue 00:04:51.086 00:39:03 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.086 00:39:03 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.086 00:39:03 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:51.086 00:39:03 -- setup/common.sh@32 -- # continue 00:04:51.086 00:39:03 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.086 00:39:03 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.086 00:39:03 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:51.086 00:39:03 -- setup/common.sh@32 -- # continue 00:04:51.086 00:39:03 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.086 00:39:03 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.086 00:39:03 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:51.086 00:39:03 -- setup/common.sh@32 -- # continue 00:04:51.086 00:39:03 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.086 00:39:03 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.086 00:39:03 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:51.086 00:39:03 -- setup/common.sh@32 -- # continue 00:04:51.086 00:39:03 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.086 00:39:03 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.086 00:39:03 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:51.086 00:39:03 -- 
setup/common.sh@32 -- # continue 00:04:51.086 00:39:03 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.086 00:39:03 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.086 00:39:03 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:51.086 00:39:03 -- setup/common.sh@32 -- # continue 00:04:51.086 00:39:03 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.086 00:39:03 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.086 00:39:03 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:51.086 00:39:03 -- setup/common.sh@32 -- # continue 00:04:51.086 00:39:03 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.086 00:39:03 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.086 00:39:03 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:51.086 00:39:03 -- setup/common.sh@32 -- # continue 00:04:51.086 00:39:03 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.086 00:39:03 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.086 00:39:03 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:51.086 00:39:03 -- setup/common.sh@32 -- # continue 00:04:51.086 00:39:03 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.086 00:39:03 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.086 00:39:03 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:51.086 00:39:03 -- setup/common.sh@32 -- # continue 00:04:51.086 00:39:03 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.086 00:39:03 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.086 00:39:03 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:51.086 00:39:03 -- setup/common.sh@32 -- # continue 00:04:51.086 00:39:03 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.086 00:39:03 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.086 00:39:03 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:51.086 00:39:03 -- setup/common.sh@32 -- # continue 00:04:51.086 00:39:03 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.086 00:39:03 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.086 00:39:03 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:51.086 00:39:03 -- setup/common.sh@32 -- # continue 00:04:51.086 00:39:03 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.086 00:39:03 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.086 00:39:03 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:51.086 00:39:03 -- setup/common.sh@32 -- # continue 00:04:51.086 00:39:03 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.086 00:39:03 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.086 00:39:03 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:51.086 00:39:03 -- setup/common.sh@32 -- # continue 00:04:51.086 00:39:03 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.086 00:39:03 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.086 00:39:03 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:51.086 00:39:03 -- setup/common.sh@32 -- # continue 00:04:51.086 00:39:03 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.086 00:39:03 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.086 00:39:03 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:51.086 00:39:03 -- setup/common.sh@32 -- # continue 00:04:51.086 00:39:03 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.086 00:39:03 -- setup/common.sh@31 -- # read -r var val _ 
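The arithmetic being exercised around this point - HugePages_Total must equal the requested nr_hugepages plus surplus and reserved pages, after which the totals are re-read per NUMA node before echoing "node0=1024 expecting 1024" - can be condensed into a short sketch. It reuses get_meminfo_sketch from above and collapses the per-node bookkeeping that setup/hugepages.sh's verify_nr_hugepages does with its nodes_test/nodes_sys arrays; the per-node comparison against the full expected count only holds on a single-node VM like the one in this run:

verify_nr_hugepages_sketch() {
    local expected=$1 total surp resv node per_node
    total=$(get_meminfo_sketch HugePages_Total)
    surp=$(get_meminfo_sketch HugePages_Surp)
    resv=$(get_meminfo_sketch HugePages_Rsvd)
    # Global check mirrored from the trace: (( 1024 == nr_hugepages + surp + resv ))
    (( total == expected + surp + resv )) || return 1
    # Per-node attribution, read from /sys/devices/system/node/nodeN/meminfo.
    for node in /sys/devices/system/node/node[0-9]*; do
        node=${node##*node}
        per_node=$(get_meminfo_sketch HugePages_Total "$node")
        echo "node$node=$per_node expecting $expected"
        # Simplification: on a one-node system each node must carry the full count.
        [[ $per_node == "$expected" ]] || return 1
    done
}

With expected=1024 this reproduces the pass/fail decision visible in the log; a multi-node host would instead split the expected count across nodes, which the real script handles and this sketch does not.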
00:04:51.086 00:39:03 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:51.086 00:39:03 -- setup/common.sh@32 -- # continue 00:04:51.086 00:39:03 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.086 00:39:03 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.086 00:39:03 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:51.086 00:39:03 -- setup/common.sh@32 -- # continue 00:04:51.086 00:39:03 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.086 00:39:03 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.086 00:39:03 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:51.086 00:39:03 -- setup/common.sh@32 -- # continue 00:04:51.086 00:39:03 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.086 00:39:03 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.086 00:39:03 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:51.086 00:39:03 -- setup/common.sh@32 -- # continue 00:04:51.086 00:39:03 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.086 00:39:03 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.086 00:39:03 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:51.086 00:39:03 -- setup/common.sh@32 -- # continue 00:04:51.086 00:39:03 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.086 00:39:03 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.086 00:39:03 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:51.086 00:39:03 -- setup/common.sh@32 -- # continue 00:04:51.086 00:39:03 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.086 00:39:03 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.086 00:39:03 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:51.087 00:39:03 -- setup/common.sh@32 -- # continue 00:04:51.087 00:39:03 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.087 00:39:03 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.087 00:39:03 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:51.087 00:39:03 -- setup/common.sh@32 -- # continue 00:04:51.087 00:39:03 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.087 00:39:03 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.087 00:39:03 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:51.087 00:39:03 -- setup/common.sh@32 -- # continue 00:04:51.087 00:39:03 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.087 00:39:03 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.087 00:39:03 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:51.087 00:39:03 -- setup/common.sh@32 -- # continue 00:04:51.087 00:39:03 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.087 00:39:03 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.087 00:39:03 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:51.087 00:39:03 -- setup/common.sh@32 -- # continue 00:04:51.087 00:39:03 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.087 00:39:03 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.087 00:39:03 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:51.087 00:39:03 -- setup/common.sh@32 -- # continue 00:04:51.087 00:39:03 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.087 00:39:03 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.087 00:39:03 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:51.087 00:39:03 -- setup/common.sh@32 -- # continue 00:04:51.087 
00:39:03 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.087 00:39:03 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.087 00:39:03 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:51.087 00:39:03 -- setup/common.sh@32 -- # continue 00:04:51.087 00:39:03 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.087 00:39:03 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.087 00:39:03 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:51.087 00:39:03 -- setup/common.sh@32 -- # continue 00:04:51.087 00:39:03 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.087 00:39:03 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.087 00:39:03 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:51.087 00:39:03 -- setup/common.sh@32 -- # continue 00:04:51.087 00:39:03 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.087 00:39:03 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.087 00:39:03 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:51.087 00:39:03 -- setup/common.sh@32 -- # continue 00:04:51.087 00:39:03 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.087 00:39:03 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.087 00:39:03 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:51.087 00:39:03 -- setup/common.sh@32 -- # continue 00:04:51.087 00:39:03 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.087 00:39:03 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.087 00:39:03 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:51.087 00:39:03 -- setup/common.sh@32 -- # continue 00:04:51.087 00:39:03 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.087 00:39:03 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.087 00:39:03 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:51.087 00:39:03 -- setup/common.sh@32 -- # continue 00:04:51.087 00:39:03 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.087 00:39:03 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.087 00:39:03 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:51.087 00:39:03 -- setup/common.sh@32 -- # continue 00:04:51.087 00:39:03 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.087 00:39:03 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.087 00:39:03 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:51.087 00:39:03 -- setup/common.sh@32 -- # continue 00:04:51.087 00:39:03 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.087 00:39:03 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.087 00:39:03 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:51.087 00:39:03 -- setup/common.sh@32 -- # continue 00:04:51.087 00:39:03 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.087 00:39:03 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.087 00:39:03 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:51.087 00:39:03 -- setup/common.sh@32 -- # continue 00:04:51.087 00:39:03 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.087 00:39:03 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.087 00:39:03 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:51.087 00:39:03 -- setup/common.sh@32 -- # continue 00:04:51.087 00:39:03 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.087 00:39:03 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.087 00:39:03 -- 
setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:51.087 00:39:03 -- setup/common.sh@32 -- # continue 00:04:51.087 00:39:03 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.087 00:39:03 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.087 00:39:03 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:51.087 00:39:03 -- setup/common.sh@32 -- # continue 00:04:51.087 00:39:03 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.087 00:39:03 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.087 00:39:03 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:51.087 00:39:03 -- setup/common.sh@32 -- # continue 00:04:51.087 00:39:03 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.087 00:39:03 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.087 00:39:03 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:51.087 00:39:03 -- setup/common.sh@32 -- # continue 00:04:51.087 00:39:03 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.087 00:39:03 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.087 00:39:03 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:51.087 00:39:03 -- setup/common.sh@32 -- # continue 00:04:51.087 00:39:03 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.087 00:39:03 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.087 00:39:03 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:51.087 00:39:03 -- setup/common.sh@33 -- # echo 1024 00:04:51.087 00:39:03 -- setup/common.sh@33 -- # return 0 00:04:51.087 00:39:03 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:51.087 00:39:03 -- setup/hugepages.sh@112 -- # get_nodes 00:04:51.087 00:39:03 -- setup/hugepages.sh@27 -- # local node 00:04:51.087 00:39:03 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:51.087 00:39:03 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:04:51.087 00:39:03 -- setup/hugepages.sh@32 -- # no_nodes=1 00:04:51.087 00:39:03 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:51.087 00:39:03 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:51.087 00:39:03 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:51.087 00:39:03 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:51.087 00:39:03 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:51.087 00:39:03 -- setup/common.sh@18 -- # local node=0 00:04:51.087 00:39:03 -- setup/common.sh@19 -- # local var val 00:04:51.087 00:39:03 -- setup/common.sh@20 -- # local mem_f mem 00:04:51.087 00:39:03 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:51.087 00:39:03 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:51.087 00:39:03 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:51.087 00:39:03 -- setup/common.sh@28 -- # mapfile -t mem 00:04:51.087 00:39:03 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:51.087 00:39:03 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.087 00:39:03 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.087 00:39:03 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239120 kB' 'MemFree: 6525120 kB' 'MemUsed: 5714000 kB' 'SwapCached: 0 kB' 'Active: 497096 kB' 'Inactive: 2753184 kB' 'Active(anon): 127928 kB' 'Inactive(anon): 0 kB' 'Active(file): 369168 kB' 'Inactive(file): 2753184 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 132 kB' 
'Writeback: 0 kB' 'FilePages: 3132840 kB' 'Mapped: 50948 kB' 'AnonPages: 119092 kB' 'Shmem: 10488 kB' 'KernelStack: 6800 kB' 'PageTables: 4280 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 88228 kB' 'Slab: 190576 kB' 'SReclaimable: 88228 kB' 'SUnreclaim: 102348 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:04:51.087 00:39:03 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.087 00:39:03 -- setup/common.sh@32 -- # continue 00:04:51.087 00:39:03 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.087 00:39:03 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.087 00:39:03 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.087 00:39:03 -- setup/common.sh@32 -- # continue 00:04:51.087 00:39:03 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.087 00:39:03 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.087 00:39:03 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.087 00:39:03 -- setup/common.sh@32 -- # continue 00:04:51.087 00:39:03 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.087 00:39:03 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.087 00:39:03 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.087 00:39:03 -- setup/common.sh@32 -- # continue 00:04:51.087 00:39:03 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.087 00:39:03 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.087 00:39:03 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.087 00:39:03 -- setup/common.sh@32 -- # continue 00:04:51.087 00:39:03 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.087 00:39:03 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.087 00:39:03 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.087 00:39:03 -- setup/common.sh@32 -- # continue 00:04:51.087 00:39:03 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.087 00:39:03 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.087 00:39:03 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.087 00:39:03 -- setup/common.sh@32 -- # continue 00:04:51.087 00:39:03 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.087 00:39:03 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.087 00:39:03 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.087 00:39:03 -- setup/common.sh@32 -- # continue 00:04:51.087 00:39:03 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.088 00:39:03 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.088 00:39:03 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.088 00:39:03 -- setup/common.sh@32 -- # continue 00:04:51.088 00:39:03 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.088 00:39:03 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.088 00:39:03 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.088 00:39:03 -- setup/common.sh@32 -- # continue 00:04:51.088 00:39:03 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.088 00:39:03 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.088 00:39:03 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.088 00:39:03 -- setup/common.sh@32 -- # continue 00:04:51.088 00:39:03 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.088 
00:39:03 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.088 00:39:03 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.088 00:39:03 -- setup/common.sh@32 -- # continue 00:04:51.088 00:39:03 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.088 00:39:03 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.088 00:39:03 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.088 00:39:03 -- setup/common.sh@32 -- # continue 00:04:51.088 00:39:03 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.088 00:39:03 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.088 00:39:03 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.088 00:39:03 -- setup/common.sh@32 -- # continue 00:04:51.088 00:39:03 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.088 00:39:03 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.088 00:39:03 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.088 00:39:03 -- setup/common.sh@32 -- # continue 00:04:51.088 00:39:03 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.088 00:39:03 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.088 00:39:03 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.088 00:39:03 -- setup/common.sh@32 -- # continue 00:04:51.088 00:39:03 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.088 00:39:03 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.088 00:39:03 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.088 00:39:03 -- setup/common.sh@32 -- # continue 00:04:51.088 00:39:03 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.088 00:39:03 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.088 00:39:03 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.088 00:39:03 -- setup/common.sh@32 -- # continue 00:04:51.088 00:39:03 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.088 00:39:03 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.088 00:39:03 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.088 00:39:03 -- setup/common.sh@32 -- # continue 00:04:51.088 00:39:03 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.088 00:39:03 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.088 00:39:03 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.088 00:39:03 -- setup/common.sh@32 -- # continue 00:04:51.088 00:39:03 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.088 00:39:03 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.088 00:39:03 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.088 00:39:03 -- setup/common.sh@32 -- # continue 00:04:51.088 00:39:03 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.088 00:39:03 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.088 00:39:03 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.088 00:39:03 -- setup/common.sh@32 -- # continue 00:04:51.088 00:39:03 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.088 00:39:03 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.088 00:39:03 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.088 00:39:03 -- setup/common.sh@32 -- # continue 00:04:51.088 00:39:03 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.088 00:39:03 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.088 00:39:03 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.088 00:39:03 -- setup/common.sh@32 -- 
# continue 00:04:51.088 00:39:03 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.088 00:39:03 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.088 00:39:03 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.088 00:39:03 -- setup/common.sh@32 -- # continue 00:04:51.088 00:39:03 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.088 00:39:03 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.088 00:39:03 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.088 00:39:03 -- setup/common.sh@32 -- # continue 00:04:51.088 00:39:03 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.088 00:39:03 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.088 00:39:03 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.088 00:39:03 -- setup/common.sh@32 -- # continue 00:04:51.088 00:39:03 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.088 00:39:03 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.088 00:39:03 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.088 00:39:03 -- setup/common.sh@32 -- # continue 00:04:51.088 00:39:03 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.088 00:39:03 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.088 00:39:03 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.088 00:39:03 -- setup/common.sh@32 -- # continue 00:04:51.088 00:39:03 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.088 00:39:03 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.088 00:39:03 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.088 00:39:03 -- setup/common.sh@32 -- # continue 00:04:51.088 00:39:03 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.088 00:39:03 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.088 00:39:03 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.088 00:39:03 -- setup/common.sh@32 -- # continue 00:04:51.088 00:39:03 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.088 00:39:03 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.088 00:39:03 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.088 00:39:03 -- setup/common.sh@32 -- # continue 00:04:51.088 00:39:03 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.088 00:39:03 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.088 00:39:03 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.088 00:39:03 -- setup/common.sh@32 -- # continue 00:04:51.088 00:39:03 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.088 00:39:03 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.088 00:39:03 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.088 00:39:03 -- setup/common.sh@32 -- # continue 00:04:51.088 00:39:03 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.088 00:39:03 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.088 00:39:03 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.088 00:39:03 -- setup/common.sh@32 -- # continue 00:04:51.088 00:39:03 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.088 00:39:03 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.088 00:39:03 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.088 00:39:03 -- setup/common.sh@32 -- # continue 00:04:51.088 00:39:03 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.088 00:39:03 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.088 00:39:03 
-- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.088 00:39:03 -- setup/common.sh@33 -- # echo 0 00:04:51.088 00:39:03 -- setup/common.sh@33 -- # return 0 00:04:51.088 00:39:03 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:51.088 00:39:03 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:51.088 00:39:03 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:51.088 00:39:03 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:51.088 node0=1024 expecting 1024 00:04:51.088 00:39:03 -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:04:51.088 00:39:03 -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:04:51.088 00:04:51.088 real 0m1.072s 00:04:51.088 user 0m0.537s 00:04:51.088 sys 0m0.608s 00:04:51.088 00:39:03 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:04:51.088 00:39:03 -- common/autotest_common.sh@10 -- # set +x 00:04:51.088 ************************************ 00:04:51.088 END TEST no_shrink_alloc 00:04:51.088 ************************************ 00:04:51.088 00:39:03 -- setup/hugepages.sh@217 -- # clear_hp 00:04:51.088 00:39:03 -- setup/hugepages.sh@37 -- # local node hp 00:04:51.088 00:39:03 -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:04:51.088 00:39:03 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:51.088 00:39:03 -- setup/hugepages.sh@41 -- # echo 0 00:04:51.088 00:39:03 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:51.088 00:39:03 -- setup/hugepages.sh@41 -- # echo 0 00:04:51.088 00:39:03 -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:04:51.088 00:39:03 -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:04:51.088 00:04:51.088 real 0m5.010s 00:04:51.088 user 0m2.449s 00:04:51.088 sys 0m2.634s 00:04:51.088 00:39:03 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:04:51.088 00:39:03 -- common/autotest_common.sh@10 -- # set +x 00:04:51.088 ************************************ 00:04:51.088 END TEST hugepages 00:04:51.088 ************************************ 00:04:51.348 00:39:03 -- setup/test-setup.sh@14 -- # run_test driver /home/vagrant/spdk_repo/spdk/test/setup/driver.sh 00:04:51.348 00:39:03 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:51.348 00:39:03 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:51.348 00:39:03 -- common/autotest_common.sh@10 -- # set +x 00:04:51.348 ************************************ 00:04:51.348 START TEST driver 00:04:51.348 ************************************ 00:04:51.348 00:39:03 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/setup/driver.sh 00:04:51.348 * Looking for test storage... 
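The hugepages trace above ends with get_meminfo walking every meminfo field until it reaches HugePages_Total (echoing 1024) and then HugePages_Surp for node 0 (echoing 0), which is what the "node0=1024 expecting 1024" line confirms. For readers who only want the behaviour without the full xtrace, here is a hypothetical stand-alone sketch of that lookup; it mirrors the field names and the per-node meminfo path seen in the log, but it is not the actual setup/common.sh implementation, which uses a mapfile/read loop instead of awk.

#!/usr/bin/env bash
# Sketch only: fetch one field (e.g. HugePages_Total) from /proc/meminfo, or
# from the node-local meminfo file when a NUMA node number is supplied.
get_meminfo() {
    local field=$1 node=$2
    local src=/proc/meminfo
    # Per-node files prefix every line with "Node <n> "; strip it so parsing
    # stays uniform (e.g. "Node 0 HugePages_Total:  1024").
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        src=/sys/devices/system/node/node$node/meminfo
    fi
    sed -E 's/^Node [0-9]+ //' "$src" |
        awk -v f="$field" -F': *' '$1 == f { sub(/ kB$/, "", $2); print $2; exit }'
}

# Same check the test performs: 1024 pages expected on node 0, no surplus.
total=$(get_meminfo HugePages_Total 0)
surp=$(get_meminfo HugePages_Surp 0)
echo "node0=$total expecting 1024 (surplus: $surp)"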
00:04:51.348 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:04:51.348 00:39:03 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:04:51.348 00:39:03 -- common/autotest_common.sh@1690 -- # lcov --version 00:04:51.348 00:39:03 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:04:51.348 00:39:03 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:04:51.348 00:39:03 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:04:51.348 00:39:03 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:04:51.348 00:39:03 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:04:51.348 00:39:03 -- scripts/common.sh@335 -- # IFS=.-: 00:04:51.348 00:39:03 -- scripts/common.sh@335 -- # read -ra ver1 00:04:51.348 00:39:03 -- scripts/common.sh@336 -- # IFS=.-: 00:04:51.348 00:39:03 -- scripts/common.sh@336 -- # read -ra ver2 00:04:51.348 00:39:03 -- scripts/common.sh@337 -- # local 'op=<' 00:04:51.348 00:39:03 -- scripts/common.sh@339 -- # ver1_l=2 00:04:51.348 00:39:03 -- scripts/common.sh@340 -- # ver2_l=1 00:04:51.348 00:39:03 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:04:51.348 00:39:03 -- scripts/common.sh@343 -- # case "$op" in 00:04:51.348 00:39:03 -- scripts/common.sh@344 -- # : 1 00:04:51.348 00:39:03 -- scripts/common.sh@363 -- # (( v = 0 )) 00:04:51.348 00:39:03 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:51.348 00:39:03 -- scripts/common.sh@364 -- # decimal 1 00:04:51.348 00:39:03 -- scripts/common.sh@352 -- # local d=1 00:04:51.348 00:39:03 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:51.348 00:39:03 -- scripts/common.sh@354 -- # echo 1 00:04:51.348 00:39:03 -- scripts/common.sh@364 -- # ver1[v]=1 00:04:51.348 00:39:03 -- scripts/common.sh@365 -- # decimal 2 00:04:51.348 00:39:03 -- scripts/common.sh@352 -- # local d=2 00:04:51.348 00:39:03 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:51.348 00:39:03 -- scripts/common.sh@354 -- # echo 2 00:04:51.348 00:39:03 -- scripts/common.sh@365 -- # ver2[v]=2 00:04:51.348 00:39:03 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:04:51.348 00:39:03 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:04:51.348 00:39:03 -- scripts/common.sh@367 -- # return 0 00:04:51.348 00:39:03 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:51.348 00:39:03 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:04:51.348 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:51.348 --rc genhtml_branch_coverage=1 00:04:51.348 --rc genhtml_function_coverage=1 00:04:51.348 --rc genhtml_legend=1 00:04:51.348 --rc geninfo_all_blocks=1 00:04:51.348 --rc geninfo_unexecuted_blocks=1 00:04:51.348 00:04:51.348 ' 00:04:51.348 00:39:03 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:04:51.348 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:51.348 --rc genhtml_branch_coverage=1 00:04:51.348 --rc genhtml_function_coverage=1 00:04:51.348 --rc genhtml_legend=1 00:04:51.348 --rc geninfo_all_blocks=1 00:04:51.348 --rc geninfo_unexecuted_blocks=1 00:04:51.348 00:04:51.348 ' 00:04:51.348 00:39:03 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:04:51.348 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:51.348 --rc genhtml_branch_coverage=1 00:04:51.348 --rc genhtml_function_coverage=1 00:04:51.348 --rc genhtml_legend=1 00:04:51.348 --rc geninfo_all_blocks=1 00:04:51.348 --rc geninfo_unexecuted_blocks=1 00:04:51.348 00:04:51.348 ' 00:04:51.348 00:39:03 -- 
common/autotest_common.sh@1704 -- # LCOV='lcov 00:04:51.348 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:51.348 --rc genhtml_branch_coverage=1 00:04:51.348 --rc genhtml_function_coverage=1 00:04:51.348 --rc genhtml_legend=1 00:04:51.348 --rc geninfo_all_blocks=1 00:04:51.348 --rc geninfo_unexecuted_blocks=1 00:04:51.348 00:04:51.348 ' 00:04:51.348 00:39:03 -- setup/driver.sh@68 -- # setup reset 00:04:51.348 00:39:03 -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:51.348 00:39:03 -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:04:51.916 00:39:04 -- setup/driver.sh@69 -- # run_test guess_driver guess_driver 00:04:51.916 00:39:04 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:51.916 00:39:04 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:51.916 00:39:04 -- common/autotest_common.sh@10 -- # set +x 00:04:51.916 ************************************ 00:04:51.916 START TEST guess_driver 00:04:51.916 ************************************ 00:04:51.916 00:39:04 -- common/autotest_common.sh@1114 -- # guess_driver 00:04:51.916 00:39:04 -- setup/driver.sh@46 -- # local driver setup_driver marker 00:04:51.916 00:39:04 -- setup/driver.sh@47 -- # local fail=0 00:04:51.916 00:39:04 -- setup/driver.sh@49 -- # pick_driver 00:04:51.916 00:39:04 -- setup/driver.sh@36 -- # vfio 00:04:51.916 00:39:04 -- setup/driver.sh@21 -- # local iommu_grups 00:04:51.916 00:39:04 -- setup/driver.sh@22 -- # local unsafe_vfio 00:04:51.916 00:39:04 -- setup/driver.sh@24 -- # [[ -e /sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]] 00:04:51.916 00:39:04 -- setup/driver.sh@27 -- # iommu_groups=(/sys/kernel/iommu_groups/*) 00:04:51.916 00:39:04 -- setup/driver.sh@29 -- # (( 0 > 0 )) 00:04:51.916 00:39:04 -- setup/driver.sh@29 -- # [[ '' == Y ]] 00:04:51.916 00:39:04 -- setup/driver.sh@32 -- # return 1 00:04:51.916 00:39:04 -- setup/driver.sh@38 -- # uio 00:04:51.916 00:39:04 -- setup/driver.sh@17 -- # is_driver uio_pci_generic 00:04:51.916 00:39:04 -- setup/driver.sh@14 -- # mod uio_pci_generic 00:04:51.916 00:39:04 -- setup/driver.sh@12 -- # dep uio_pci_generic 00:04:51.916 00:39:04 -- setup/driver.sh@11 -- # modprobe --show-depends uio_pci_generic 00:04:51.916 00:39:04 -- setup/driver.sh@12 -- # [[ insmod /lib/modules/6.8.9-200.fc39.x86_64/kernel/drivers/uio/uio.ko.xz 00:04:51.916 insmod /lib/modules/6.8.9-200.fc39.x86_64/kernel/drivers/uio/uio_pci_generic.ko.xz == *\.\k\o* ]] 00:04:51.916 00:39:04 -- setup/driver.sh@39 -- # echo uio_pci_generic 00:04:51.916 Looking for driver=uio_pci_generic 00:04:51.916 00:39:04 -- setup/driver.sh@49 -- # driver=uio_pci_generic 00:04:51.916 00:39:04 -- setup/driver.sh@51 -- # [[ uio_pci_generic == \N\o\ \v\a\l\i\d\ \d\r\i\v\e\r\ \f\o\u\n\d ]] 00:04:51.916 00:39:04 -- setup/driver.sh@56 -- # echo 'Looking for driver=uio_pci_generic' 00:04:51.916 00:39:04 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:51.916 00:39:04 -- setup/driver.sh@45 -- # setup output config 00:04:51.916 00:39:04 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:51.916 00:39:04 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:04:52.851 00:39:05 -- setup/driver.sh@58 -- # [[ devices: == \-\> ]] 00:04:52.851 00:39:05 -- setup/driver.sh@58 -- # continue 00:04:52.851 00:39:05 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:52.851 00:39:05 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:52.851 00:39:05 -- setup/driver.sh@61 -- # [[ uio_pci_generic == 
uio_pci_generic ]] 00:04:52.851 00:39:05 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:52.851 00:39:05 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:52.851 00:39:05 -- setup/driver.sh@61 -- # [[ uio_pci_generic == uio_pci_generic ]] 00:04:52.851 00:39:05 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:52.851 00:39:05 -- setup/driver.sh@64 -- # (( fail == 0 )) 00:04:52.851 00:39:05 -- setup/driver.sh@65 -- # setup reset 00:04:52.851 00:39:05 -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:52.851 00:39:05 -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:04:53.418 00:04:53.418 real 0m1.477s 00:04:53.418 user 0m0.594s 00:04:53.418 sys 0m0.892s 00:04:53.418 00:39:05 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:04:53.418 00:39:05 -- common/autotest_common.sh@10 -- # set +x 00:04:53.418 ************************************ 00:04:53.418 END TEST guess_driver 00:04:53.418 ************************************ 00:04:53.418 ************************************ 00:04:53.418 END TEST driver 00:04:53.418 ************************************ 00:04:53.418 00:04:53.418 real 0m2.288s 00:04:53.418 user 0m0.931s 00:04:53.418 sys 0m1.429s 00:04:53.418 00:39:05 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:04:53.418 00:39:05 -- common/autotest_common.sh@10 -- # set +x 00:04:53.675 00:39:05 -- setup/test-setup.sh@15 -- # run_test devices /home/vagrant/spdk_repo/spdk/test/setup/devices.sh 00:04:53.675 00:39:05 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:53.675 00:39:05 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:53.675 00:39:05 -- common/autotest_common.sh@10 -- # set +x 00:04:53.675 ************************************ 00:04:53.675 START TEST devices 00:04:53.675 ************************************ 00:04:53.675 00:39:05 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/setup/devices.sh 00:04:53.675 * Looking for test storage... 00:04:53.675 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:04:53.675 00:39:06 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:04:53.675 00:39:06 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:04:53.675 00:39:06 -- common/autotest_common.sh@1690 -- # lcov --version 00:04:53.675 00:39:06 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:04:53.675 00:39:06 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:04:53.675 00:39:06 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:04:53.675 00:39:06 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:04:53.675 00:39:06 -- scripts/common.sh@335 -- # IFS=.-: 00:04:53.675 00:39:06 -- scripts/common.sh@335 -- # read -ra ver1 00:04:53.675 00:39:06 -- scripts/common.sh@336 -- # IFS=.-: 00:04:53.675 00:39:06 -- scripts/common.sh@336 -- # read -ra ver2 00:04:53.675 00:39:06 -- scripts/common.sh@337 -- # local 'op=<' 00:04:53.676 00:39:06 -- scripts/common.sh@339 -- # ver1_l=2 00:04:53.676 00:39:06 -- scripts/common.sh@340 -- # ver2_l=1 00:04:53.676 00:39:06 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:04:53.676 00:39:06 -- scripts/common.sh@343 -- # case "$op" in 00:04:53.676 00:39:06 -- scripts/common.sh@344 -- # : 1 00:04:53.676 00:39:06 -- scripts/common.sh@363 -- # (( v = 0 )) 00:04:53.676 00:39:06 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:53.676 00:39:06 -- scripts/common.sh@364 -- # decimal 1 00:04:53.676 00:39:06 -- scripts/common.sh@352 -- # local d=1 00:04:53.676 00:39:06 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:53.676 00:39:06 -- scripts/common.sh@354 -- # echo 1 00:04:53.676 00:39:06 -- scripts/common.sh@364 -- # ver1[v]=1 00:04:53.676 00:39:06 -- scripts/common.sh@365 -- # decimal 2 00:04:53.676 00:39:06 -- scripts/common.sh@352 -- # local d=2 00:04:53.676 00:39:06 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:53.676 00:39:06 -- scripts/common.sh@354 -- # echo 2 00:04:53.676 00:39:06 -- scripts/common.sh@365 -- # ver2[v]=2 00:04:53.676 00:39:06 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:04:53.676 00:39:06 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:04:53.676 00:39:06 -- scripts/common.sh@367 -- # return 0 00:04:53.676 00:39:06 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:53.676 00:39:06 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:04:53.676 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:53.676 --rc genhtml_branch_coverage=1 00:04:53.676 --rc genhtml_function_coverage=1 00:04:53.676 --rc genhtml_legend=1 00:04:53.676 --rc geninfo_all_blocks=1 00:04:53.676 --rc geninfo_unexecuted_blocks=1 00:04:53.676 00:04:53.676 ' 00:04:53.676 00:39:06 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:04:53.676 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:53.676 --rc genhtml_branch_coverage=1 00:04:53.676 --rc genhtml_function_coverage=1 00:04:53.676 --rc genhtml_legend=1 00:04:53.676 --rc geninfo_all_blocks=1 00:04:53.676 --rc geninfo_unexecuted_blocks=1 00:04:53.676 00:04:53.676 ' 00:04:53.676 00:39:06 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:04:53.676 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:53.676 --rc genhtml_branch_coverage=1 00:04:53.676 --rc genhtml_function_coverage=1 00:04:53.676 --rc genhtml_legend=1 00:04:53.676 --rc geninfo_all_blocks=1 00:04:53.676 --rc geninfo_unexecuted_blocks=1 00:04:53.676 00:04:53.676 ' 00:04:53.676 00:39:06 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:04:53.676 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:53.676 --rc genhtml_branch_coverage=1 00:04:53.676 --rc genhtml_function_coverage=1 00:04:53.676 --rc genhtml_legend=1 00:04:53.676 --rc geninfo_all_blocks=1 00:04:53.676 --rc geninfo_unexecuted_blocks=1 00:04:53.676 00:04:53.676 ' 00:04:53.676 00:39:06 -- setup/devices.sh@190 -- # trap cleanup EXIT 00:04:53.676 00:39:06 -- setup/devices.sh@192 -- # setup reset 00:04:53.676 00:39:06 -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:53.676 00:39:06 -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:04:54.608 00:39:06 -- setup/devices.sh@194 -- # get_zoned_devs 00:04:54.608 00:39:06 -- common/autotest_common.sh@1664 -- # zoned_devs=() 00:04:54.608 00:39:06 -- common/autotest_common.sh@1664 -- # local -gA zoned_devs 00:04:54.608 00:39:06 -- common/autotest_common.sh@1665 -- # local nvme bdf 00:04:54.608 00:39:06 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:04:54.608 00:39:06 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme0n1 00:04:54.608 00:39:06 -- common/autotest_common.sh@1657 -- # local device=nvme0n1 00:04:54.608 00:39:06 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:04:54.608 00:39:06 -- common/autotest_common.sh@1660 
-- # [[ none != none ]] 00:04:54.608 00:39:06 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:04:54.608 00:39:06 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme1n1 00:04:54.608 00:39:06 -- common/autotest_common.sh@1657 -- # local device=nvme1n1 00:04:54.608 00:39:06 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:04:54.608 00:39:06 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:04:54.608 00:39:06 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:04:54.608 00:39:06 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme1n2 00:04:54.608 00:39:06 -- common/autotest_common.sh@1657 -- # local device=nvme1n2 00:04:54.608 00:39:06 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme1n2/queue/zoned ]] 00:04:54.608 00:39:06 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:04:54.608 00:39:06 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:04:54.608 00:39:06 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme1n3 00:04:54.608 00:39:06 -- common/autotest_common.sh@1657 -- # local device=nvme1n3 00:04:54.608 00:39:06 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme1n3/queue/zoned ]] 00:04:54.608 00:39:06 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:04:54.608 00:39:06 -- setup/devices.sh@196 -- # blocks=() 00:04:54.608 00:39:06 -- setup/devices.sh@196 -- # declare -a blocks 00:04:54.608 00:39:06 -- setup/devices.sh@197 -- # blocks_to_pci=() 00:04:54.608 00:39:06 -- setup/devices.sh@197 -- # declare -A blocks_to_pci 00:04:54.608 00:39:06 -- setup/devices.sh@198 -- # min_disk_size=3221225472 00:04:54.608 00:39:06 -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:04:54.608 00:39:06 -- setup/devices.sh@201 -- # ctrl=nvme0n1 00:04:54.608 00:39:06 -- setup/devices.sh@201 -- # ctrl=nvme0 00:04:54.608 00:39:06 -- setup/devices.sh@202 -- # pci=0000:00:06.0 00:04:54.608 00:39:06 -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\0\6\.\0* ]] 00:04:54.608 00:39:06 -- setup/devices.sh@204 -- # block_in_use nvme0n1 00:04:54.608 00:39:06 -- scripts/common.sh@380 -- # local block=nvme0n1 pt 00:04:54.608 00:39:06 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n1 00:04:54.608 No valid GPT data, bailing 00:04:54.608 00:39:07 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:04:54.608 00:39:07 -- scripts/common.sh@393 -- # pt= 00:04:54.608 00:39:07 -- scripts/common.sh@394 -- # return 1 00:04:54.608 00:39:07 -- setup/devices.sh@204 -- # sec_size_to_bytes nvme0n1 00:04:54.609 00:39:07 -- setup/common.sh@76 -- # local dev=nvme0n1 00:04:54.609 00:39:07 -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:04:54.609 00:39:07 -- setup/common.sh@80 -- # echo 5368709120 00:04:54.609 00:39:07 -- setup/devices.sh@204 -- # (( 5368709120 >= min_disk_size )) 00:04:54.609 00:39:07 -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:04:54.609 00:39:07 -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:06.0 00:04:54.609 00:39:07 -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:04:54.609 00:39:07 -- setup/devices.sh@201 -- # ctrl=nvme1n1 00:04:54.609 00:39:07 -- setup/devices.sh@201 -- # ctrl=nvme1 00:04:54.609 00:39:07 -- setup/devices.sh@202 -- # pci=0000:00:07.0 00:04:54.609 00:39:07 -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\0\7\.\0* ]] 00:04:54.609 00:39:07 -- setup/devices.sh@204 -- # block_in_use nvme1n1 
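The devices test starts by filtering out zoned namespaces: for each /sys/block/nvme* entry the trace reads queue/zoned and only keeps devices that report "none". A minimal sketch of that filter, assuming the same sysfs layout (the traced get_zoned_devs helper additionally records the owning PCI address for each zoned device):

#!/usr/bin/env bash
# Sketch only: collect nvme block devices whose queue reports a zoned model,
# so the mount tests can skip them.
declare -A zoned_devs=()
for nvme in /sys/block/nvme*; do
    [[ -e $nvme ]] || continue            # no nvme devices at all
    dev=${nvme##*/}
    if [[ -e $nvme/queue/zoned && $(<"$nvme/queue/zoned") != none ]]; then
        zoned_devs[$dev]=1
        echo "skipping zoned device: /dev/$dev"
    fi
done
echo "found ${#zoned_devs[@]} zoned device(s)"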
00:04:54.609 00:39:07 -- scripts/common.sh@380 -- # local block=nvme1n1 pt 00:04:54.609 00:39:07 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n1 00:04:54.609 No valid GPT data, bailing 00:04:54.609 00:39:07 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:04:54.868 00:39:07 -- scripts/common.sh@393 -- # pt= 00:04:54.868 00:39:07 -- scripts/common.sh@394 -- # return 1 00:04:54.868 00:39:07 -- setup/devices.sh@204 -- # sec_size_to_bytes nvme1n1 00:04:54.868 00:39:07 -- setup/common.sh@76 -- # local dev=nvme1n1 00:04:54.868 00:39:07 -- setup/common.sh@78 -- # [[ -e /sys/block/nvme1n1 ]] 00:04:54.868 00:39:07 -- setup/common.sh@80 -- # echo 4294967296 00:04:54.868 00:39:07 -- setup/devices.sh@204 -- # (( 4294967296 >= min_disk_size )) 00:04:54.868 00:39:07 -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:04:54.868 00:39:07 -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:07.0 00:04:54.868 00:39:07 -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:04:54.868 00:39:07 -- setup/devices.sh@201 -- # ctrl=nvme1n2 00:04:54.868 00:39:07 -- setup/devices.sh@201 -- # ctrl=nvme1 00:04:54.868 00:39:07 -- setup/devices.sh@202 -- # pci=0000:00:07.0 00:04:54.868 00:39:07 -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\0\7\.\0* ]] 00:04:54.868 00:39:07 -- setup/devices.sh@204 -- # block_in_use nvme1n2 00:04:54.868 00:39:07 -- scripts/common.sh@380 -- # local block=nvme1n2 pt 00:04:54.868 00:39:07 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n2 00:04:54.868 No valid GPT data, bailing 00:04:54.868 00:39:07 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme1n2 00:04:54.868 00:39:07 -- scripts/common.sh@393 -- # pt= 00:04:54.868 00:39:07 -- scripts/common.sh@394 -- # return 1 00:04:54.868 00:39:07 -- setup/devices.sh@204 -- # sec_size_to_bytes nvme1n2 00:04:54.868 00:39:07 -- setup/common.sh@76 -- # local dev=nvme1n2 00:04:54.868 00:39:07 -- setup/common.sh@78 -- # [[ -e /sys/block/nvme1n2 ]] 00:04:54.868 00:39:07 -- setup/common.sh@80 -- # echo 4294967296 00:04:54.868 00:39:07 -- setup/devices.sh@204 -- # (( 4294967296 >= min_disk_size )) 00:04:54.868 00:39:07 -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:04:54.868 00:39:07 -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:07.0 00:04:54.868 00:39:07 -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:04:54.868 00:39:07 -- setup/devices.sh@201 -- # ctrl=nvme1n3 00:04:54.868 00:39:07 -- setup/devices.sh@201 -- # ctrl=nvme1 00:04:54.868 00:39:07 -- setup/devices.sh@202 -- # pci=0000:00:07.0 00:04:54.868 00:39:07 -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\0\7\.\0* ]] 00:04:54.868 00:39:07 -- setup/devices.sh@204 -- # block_in_use nvme1n3 00:04:54.868 00:39:07 -- scripts/common.sh@380 -- # local block=nvme1n3 pt 00:04:54.868 00:39:07 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n3 00:04:54.868 No valid GPT data, bailing 00:04:54.868 00:39:07 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme1n3 00:04:54.868 00:39:07 -- scripts/common.sh@393 -- # pt= 00:04:54.868 00:39:07 -- scripts/common.sh@394 -- # return 1 00:04:54.868 00:39:07 -- setup/devices.sh@204 -- # sec_size_to_bytes nvme1n3 00:04:54.868 00:39:07 -- setup/common.sh@76 -- # local dev=nvme1n3 00:04:54.868 00:39:07 -- setup/common.sh@78 -- # [[ -e /sys/block/nvme1n3 ]] 00:04:54.868 00:39:07 -- setup/common.sh@80 -- # echo 4294967296 
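After the GPT probe reports "No valid GPT data, bailing", each candidate disk is only admitted if it meets min_disk_size (3221225472 bytes, i.e. 3 GiB). The sketch below reproduces that gate; deriving the byte size from the 512-byte sector count in sysfs is an assumption about how the traced sec_size_to_bytes helper arrives at values such as 5368709120 and 4294967296.

#!/usr/bin/env bash
# Sketch only: size gate for candidate test disks.
min_disk_size=$((3 * 1024 * 1024 * 1024))   # 3 GiB, as in the trace

disk_bytes() {
    local dev=$1
    echo $(( $(<"/sys/block/$dev/size") * 512 ))   # sector count * 512 B
}

for dev in nvme0n1 nvme1n1; do
    [[ -e /sys/block/$dev ]] || continue
    size=$(disk_bytes "$dev")
    if (( size >= min_disk_size )); then
        echo "$dev: $size bytes, usable"
    else
        echo "$dev: $size bytes, too small"
    fi
done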
00:04:54.868 00:39:07 -- setup/devices.sh@204 -- # (( 4294967296 >= min_disk_size )) 00:04:54.868 00:39:07 -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:04:54.868 00:39:07 -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:07.0 00:04:54.868 00:39:07 -- setup/devices.sh@209 -- # (( 4 > 0 )) 00:04:54.868 00:39:07 -- setup/devices.sh@211 -- # declare -r test_disk=nvme0n1 00:04:54.868 00:39:07 -- setup/devices.sh@213 -- # run_test nvme_mount nvme_mount 00:04:54.868 00:39:07 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:54.868 00:39:07 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:54.868 00:39:07 -- common/autotest_common.sh@10 -- # set +x 00:04:54.868 ************************************ 00:04:54.868 START TEST nvme_mount 00:04:54.868 ************************************ 00:04:54.868 00:39:07 -- common/autotest_common.sh@1114 -- # nvme_mount 00:04:54.868 00:39:07 -- setup/devices.sh@95 -- # nvme_disk=nvme0n1 00:04:54.868 00:39:07 -- setup/devices.sh@96 -- # nvme_disk_p=nvme0n1p1 00:04:54.868 00:39:07 -- setup/devices.sh@97 -- # nvme_mount=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:54.868 00:39:07 -- setup/devices.sh@98 -- # nvme_dummy_test_file=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:04:54.868 00:39:07 -- setup/devices.sh@101 -- # partition_drive nvme0n1 1 00:04:54.868 00:39:07 -- setup/common.sh@39 -- # local disk=nvme0n1 00:04:54.868 00:39:07 -- setup/common.sh@40 -- # local part_no=1 00:04:54.868 00:39:07 -- setup/common.sh@41 -- # local size=1073741824 00:04:54.868 00:39:07 -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:04:54.868 00:39:07 -- setup/common.sh@44 -- # parts=() 00:04:54.868 00:39:07 -- setup/common.sh@44 -- # local parts 00:04:54.868 00:39:07 -- setup/common.sh@46 -- # (( part = 1 )) 00:04:54.868 00:39:07 -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:54.868 00:39:07 -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:04:54.868 00:39:07 -- setup/common.sh@46 -- # (( part++ )) 00:04:54.868 00:39:07 -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:54.868 00:39:07 -- setup/common.sh@51 -- # (( size /= 4096 )) 00:04:54.868 00:39:07 -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:04:54.868 00:39:07 -- setup/common.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 00:04:56.247 Creating new GPT entries in memory. 00:04:56.247 GPT data structures destroyed! You may now partition the disk using fdisk or 00:04:56.247 other utilities. 00:04:56.247 00:39:08 -- setup/common.sh@57 -- # (( part = 1 )) 00:04:56.247 00:39:08 -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:56.247 00:39:08 -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:04:56.247 00:39:08 -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:04:56.247 00:39:08 -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:264191 00:04:57.217 Creating new GPT entries in memory. 00:04:57.217 The operation has completed successfully. 
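The "operation has completed successfully" messages come from sgdisk: the nvme_mount test wipes the disk, creates one partition spanning sectors 2048-264191, formats it with mkfs.ext4 -qF and mounts it under test/setup/nvme_mount, where a dummy test_nvme file is later checked. A condensed, destructive sketch of that flow, using the same paths and flags as the trace; run it only against a disposable test disk, since --zap-all destroys all existing data.

#!/usr/bin/env bash
# Sketch only: wipe, partition, format and mount the test disk.
set -euo pipefail

disk=/dev/nvme0n1
part=${disk}p1
mnt=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount

sgdisk "$disk" --zap-all              # destroy existing GPT/MBR structures
sgdisk "$disk" --new=1:2048:264191    # single partition, sectors 2048-264191 as in the trace
mkfs.ext4 -qF "$part"                 # quiet, force: same flags as the log
mkdir -p "$mnt"
mount "$part" "$mnt"
touch "$mnt/test_nvme"                # dummy file the verify step looks for

The cleanup path seen later in the log is the mirror image: umount the directory, then wipefs --all the partition and the whole disk.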
00:04:57.217 00:39:09 -- setup/common.sh@57 -- # (( part++ )) 00:04:57.217 00:39:09 -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:57.217 00:39:09 -- setup/common.sh@62 -- # wait 65859 00:04:57.217 00:39:09 -- setup/devices.sh@102 -- # mkfs /dev/nvme0n1p1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:57.217 00:39:09 -- setup/common.sh@66 -- # local dev=/dev/nvme0n1p1 mount=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount size= 00:04:57.217 00:39:09 -- setup/common.sh@68 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:57.217 00:39:09 -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1p1 ]] 00:04:57.217 00:39:09 -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1p1 00:04:57.217 00:39:09 -- setup/common.sh@72 -- # mount /dev/nvme0n1p1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:57.217 00:39:09 -- setup/devices.sh@105 -- # verify 0000:00:06.0 nvme0n1:nvme0n1p1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:04:57.217 00:39:09 -- setup/devices.sh@48 -- # local dev=0000:00:06.0 00:04:57.217 00:39:09 -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1p1 00:04:57.217 00:39:09 -- setup/devices.sh@50 -- # local mount_point=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:57.217 00:39:09 -- setup/devices.sh@51 -- # local test_file=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:04:57.217 00:39:09 -- setup/devices.sh@53 -- # local found=0 00:04:57.217 00:39:09 -- setup/devices.sh@55 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:57.217 00:39:09 -- setup/devices.sh@56 -- # : 00:04:57.217 00:39:09 -- setup/devices.sh@59 -- # local pci status 00:04:57.217 00:39:09 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:57.217 00:39:09 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:06.0 00:04:57.217 00:39:09 -- setup/devices.sh@47 -- # setup output config 00:04:57.217 00:39:09 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:57.217 00:39:09 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:04:57.218 00:39:09 -- setup/devices.sh@62 -- # [[ 0000:00:06.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:04:57.218 00:39:09 -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1p1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1\p\1* ]] 00:04:57.218 00:39:09 -- setup/devices.sh@63 -- # found=1 00:04:57.218 00:39:09 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:57.218 00:39:09 -- setup/devices.sh@62 -- # [[ 0000:00:07.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:04:57.218 00:39:09 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:57.786 00:39:09 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:04:57.786 00:39:09 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:57.786 00:39:10 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:04:57.786 00:39:10 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:57.786 00:39:10 -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:57.786 00:39:10 -- setup/devices.sh@68 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount ]] 00:04:57.786 00:39:10 -- setup/devices.sh@71 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:57.786 00:39:10 -- setup/devices.sh@73 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:57.786 00:39:10 -- setup/devices.sh@74 -- # rm 
/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:04:57.786 00:39:10 -- setup/devices.sh@110 -- # cleanup_nvme 00:04:57.786 00:39:10 -- setup/devices.sh@20 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:57.786 00:39:10 -- setup/devices.sh@21 -- # umount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:57.786 00:39:10 -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:57.786 00:39:10 -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:04:57.786 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:04:57.786 00:39:10 -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:04:57.786 00:39:10 -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:04:58.045 /dev/nvme0n1: 8 bytes were erased at offset 0x00001000 (gpt): 45 46 49 20 50 41 52 54 00:04:58.045 /dev/nvme0n1: 8 bytes were erased at offset 0x13ffff000 (gpt): 45 46 49 20 50 41 52 54 00:04:58.045 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:04:58.045 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:04:58.045 00:39:10 -- setup/devices.sh@113 -- # mkfs /dev/nvme0n1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 1024M 00:04:58.045 00:39:10 -- setup/common.sh@66 -- # local dev=/dev/nvme0n1 mount=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount size=1024M 00:04:58.045 00:39:10 -- setup/common.sh@68 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:58.045 00:39:10 -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1 ]] 00:04:58.045 00:39:10 -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1 1024M 00:04:58.045 00:39:10 -- setup/common.sh@72 -- # mount /dev/nvme0n1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:58.045 00:39:10 -- setup/devices.sh@116 -- # verify 0000:00:06.0 nvme0n1:nvme0n1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:04:58.045 00:39:10 -- setup/devices.sh@48 -- # local dev=0000:00:06.0 00:04:58.045 00:39:10 -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1 00:04:58.045 00:39:10 -- setup/devices.sh@50 -- # local mount_point=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:58.045 00:39:10 -- setup/devices.sh@51 -- # local test_file=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:04:58.045 00:39:10 -- setup/devices.sh@53 -- # local found=0 00:04:58.045 00:39:10 -- setup/devices.sh@55 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:58.045 00:39:10 -- setup/devices.sh@56 -- # : 00:04:58.045 00:39:10 -- setup/devices.sh@59 -- # local pci status 00:04:58.045 00:39:10 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:58.045 00:39:10 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:06.0 00:04:58.045 00:39:10 -- setup/devices.sh@47 -- # setup output config 00:04:58.045 00:39:10 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:58.045 00:39:10 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:04:58.304 00:39:10 -- setup/devices.sh@62 -- # [[ 0000:00:06.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:04:58.304 00:39:10 -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1* ]] 00:04:58.304 00:39:10 -- setup/devices.sh@63 -- # found=1 00:04:58.304 00:39:10 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:58.304 00:39:10 -- setup/devices.sh@62 -- # [[ 0000:00:07.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:04:58.304 
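Once the file system is mounted, verify re-runs setup.sh in config mode with PCI_ALLOWED limited to 0000:00:06.0 and looks for the "Active devices: mount@nvme0n1:..., so not binding PCI dev" marker, which proves the mounted disk was left on its kernel driver. A rough sketch of that check follows; the exact column layout of the config output is an assumption inferred from the "read -r pci _ _ status" pattern in the trace.

#!/usr/bin/env bash
# Sketch only: confirm the device under test shows up as an active (mounted)
# device in the setup script's config output, i.e. it was not rebound.
verify_mounted_dev() {
    local allowed=$1 want=$2 pci status found=0
    while read -r pci _ _ status; do
        [[ $pci == "$allowed" && $status == *"Active devices: "*"$want"* ]] && found=1
    done < <(PCI_ALLOWED=$allowed /home/vagrant/spdk_repo/spdk/scripts/setup.sh config)
    (( found == 1 ))
}

# Example from the log: nvme0n1 at 0000:00:06.0, mounted as nvme0n1p1.
verify_mounted_dev 0000:00:06.0 "nvme0n1:nvme0n1p1" && echo "device still mounted, not rebound"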
00:39:10 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:58.564 00:39:11 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:04:58.564 00:39:11 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:58.824 00:39:11 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:04:58.824 00:39:11 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:58.824 00:39:11 -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:58.824 00:39:11 -- setup/devices.sh@68 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount ]] 00:04:58.824 00:39:11 -- setup/devices.sh@71 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:58.824 00:39:11 -- setup/devices.sh@73 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:58.824 00:39:11 -- setup/devices.sh@74 -- # rm /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:04:58.824 00:39:11 -- setup/devices.sh@123 -- # umount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:58.824 00:39:11 -- setup/devices.sh@125 -- # verify 0000:00:06.0 data@nvme0n1 '' '' 00:04:58.824 00:39:11 -- setup/devices.sh@48 -- # local dev=0000:00:06.0 00:04:58.824 00:39:11 -- setup/devices.sh@49 -- # local mounts=data@nvme0n1 00:04:58.824 00:39:11 -- setup/devices.sh@50 -- # local mount_point= 00:04:58.824 00:39:11 -- setup/devices.sh@51 -- # local test_file= 00:04:58.824 00:39:11 -- setup/devices.sh@53 -- # local found=0 00:04:58.824 00:39:11 -- setup/devices.sh@55 -- # [[ -n '' ]] 00:04:58.824 00:39:11 -- setup/devices.sh@59 -- # local pci status 00:04:58.824 00:39:11 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:58.824 00:39:11 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:06.0 00:04:58.824 00:39:11 -- setup/devices.sh@47 -- # setup output config 00:04:58.824 00:39:11 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:58.824 00:39:11 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:04:59.084 00:39:11 -- setup/devices.sh@62 -- # [[ 0000:00:06.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:04:59.084 00:39:11 -- setup/devices.sh@62 -- # [[ Active devices: data@nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\d\a\t\a\@\n\v\m\e\0\n\1* ]] 00:04:59.084 00:39:11 -- setup/devices.sh@63 -- # found=1 00:04:59.084 00:39:11 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:59.084 00:39:11 -- setup/devices.sh@62 -- # [[ 0000:00:07.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:04:59.084 00:39:11 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:59.342 00:39:11 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:04:59.342 00:39:11 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:59.600 00:39:11 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:04:59.600 00:39:11 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:59.600 00:39:11 -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:59.600 00:39:11 -- setup/devices.sh@68 -- # [[ -n '' ]] 00:04:59.600 00:39:11 -- setup/devices.sh@68 -- # return 0 00:04:59.600 00:39:11 -- setup/devices.sh@128 -- # cleanup_nvme 00:04:59.600 00:39:11 -- setup/devices.sh@20 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:59.600 00:39:12 -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:59.600 00:39:12 -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:04:59.600 00:39:12 -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:04:59.600 /dev/nvme0n1: 2 bytes were erased at offset 
0x00000438 (ext4): 53 ef 00:04:59.600 00:04:59.600 real 0m4.706s 00:04:59.600 user 0m1.082s 00:04:59.600 sys 0m1.294s 00:04:59.600 00:39:12 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:04:59.600 00:39:12 -- common/autotest_common.sh@10 -- # set +x 00:04:59.600 ************************************ 00:04:59.600 END TEST nvme_mount 00:04:59.600 ************************************ 00:04:59.600 00:39:12 -- setup/devices.sh@214 -- # run_test dm_mount dm_mount 00:04:59.600 00:39:12 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:59.600 00:39:12 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:59.600 00:39:12 -- common/autotest_common.sh@10 -- # set +x 00:04:59.600 ************************************ 00:04:59.600 START TEST dm_mount 00:04:59.600 ************************************ 00:04:59.600 00:39:12 -- common/autotest_common.sh@1114 -- # dm_mount 00:04:59.600 00:39:12 -- setup/devices.sh@144 -- # pv=nvme0n1 00:04:59.600 00:39:12 -- setup/devices.sh@145 -- # pv0=nvme0n1p1 00:04:59.600 00:39:12 -- setup/devices.sh@146 -- # pv1=nvme0n1p2 00:04:59.600 00:39:12 -- setup/devices.sh@148 -- # partition_drive nvme0n1 00:04:59.600 00:39:12 -- setup/common.sh@39 -- # local disk=nvme0n1 00:04:59.600 00:39:12 -- setup/common.sh@40 -- # local part_no=2 00:04:59.600 00:39:12 -- setup/common.sh@41 -- # local size=1073741824 00:04:59.600 00:39:12 -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:04:59.600 00:39:12 -- setup/common.sh@44 -- # parts=() 00:04:59.600 00:39:12 -- setup/common.sh@44 -- # local parts 00:04:59.600 00:39:12 -- setup/common.sh@46 -- # (( part = 1 )) 00:04:59.600 00:39:12 -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:59.600 00:39:12 -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:04:59.600 00:39:12 -- setup/common.sh@46 -- # (( part++ )) 00:04:59.600 00:39:12 -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:59.600 00:39:12 -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:04:59.600 00:39:12 -- setup/common.sh@46 -- # (( part++ )) 00:04:59.601 00:39:12 -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:59.601 00:39:12 -- setup/common.sh@51 -- # (( size /= 4096 )) 00:04:59.601 00:39:12 -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:04:59.601 00:39:12 -- setup/common.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 nvme0n1p2 00:05:00.975 Creating new GPT entries in memory. 00:05:00.975 GPT data structures destroyed! You may now partition the disk using fdisk or 00:05:00.975 other utilities. 00:05:00.975 00:39:13 -- setup/common.sh@57 -- # (( part = 1 )) 00:05:00.975 00:39:13 -- setup/common.sh@57 -- # (( part <= part_no )) 00:05:00.975 00:39:13 -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:05:00.975 00:39:13 -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:05:00.975 00:39:13 -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:264191 00:05:01.911 Creating new GPT entries in memory. 00:05:01.911 The operation has completed successfully. 00:05:01.911 00:39:14 -- setup/common.sh@57 -- # (( part++ )) 00:05:01.911 00:39:14 -- setup/common.sh@57 -- # (( part <= part_no )) 00:05:01.911 00:39:14 -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 
2048 : part_end + 1 )) 00:05:01.911 00:39:14 -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:05:01.911 00:39:14 -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=2:264192:526335 00:05:02.845 The operation has completed successfully. 00:05:02.845 00:39:15 -- setup/common.sh@57 -- # (( part++ )) 00:05:02.845 00:39:15 -- setup/common.sh@57 -- # (( part <= part_no )) 00:05:02.845 00:39:15 -- setup/common.sh@62 -- # wait 66318 00:05:02.845 00:39:15 -- setup/devices.sh@150 -- # dm_name=nvme_dm_test 00:05:02.845 00:39:15 -- setup/devices.sh@151 -- # dm_mount=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:05:02.845 00:39:15 -- setup/devices.sh@152 -- # dm_dummy_test_file=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:05:02.845 00:39:15 -- setup/devices.sh@155 -- # dmsetup create nvme_dm_test 00:05:02.845 00:39:15 -- setup/devices.sh@160 -- # for t in {1..5} 00:05:02.845 00:39:15 -- setup/devices.sh@161 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:05:02.845 00:39:15 -- setup/devices.sh@161 -- # break 00:05:02.845 00:39:15 -- setup/devices.sh@164 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:05:02.845 00:39:15 -- setup/devices.sh@165 -- # readlink -f /dev/mapper/nvme_dm_test 00:05:02.845 00:39:15 -- setup/devices.sh@165 -- # dm=/dev/dm-0 00:05:02.845 00:39:15 -- setup/devices.sh@166 -- # dm=dm-0 00:05:02.845 00:39:15 -- setup/devices.sh@168 -- # [[ -e /sys/class/block/nvme0n1p1/holders/dm-0 ]] 00:05:02.845 00:39:15 -- setup/devices.sh@169 -- # [[ -e /sys/class/block/nvme0n1p2/holders/dm-0 ]] 00:05:02.845 00:39:15 -- setup/devices.sh@171 -- # mkfs /dev/mapper/nvme_dm_test /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:05:02.845 00:39:15 -- setup/common.sh@66 -- # local dev=/dev/mapper/nvme_dm_test mount=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount size= 00:05:02.845 00:39:15 -- setup/common.sh@68 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:05:02.845 00:39:15 -- setup/common.sh@70 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:05:02.845 00:39:15 -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/mapper/nvme_dm_test 00:05:02.845 00:39:15 -- setup/common.sh@72 -- # mount /dev/mapper/nvme_dm_test /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:05:02.845 00:39:15 -- setup/devices.sh@174 -- # verify 0000:00:06.0 nvme0n1:nvme_dm_test /home/vagrant/spdk_repo/spdk/test/setup/dm_mount /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:05:02.845 00:39:15 -- setup/devices.sh@48 -- # local dev=0000:00:06.0 00:05:02.845 00:39:15 -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme_dm_test 00:05:02.845 00:39:15 -- setup/devices.sh@50 -- # local mount_point=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:05:02.845 00:39:15 -- setup/devices.sh@51 -- # local test_file=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:05:02.845 00:39:15 -- setup/devices.sh@53 -- # local found=0 00:05:02.845 00:39:15 -- setup/devices.sh@55 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm ]] 00:05:02.845 00:39:15 -- setup/devices.sh@56 -- # : 00:05:02.845 00:39:15 -- setup/devices.sh@59 -- # local pci status 00:05:02.845 00:39:15 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:02.845 00:39:15 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:06.0 00:05:02.845 00:39:15 -- setup/devices.sh@47 -- # setup output config 00:05:02.845 00:39:15 -- setup/common.sh@9 -- # [[ output == output ]] 00:05:02.845 00:39:15 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:05:03.103 00:39:15 -- 
setup/devices.sh@62 -- # [[ 0000:00:06.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:05:03.103 00:39:15 -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0,mount@nvme0n1:nvme_dm_test, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\_\d\m\_\t\e\s\t* ]] 00:05:03.103 00:39:15 -- setup/devices.sh@63 -- # found=1 00:05:03.103 00:39:15 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:03.103 00:39:15 -- setup/devices.sh@62 -- # [[ 0000:00:07.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:05:03.103 00:39:15 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:03.361 00:39:15 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:05:03.361 00:39:15 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:03.620 00:39:15 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:05:03.620 00:39:15 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:03.620 00:39:15 -- setup/devices.sh@66 -- # (( found == 1 )) 00:05:03.620 00:39:15 -- setup/devices.sh@68 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/dm_mount ]] 00:05:03.620 00:39:15 -- setup/devices.sh@71 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:05:03.620 00:39:15 -- setup/devices.sh@73 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm ]] 00:05:03.620 00:39:15 -- setup/devices.sh@74 -- # rm /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:05:03.620 00:39:15 -- setup/devices.sh@182 -- # umount /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:05:03.620 00:39:15 -- setup/devices.sh@184 -- # verify 0000:00:06.0 holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 '' '' 00:05:03.620 00:39:15 -- setup/devices.sh@48 -- # local dev=0000:00:06.0 00:05:03.620 00:39:15 -- setup/devices.sh@49 -- # local mounts=holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 00:05:03.620 00:39:15 -- setup/devices.sh@50 -- # local mount_point= 00:05:03.620 00:39:15 -- setup/devices.sh@51 -- # local test_file= 00:05:03.620 00:39:15 -- setup/devices.sh@53 -- # local found=0 00:05:03.620 00:39:15 -- setup/devices.sh@55 -- # [[ -n '' ]] 00:05:03.620 00:39:15 -- setup/devices.sh@59 -- # local pci status 00:05:03.620 00:39:15 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:03.620 00:39:15 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:06.0 00:05:03.620 00:39:16 -- setup/devices.sh@47 -- # setup output config 00:05:03.620 00:39:16 -- setup/common.sh@9 -- # [[ output == output ]] 00:05:03.620 00:39:16 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:05:03.878 00:39:16 -- setup/devices.sh@62 -- # [[ 0000:00:06.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:05:03.878 00:39:16 -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\1\:\d\m\-\0\,\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\2\:\d\m\-\0* ]] 00:05:03.878 00:39:16 -- setup/devices.sh@63 -- # found=1 00:05:03.878 00:39:16 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:03.878 00:39:16 -- setup/devices.sh@62 -- # [[ 0000:00:07.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:05:03.878 00:39:16 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:04.136 00:39:16 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:05:04.136 00:39:16 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:04.136 00:39:16 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:05:04.136 00:39:16 
-- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:04.395 00:39:16 -- setup/devices.sh@66 -- # (( found == 1 )) 00:05:04.395 00:39:16 -- setup/devices.sh@68 -- # [[ -n '' ]] 00:05:04.395 00:39:16 -- setup/devices.sh@68 -- # return 0 00:05:04.395 00:39:16 -- setup/devices.sh@187 -- # cleanup_dm 00:05:04.395 00:39:16 -- setup/devices.sh@33 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:05:04.395 00:39:16 -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:05:04.395 00:39:16 -- setup/devices.sh@37 -- # dmsetup remove --force nvme_dm_test 00:05:04.395 00:39:16 -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:05:04.395 00:39:16 -- setup/devices.sh@40 -- # wipefs --all /dev/nvme0n1p1 00:05:04.395 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:05:04.395 00:39:16 -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:05:04.395 00:39:16 -- setup/devices.sh@43 -- # wipefs --all /dev/nvme0n1p2 00:05:04.395 00:05:04.395 real 0m4.694s 00:05:04.395 user 0m0.729s 00:05:04.395 sys 0m0.893s 00:05:04.395 00:39:16 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:04.395 00:39:16 -- common/autotest_common.sh@10 -- # set +x 00:05:04.395 ************************************ 00:05:04.395 END TEST dm_mount 00:05:04.395 ************************************ 00:05:04.395 00:39:16 -- setup/devices.sh@1 -- # cleanup 00:05:04.395 00:39:16 -- setup/devices.sh@11 -- # cleanup_nvme 00:05:04.395 00:39:16 -- setup/devices.sh@20 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:04.395 00:39:16 -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:05:04.395 00:39:16 -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:05:04.395 00:39:16 -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:05:04.395 00:39:16 -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:05:04.654 /dev/nvme0n1: 8 bytes were erased at offset 0x00001000 (gpt): 45 46 49 20 50 41 52 54 00:05:04.654 /dev/nvme0n1: 8 bytes were erased at offset 0x13ffff000 (gpt): 45 46 49 20 50 41 52 54 00:05:04.654 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:05:04.654 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:05:04.654 00:39:17 -- setup/devices.sh@12 -- # cleanup_dm 00:05:04.654 00:39:17 -- setup/devices.sh@33 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:05:04.654 00:39:17 -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:05:04.654 00:39:17 -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:05:04.654 00:39:17 -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:05:04.654 00:39:17 -- setup/devices.sh@14 -- # [[ -b /dev/nvme0n1 ]] 00:05:04.654 00:39:17 -- setup/devices.sh@15 -- # wipefs --all /dev/nvme0n1 00:05:04.654 00:05:04.654 real 0m11.148s 00:05:04.654 user 0m2.590s 00:05:04.654 sys 0m2.859s 00:05:04.654 ************************************ 00:05:04.654 END TEST devices 00:05:04.654 ************************************ 00:05:04.654 00:39:17 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:04.654 00:39:17 -- common/autotest_common.sh@10 -- # set +x 00:05:04.654 00:05:04.654 real 0m23.360s 00:05:04.654 user 0m8.149s 00:05:04.654 sys 0m9.656s 00:05:04.654 00:39:17 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:04.654 ************************************ 00:05:04.654 00:39:17 -- common/autotest_common.sh@10 -- # set +x 00:05:04.654 END TEST setup.sh 00:05:04.654 ************************************ 00:05:04.912 00:39:17 -- 
spdk/autotest.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:05:04.912 Hugepages 00:05:04.912 node hugesize free / total 00:05:04.912 node0 1048576kB 0 / 0 00:05:04.912 node0 2048kB 2048 / 2048 00:05:04.912 00:05:04.912 Type BDF Vendor Device NUMA Driver Device Block devices 00:05:05.171 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:05:05.171 NVMe 0000:00:06.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:05:05.171 NVMe 0000:00:07.0 1b36 0010 unknown nvme nvme1 nvme1n1 nvme1n2 nvme1n3 00:05:05.171 00:39:17 -- spdk/autotest.sh@128 -- # uname -s 00:05:05.171 00:39:17 -- spdk/autotest.sh@128 -- # [[ Linux == Linux ]] 00:05:05.171 00:39:17 -- spdk/autotest.sh@130 -- # nvme_namespace_revert 00:05:05.171 00:39:17 -- common/autotest_common.sh@1526 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:05:06.107 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:06.107 0000:00:06.0 (1b36 0010): nvme -> uio_pci_generic 00:05:06.107 0000:00:07.0 (1b36 0010): nvme -> uio_pci_generic 00:05:06.107 00:39:18 -- common/autotest_common.sh@1527 -- # sleep 1 00:05:07.045 00:39:19 -- common/autotest_common.sh@1528 -- # bdfs=() 00:05:07.045 00:39:19 -- common/autotest_common.sh@1528 -- # local bdfs 00:05:07.045 00:39:19 -- common/autotest_common.sh@1529 -- # bdfs=($(get_nvme_bdfs)) 00:05:07.045 00:39:19 -- common/autotest_common.sh@1529 -- # get_nvme_bdfs 00:05:07.045 00:39:19 -- common/autotest_common.sh@1508 -- # bdfs=() 00:05:07.045 00:39:19 -- common/autotest_common.sh@1508 -- # local bdfs 00:05:07.045 00:39:19 -- common/autotest_common.sh@1509 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:05:07.304 00:39:19 -- common/autotest_common.sh@1509 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:05:07.304 00:39:19 -- common/autotest_common.sh@1509 -- # jq -r '.config[].params.traddr' 00:05:07.304 00:39:19 -- common/autotest_common.sh@1510 -- # (( 2 == 0 )) 00:05:07.304 00:39:19 -- common/autotest_common.sh@1514 -- # printf '%s\n' 0000:00:06.0 0000:00:07.0 00:05:07.304 00:39:19 -- common/autotest_common.sh@1531 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:05:07.563 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:07.563 Waiting for block devices as requested 00:05:07.563 0000:00:06.0 (1b36 0010): uio_pci_generic -> nvme 00:05:07.822 0000:00:07.0 (1b36 0010): uio_pci_generic -> nvme 00:05:07.822 00:39:20 -- common/autotest_common.sh@1533 -- # for bdf in "${bdfs[@]}" 00:05:07.822 00:39:20 -- common/autotest_common.sh@1534 -- # get_nvme_ctrlr_from_bdf 0000:00:06.0 00:05:07.822 00:39:20 -- common/autotest_common.sh@1497 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 00:05:07.822 00:39:20 -- common/autotest_common.sh@1497 -- # grep 0000:00:06.0/nvme/nvme 00:05:07.822 00:39:20 -- common/autotest_common.sh@1497 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:06.0/nvme/nvme0 00:05:07.822 00:39:20 -- common/autotest_common.sh@1498 -- # [[ -z /sys/devices/pci0000:00/0000:00:06.0/nvme/nvme0 ]] 00:05:07.822 00:39:20 -- common/autotest_common.sh@1502 -- # basename /sys/devices/pci0000:00/0000:00:06.0/nvme/nvme0 00:05:07.822 00:39:20 -- common/autotest_common.sh@1502 -- # printf '%s\n' nvme0 00:05:07.822 00:39:20 -- common/autotest_common.sh@1534 -- # nvme_ctrlr=/dev/nvme0 00:05:07.822 00:39:20 -- common/autotest_common.sh@1535 -- # [[ -z /dev/nvme0 ]] 00:05:07.822 00:39:20 -- 
common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme0 00:05:07.822 00:39:20 -- common/autotest_common.sh@1540 -- # grep oacs 00:05:07.822 00:39:20 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:05:07.822 00:39:20 -- common/autotest_common.sh@1540 -- # oacs=' 0x12a' 00:05:07.822 00:39:20 -- common/autotest_common.sh@1541 -- # oacs_ns_manage=8 00:05:07.822 00:39:20 -- common/autotest_common.sh@1543 -- # [[ 8 -ne 0 ]] 00:05:07.822 00:39:20 -- common/autotest_common.sh@1549 -- # nvme id-ctrl /dev/nvme0 00:05:07.822 00:39:20 -- common/autotest_common.sh@1549 -- # grep unvmcap 00:05:07.822 00:39:20 -- common/autotest_common.sh@1549 -- # cut -d: -f2 00:05:07.822 00:39:20 -- common/autotest_common.sh@1549 -- # unvmcap=' 0' 00:05:07.822 00:39:20 -- common/autotest_common.sh@1550 -- # [[ 0 -eq 0 ]] 00:05:07.822 00:39:20 -- common/autotest_common.sh@1552 -- # continue 00:05:07.822 00:39:20 -- common/autotest_common.sh@1533 -- # for bdf in "${bdfs[@]}" 00:05:07.822 00:39:20 -- common/autotest_common.sh@1534 -- # get_nvme_ctrlr_from_bdf 0000:00:07.0 00:05:07.822 00:39:20 -- common/autotest_common.sh@1497 -- # grep 0000:00:07.0/nvme/nvme 00:05:07.822 00:39:20 -- common/autotest_common.sh@1497 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 00:05:07.822 00:39:20 -- common/autotest_common.sh@1497 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:07.0/nvme/nvme1 00:05:07.822 00:39:20 -- common/autotest_common.sh@1498 -- # [[ -z /sys/devices/pci0000:00/0000:00:07.0/nvme/nvme1 ]] 00:05:07.822 00:39:20 -- common/autotest_common.sh@1502 -- # basename /sys/devices/pci0000:00/0000:00:07.0/nvme/nvme1 00:05:07.822 00:39:20 -- common/autotest_common.sh@1502 -- # printf '%s\n' nvme1 00:05:07.822 00:39:20 -- common/autotest_common.sh@1534 -- # nvme_ctrlr=/dev/nvme1 00:05:07.822 00:39:20 -- common/autotest_common.sh@1535 -- # [[ -z /dev/nvme1 ]] 00:05:07.822 00:39:20 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme1 00:05:07.822 00:39:20 -- common/autotest_common.sh@1540 -- # grep oacs 00:05:07.822 00:39:20 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:05:07.822 00:39:20 -- common/autotest_common.sh@1540 -- # oacs=' 0x12a' 00:05:07.822 00:39:20 -- common/autotest_common.sh@1541 -- # oacs_ns_manage=8 00:05:07.822 00:39:20 -- common/autotest_common.sh@1543 -- # [[ 8 -ne 0 ]] 00:05:07.822 00:39:20 -- common/autotest_common.sh@1549 -- # grep unvmcap 00:05:07.822 00:39:20 -- common/autotest_common.sh@1549 -- # nvme id-ctrl /dev/nvme1 00:05:07.822 00:39:20 -- common/autotest_common.sh@1549 -- # cut -d: -f2 00:05:07.822 00:39:20 -- common/autotest_common.sh@1549 -- # unvmcap=' 0' 00:05:07.822 00:39:20 -- common/autotest_common.sh@1550 -- # [[ 0 -eq 0 ]] 00:05:07.822 00:39:20 -- common/autotest_common.sh@1552 -- # continue 00:05:07.822 00:39:20 -- spdk/autotest.sh@133 -- # timing_exit pre_cleanup 00:05:07.822 00:39:20 -- common/autotest_common.sh@728 -- # xtrace_disable 00:05:07.822 00:39:20 -- common/autotest_common.sh@10 -- # set +x 00:05:08.081 00:39:20 -- spdk/autotest.sh@136 -- # timing_enter afterboot 00:05:08.082 00:39:20 -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:08.082 00:39:20 -- common/autotest_common.sh@10 -- # set +x 00:05:08.082 00:39:20 -- spdk/autotest.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:05:08.649 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:08.649 0000:00:06.0 (1b36 0010): nvme -> uio_pci_generic 00:05:08.907 0000:00:07.0 (1b36 0010): nvme -> 
uio_pci_generic 00:05:08.907 00:39:21 -- spdk/autotest.sh@138 -- # timing_exit afterboot 00:05:08.907 00:39:21 -- common/autotest_common.sh@728 -- # xtrace_disable 00:05:08.907 00:39:21 -- common/autotest_common.sh@10 -- # set +x 00:05:08.907 00:39:21 -- spdk/autotest.sh@142 -- # opal_revert_cleanup 00:05:08.907 00:39:21 -- common/autotest_common.sh@1586 -- # mapfile -t bdfs 00:05:08.907 00:39:21 -- common/autotest_common.sh@1586 -- # get_nvme_bdfs_by_id 0x0a54 00:05:08.907 00:39:21 -- common/autotest_common.sh@1572 -- # bdfs=() 00:05:08.907 00:39:21 -- common/autotest_common.sh@1572 -- # local bdfs 00:05:08.907 00:39:21 -- common/autotest_common.sh@1574 -- # get_nvme_bdfs 00:05:08.907 00:39:21 -- common/autotest_common.sh@1508 -- # bdfs=() 00:05:08.907 00:39:21 -- common/autotest_common.sh@1508 -- # local bdfs 00:05:08.907 00:39:21 -- common/autotest_common.sh@1509 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:05:08.907 00:39:21 -- common/autotest_common.sh@1509 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:05:08.907 00:39:21 -- common/autotest_common.sh@1509 -- # jq -r '.config[].params.traddr' 00:05:08.907 00:39:21 -- common/autotest_common.sh@1510 -- # (( 2 == 0 )) 00:05:08.907 00:39:21 -- common/autotest_common.sh@1514 -- # printf '%s\n' 0000:00:06.0 0000:00:07.0 00:05:08.907 00:39:21 -- common/autotest_common.sh@1574 -- # for bdf in $(get_nvme_bdfs) 00:05:08.907 00:39:21 -- common/autotest_common.sh@1575 -- # cat /sys/bus/pci/devices/0000:00:06.0/device 00:05:08.907 00:39:21 -- common/autotest_common.sh@1575 -- # device=0x0010 00:05:08.907 00:39:21 -- common/autotest_common.sh@1576 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:05:08.907 00:39:21 -- common/autotest_common.sh@1574 -- # for bdf in $(get_nvme_bdfs) 00:05:08.907 00:39:21 -- common/autotest_common.sh@1575 -- # cat /sys/bus/pci/devices/0000:00:07.0/device 00:05:08.907 00:39:21 -- common/autotest_common.sh@1575 -- # device=0x0010 00:05:08.907 00:39:21 -- common/autotest_common.sh@1576 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:05:08.907 00:39:21 -- common/autotest_common.sh@1581 -- # printf '%s\n' 00:05:08.907 00:39:21 -- common/autotest_common.sh@1587 -- # [[ -z '' ]] 00:05:08.907 00:39:21 -- common/autotest_common.sh@1588 -- # return 0 00:05:08.907 00:39:21 -- spdk/autotest.sh@148 -- # '[' 0 -eq 1 ']' 00:05:08.907 00:39:21 -- spdk/autotest.sh@152 -- # '[' 1 -eq 1 ']' 00:05:08.907 00:39:21 -- spdk/autotest.sh@153 -- # [[ 0 -eq 1 ]] 00:05:08.907 00:39:21 -- spdk/autotest.sh@153 -- # [[ 0 -eq 1 ]] 00:05:08.907 00:39:21 -- spdk/autotest.sh@160 -- # timing_enter lib 00:05:08.907 00:39:21 -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:08.907 00:39:21 -- common/autotest_common.sh@10 -- # set +x 00:05:08.907 00:39:21 -- spdk/autotest.sh@162 -- # run_test env /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:05:08.907 00:39:21 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:08.907 00:39:21 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:08.907 00:39:21 -- common/autotest_common.sh@10 -- # set +x 00:05:08.907 ************************************ 00:05:08.907 START TEST env 00:05:08.907 ************************************ 00:05:08.907 00:39:21 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:05:09.165 * Looking for test storage... 
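Stripped of the xtrace prefixes, the opal_revert_cleanup pass above reduces to a short pipeline: enumerate the NVMe controllers by BDF, then read each device ID out of sysfs and skip the revert when it is not 0x0a54. A minimal sketch of that sequence, using the exact paths and jq filter visible in the log (device IDs will differ per machine):

    # Enumerate NVMe controller BDFs the same way get_nvme_bdfs does above.
    rootdir=/home/vagrant/spdk_repo/spdk
    bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr'))

    # Only controllers reporting PCI device id 0x0a54 get the OPAL revert;
    # the emulated devices in this run report 0x0010, so nothing is printed.
    for bdf in "${bdfs[@]}"; do
        device=$(cat "/sys/bus/pci/devices/$bdf/device")
        [[ $device == 0x0a54 ]] && echo "$bdf"
    done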
00:05:09.165 * Found test storage at /home/vagrant/spdk_repo/spdk/test/env 00:05:09.165 00:39:21 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:05:09.165 00:39:21 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:05:09.165 00:39:21 -- common/autotest_common.sh@1690 -- # lcov --version 00:05:09.165 00:39:21 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:05:09.165 00:39:21 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:05:09.165 00:39:21 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:05:09.165 00:39:21 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:05:09.165 00:39:21 -- scripts/common.sh@335 -- # IFS=.-: 00:05:09.165 00:39:21 -- scripts/common.sh@335 -- # read -ra ver1 00:05:09.165 00:39:21 -- scripts/common.sh@336 -- # IFS=.-: 00:05:09.165 00:39:21 -- scripts/common.sh@336 -- # read -ra ver2 00:05:09.165 00:39:21 -- scripts/common.sh@337 -- # local 'op=<' 00:05:09.165 00:39:21 -- scripts/common.sh@339 -- # ver1_l=2 00:05:09.165 00:39:21 -- scripts/common.sh@340 -- # ver2_l=1 00:05:09.165 00:39:21 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:05:09.165 00:39:21 -- scripts/common.sh@343 -- # case "$op" in 00:05:09.165 00:39:21 -- scripts/common.sh@344 -- # : 1 00:05:09.165 00:39:21 -- scripts/common.sh@363 -- # (( v = 0 )) 00:05:09.165 00:39:21 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:09.165 00:39:21 -- scripts/common.sh@364 -- # decimal 1 00:05:09.165 00:39:21 -- scripts/common.sh@352 -- # local d=1 00:05:09.165 00:39:21 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:09.165 00:39:21 -- scripts/common.sh@354 -- # echo 1 00:05:09.165 00:39:21 -- scripts/common.sh@364 -- # ver1[v]=1 00:05:09.165 00:39:21 -- scripts/common.sh@365 -- # decimal 2 00:05:09.165 00:39:21 -- scripts/common.sh@352 -- # local d=2 00:05:09.165 00:39:21 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:09.165 00:39:21 -- scripts/common.sh@354 -- # echo 2 00:05:09.165 00:39:21 -- scripts/common.sh@365 -- # ver2[v]=2 00:05:09.165 00:39:21 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:05:09.165 00:39:21 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:05:09.165 00:39:21 -- scripts/common.sh@367 -- # return 0 00:05:09.165 00:39:21 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:09.165 00:39:21 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:05:09.165 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:09.165 --rc genhtml_branch_coverage=1 00:05:09.165 --rc genhtml_function_coverage=1 00:05:09.165 --rc genhtml_legend=1 00:05:09.165 --rc geninfo_all_blocks=1 00:05:09.165 --rc geninfo_unexecuted_blocks=1 00:05:09.165 00:05:09.165 ' 00:05:09.165 00:39:21 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:05:09.165 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:09.165 --rc genhtml_branch_coverage=1 00:05:09.165 --rc genhtml_function_coverage=1 00:05:09.165 --rc genhtml_legend=1 00:05:09.165 --rc geninfo_all_blocks=1 00:05:09.165 --rc geninfo_unexecuted_blocks=1 00:05:09.165 00:05:09.165 ' 00:05:09.165 00:39:21 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:05:09.165 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:09.165 --rc genhtml_branch_coverage=1 00:05:09.165 --rc genhtml_function_coverage=1 00:05:09.165 --rc genhtml_legend=1 00:05:09.165 --rc geninfo_all_blocks=1 00:05:09.165 --rc geninfo_unexecuted_blocks=1 00:05:09.165 00:05:09.165 ' 00:05:09.165 00:39:21 -- 
common/autotest_common.sh@1704 -- # LCOV='lcov 00:05:09.165 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:09.165 --rc genhtml_branch_coverage=1 00:05:09.165 --rc genhtml_function_coverage=1 00:05:09.165 --rc genhtml_legend=1 00:05:09.165 --rc geninfo_all_blocks=1 00:05:09.165 --rc geninfo_unexecuted_blocks=1 00:05:09.165 00:05:09.165 ' 00:05:09.165 00:39:21 -- env/env.sh@10 -- # run_test env_memory /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:05:09.165 00:39:21 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:09.165 00:39:21 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:09.165 00:39:21 -- common/autotest_common.sh@10 -- # set +x 00:05:09.165 ************************************ 00:05:09.165 START TEST env_memory 00:05:09.165 ************************************ 00:05:09.165 00:39:21 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:05:09.165 00:05:09.165 00:05:09.165 CUnit - A unit testing framework for C - Version 2.1-3 00:05:09.165 http://cunit.sourceforge.net/ 00:05:09.165 00:05:09.165 00:05:09.165 Suite: memory 00:05:09.165 Test: alloc and free memory map ...[2024-12-03 00:39:21.652245] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:05:09.165 passed 00:05:09.424 Test: mem map translation ...[2024-12-03 00:39:21.683270] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:05:09.424 [2024-12-03 00:39:21.683308] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:05:09.424 [2024-12-03 00:39:21.683363] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 584:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:05:09.424 [2024-12-03 00:39:21.683374] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 600:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:05:09.424 passed 00:05:09.424 Test: mem map registration ...[2024-12-03 00:39:21.747007] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x200000 len=1234 00:05:09.424 [2024-12-03 00:39:21.747055] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x4d2 len=2097152 00:05:09.424 passed 00:05:09.424 Test: mem map adjacent registrations ...passed 00:05:09.424 00:05:09.424 Run Summary: Type Total Ran Passed Failed Inactive 00:05:09.424 suites 1 1 n/a 0 0 00:05:09.424 tests 4 4 4 0 0 00:05:09.424 asserts 152 152 152 0 n/a 00:05:09.424 00:05:09.424 Elapsed time = 0.186 seconds 00:05:09.424 00:05:09.424 real 0m0.205s 00:05:09.425 user 0m0.190s 00:05:09.425 sys 0m0.011s 00:05:09.425 00:39:21 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:09.425 00:39:21 -- common/autotest_common.sh@10 -- # set +x 00:05:09.425 ************************************ 00:05:09.425 END TEST env_memory 00:05:09.425 ************************************ 00:05:09.425 00:39:21 -- env/env.sh@11 -- # run_test env_vtophys /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:05:09.425 00:39:21 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:09.425 00:39:21 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:09.425 00:39:21 -- 
common/autotest_common.sh@10 -- # set +x 00:05:09.425 ************************************ 00:05:09.425 START TEST env_vtophys 00:05:09.425 ************************************ 00:05:09.425 00:39:21 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:05:09.425 EAL: lib.eal log level changed from notice to debug 00:05:09.425 EAL: Detected lcore 0 as core 0 on socket 0 00:05:09.425 EAL: Detected lcore 1 as core 0 on socket 0 00:05:09.425 EAL: Detected lcore 2 as core 0 on socket 0 00:05:09.425 EAL: Detected lcore 3 as core 0 on socket 0 00:05:09.425 EAL: Detected lcore 4 as core 0 on socket 0 00:05:09.425 EAL: Detected lcore 5 as core 0 on socket 0 00:05:09.425 EAL: Detected lcore 6 as core 0 on socket 0 00:05:09.425 EAL: Detected lcore 7 as core 0 on socket 0 00:05:09.425 EAL: Detected lcore 8 as core 0 on socket 0 00:05:09.425 EAL: Detected lcore 9 as core 0 on socket 0 00:05:09.425 EAL: Maximum logical cores by configuration: 128 00:05:09.425 EAL: Detected CPU lcores: 10 00:05:09.425 EAL: Detected NUMA nodes: 1 00:05:09.425 EAL: Checking presence of .so 'librte_eal.so.24.0' 00:05:09.425 EAL: Detected shared linkage of DPDK 00:05:09.425 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_pci.so.24.0 00:05:09.425 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_vdev.so.24.0 00:05:09.425 EAL: Registered [vdev] bus. 00:05:09.425 EAL: bus.vdev log level changed from disabled to notice 00:05:09.425 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_mempool_ring.so.24.0 00:05:09.425 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_net_i40e.so.24.0 00:05:09.425 EAL: pmd.net.i40e.init log level changed from disabled to notice 00:05:09.425 EAL: pmd.net.i40e.driver log level changed from disabled to notice 00:05:09.425 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_pci.so 00:05:09.425 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_vdev.so 00:05:09.425 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_mempool_ring.so 00:05:09.425 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_net_i40e.so 00:05:09.425 EAL: No shared files mode enabled, IPC will be disabled 00:05:09.425 EAL: No shared files mode enabled, IPC is disabled 00:05:09.425 EAL: Selected IOVA mode 'PA' 00:05:09.425 EAL: Probing VFIO support... 00:05:09.425 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:05:09.425 EAL: VFIO modules not loaded, skipping VFIO support... 00:05:09.425 EAL: Ask a virtual area of 0x2e000 bytes 00:05:09.425 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:05:09.425 EAL: Setting up physically contiguous memory... 
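The EAL bring-up above (and the "node0 2048kB 2048 / 2048" summary printed earlier by setup.sh status) assumes hugepages were already reserved before the test ran. Reproducing that precondition by hand would look roughly like the following sketch; HUGEMEM as a size-in-MB knob for setup.sh is an assumption based on the script's usual interface, not something shown in this log:

    # Reserve 2 MB hugepages and rebind the NVMe controllers, then confirm what EAL will see.
    sudo HUGEMEM=4096 /home/vagrant/spdk_repo/spdk/scripts/setup.sh
    grep -i -e HugePages_Total -e Hugepagesize /proc/meminfo
    /home/vagrant/spdk_repo/spdk/scripts/setup.sh status   # prints the same per-node table as earlier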
00:05:09.425 EAL: Setting maximum number of open files to 524288 00:05:09.425 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:05:09.425 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:05:09.425 EAL: Ask a virtual area of 0x61000 bytes 00:05:09.425 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:05:09.425 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:09.425 EAL: Ask a virtual area of 0x400000000 bytes 00:05:09.425 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:05:09.425 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:05:09.425 EAL: Ask a virtual area of 0x61000 bytes 00:05:09.425 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:05:09.425 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:09.425 EAL: Ask a virtual area of 0x400000000 bytes 00:05:09.425 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:05:09.425 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:05:09.425 EAL: Ask a virtual area of 0x61000 bytes 00:05:09.425 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:05:09.425 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:09.425 EAL: Ask a virtual area of 0x400000000 bytes 00:05:09.425 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:05:09.425 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:05:09.425 EAL: Ask a virtual area of 0x61000 bytes 00:05:09.425 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:05:09.425 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:09.425 EAL: Ask a virtual area of 0x400000000 bytes 00:05:09.425 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:05:09.425 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:05:09.425 EAL: Hugepages will be freed exactly as allocated. 00:05:09.425 EAL: No shared files mode enabled, IPC is disabled 00:05:09.425 EAL: No shared files mode enabled, IPC is disabled 00:05:09.684 EAL: TSC frequency is ~2200000 KHz 00:05:09.684 EAL: Main lcore 0 is ready (tid=7f9df3be8a00;cpuset=[0]) 00:05:09.684 EAL: Trying to obtain current memory policy. 00:05:09.684 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:09.684 EAL: Restoring previous memory policy: 0 00:05:09.684 EAL: request: mp_malloc_sync 00:05:09.684 EAL: No shared files mode enabled, IPC is disabled 00:05:09.684 EAL: Heap on socket 0 was expanded by 2MB 00:05:09.684 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:05:09.684 EAL: No shared files mode enabled, IPC is disabled 00:05:09.684 EAL: No PCI address specified using 'addr=' in: bus=pci 00:05:09.684 EAL: Mem event callback 'spdk:(nil)' registered 00:05:09.684 EAL: Module /sys/module/vfio_pci not found! error 2 (No such file or directory) 00:05:09.684 00:05:09.684 00:05:09.684 CUnit - A unit testing framework for C - Version 2.1-3 00:05:09.684 http://cunit.sourceforge.net/ 00:05:09.684 00:05:09.684 00:05:09.684 Suite: components_suite 00:05:09.684 Test: vtophys_malloc_test ...passed 00:05:09.684 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 
00:05:09.684 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:09.684 EAL: Restoring previous memory policy: 4 00:05:09.684 EAL: Calling mem event callback 'spdk:(nil)' 00:05:09.684 EAL: request: mp_malloc_sync 00:05:09.684 EAL: No shared files mode enabled, IPC is disabled 00:05:09.684 EAL: Heap on socket 0 was expanded by 4MB 00:05:09.684 EAL: Calling mem event callback 'spdk:(nil)' 00:05:09.684 EAL: request: mp_malloc_sync 00:05:09.684 EAL: No shared files mode enabled, IPC is disabled 00:05:09.684 EAL: Heap on socket 0 was shrunk by 4MB 00:05:09.684 EAL: Trying to obtain current memory policy. 00:05:09.684 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:09.684 EAL: Restoring previous memory policy: 4 00:05:09.684 EAL: Calling mem event callback 'spdk:(nil)' 00:05:09.684 EAL: request: mp_malloc_sync 00:05:09.684 EAL: No shared files mode enabled, IPC is disabled 00:05:09.684 EAL: Heap on socket 0 was expanded by 6MB 00:05:09.684 EAL: Calling mem event callback 'spdk:(nil)' 00:05:09.684 EAL: request: mp_malloc_sync 00:05:09.684 EAL: No shared files mode enabled, IPC is disabled 00:05:09.684 EAL: Heap on socket 0 was shrunk by 6MB 00:05:09.684 EAL: Trying to obtain current memory policy. 00:05:09.684 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:09.684 EAL: Restoring previous memory policy: 4 00:05:09.684 EAL: Calling mem event callback 'spdk:(nil)' 00:05:09.684 EAL: request: mp_malloc_sync 00:05:09.684 EAL: No shared files mode enabled, IPC is disabled 00:05:09.684 EAL: Heap on socket 0 was expanded by 10MB 00:05:09.684 EAL: Calling mem event callback 'spdk:(nil)' 00:05:09.684 EAL: request: mp_malloc_sync 00:05:09.684 EAL: No shared files mode enabled, IPC is disabled 00:05:09.684 EAL: Heap on socket 0 was shrunk by 10MB 00:05:09.684 EAL: Trying to obtain current memory policy. 00:05:09.684 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:09.684 EAL: Restoring previous memory policy: 4 00:05:09.684 EAL: Calling mem event callback 'spdk:(nil)' 00:05:09.684 EAL: request: mp_malloc_sync 00:05:09.684 EAL: No shared files mode enabled, IPC is disabled 00:05:09.684 EAL: Heap on socket 0 was expanded by 18MB 00:05:09.684 EAL: Calling mem event callback 'spdk:(nil)' 00:05:09.684 EAL: request: mp_malloc_sync 00:05:09.684 EAL: No shared files mode enabled, IPC is disabled 00:05:09.684 EAL: Heap on socket 0 was shrunk by 18MB 00:05:09.684 EAL: Trying to obtain current memory policy. 00:05:09.684 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:09.684 EAL: Restoring previous memory policy: 4 00:05:09.684 EAL: Calling mem event callback 'spdk:(nil)' 00:05:09.684 EAL: request: mp_malloc_sync 00:05:09.684 EAL: No shared files mode enabled, IPC is disabled 00:05:09.684 EAL: Heap on socket 0 was expanded by 34MB 00:05:09.684 EAL: Calling mem event callback 'spdk:(nil)' 00:05:09.684 EAL: request: mp_malloc_sync 00:05:09.684 EAL: No shared files mode enabled, IPC is disabled 00:05:09.684 EAL: Heap on socket 0 was shrunk by 34MB 00:05:09.684 EAL: Trying to obtain current memory policy. 
00:05:09.684 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:09.684 EAL: Restoring previous memory policy: 4 00:05:09.684 EAL: Calling mem event callback 'spdk:(nil)' 00:05:09.684 EAL: request: mp_malloc_sync 00:05:09.684 EAL: No shared files mode enabled, IPC is disabled 00:05:09.684 EAL: Heap on socket 0 was expanded by 66MB 00:05:09.684 EAL: Calling mem event callback 'spdk:(nil)' 00:05:09.684 EAL: request: mp_malloc_sync 00:05:09.684 EAL: No shared files mode enabled, IPC is disabled 00:05:09.684 EAL: Heap on socket 0 was shrunk by 66MB 00:05:09.685 EAL: Trying to obtain current memory policy. 00:05:09.685 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:09.685 EAL: Restoring previous memory policy: 4 00:05:09.685 EAL: Calling mem event callback 'spdk:(nil)' 00:05:09.685 EAL: request: mp_malloc_sync 00:05:09.685 EAL: No shared files mode enabled, IPC is disabled 00:05:09.685 EAL: Heap on socket 0 was expanded by 130MB 00:05:09.685 EAL: Calling mem event callback 'spdk:(nil)' 00:05:09.685 EAL: request: mp_malloc_sync 00:05:09.685 EAL: No shared files mode enabled, IPC is disabled 00:05:09.685 EAL: Heap on socket 0 was shrunk by 130MB 00:05:09.685 EAL: Trying to obtain current memory policy. 00:05:09.685 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:09.944 EAL: Restoring previous memory policy: 4 00:05:09.944 EAL: Calling mem event callback 'spdk:(nil)' 00:05:09.944 EAL: request: mp_malloc_sync 00:05:09.944 EAL: No shared files mode enabled, IPC is disabled 00:05:09.944 EAL: Heap on socket 0 was expanded by 258MB 00:05:09.944 EAL: Calling mem event callback 'spdk:(nil)' 00:05:09.944 EAL: request: mp_malloc_sync 00:05:09.944 EAL: No shared files mode enabled, IPC is disabled 00:05:09.944 EAL: Heap on socket 0 was shrunk by 258MB 00:05:09.944 EAL: Trying to obtain current memory policy. 00:05:09.944 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:09.944 EAL: Restoring previous memory policy: 4 00:05:09.944 EAL: Calling mem event callback 'spdk:(nil)' 00:05:09.944 EAL: request: mp_malloc_sync 00:05:09.944 EAL: No shared files mode enabled, IPC is disabled 00:05:09.944 EAL: Heap on socket 0 was expanded by 514MB 00:05:10.220 EAL: Calling mem event callback 'spdk:(nil)' 00:05:10.220 EAL: request: mp_malloc_sync 00:05:10.220 EAL: No shared files mode enabled, IPC is disabled 00:05:10.220 EAL: Heap on socket 0 was shrunk by 514MB 00:05:10.220 EAL: Trying to obtain current memory policy. 
00:05:10.220 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:10.479 EAL: Restoring previous memory policy: 4 00:05:10.479 EAL: Calling mem event callback 'spdk:(nil)' 00:05:10.479 EAL: request: mp_malloc_sync 00:05:10.479 EAL: No shared files mode enabled, IPC is disabled 00:05:10.479 EAL: Heap on socket 0 was expanded by 1026MB 00:05:10.738 EAL: Calling mem event callback 'spdk:(nil)' 00:05:10.738 passed 00:05:10.738 00:05:10.738 Run Summary: Type Total Ran Passed Failed Inactive 00:05:10.738 suites 1 1 n/a 0 0 00:05:10.738 tests 2 2 2 0 0 00:05:10.738 asserts 5274 5274 5274 0 n/a 00:05:10.738 00:05:10.738 Elapsed time = 1.184 seconds 00:05:10.738 EAL: request: mp_malloc_sync 00:05:10.738 EAL: No shared files mode enabled, IPC is disabled 00:05:10.738 EAL: Heap on socket 0 was shrunk by 1026MB 00:05:10.738 EAL: Calling mem event callback 'spdk:(nil)' 00:05:10.738 EAL: request: mp_malloc_sync 00:05:10.738 EAL: No shared files mode enabled, IPC is disabled 00:05:10.738 EAL: Heap on socket 0 was shrunk by 2MB 00:05:10.738 EAL: No shared files mode enabled, IPC is disabled 00:05:10.738 EAL: No shared files mode enabled, IPC is disabled 00:05:10.738 EAL: No shared files mode enabled, IPC is disabled 00:05:10.738 00:05:10.738 real 0m1.377s 00:05:10.738 user 0m0.764s 00:05:10.738 sys 0m0.482s 00:05:10.738 00:39:23 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:10.738 00:39:23 -- common/autotest_common.sh@10 -- # set +x 00:05:10.738 ************************************ 00:05:10.738 END TEST env_vtophys 00:05:10.738 ************************************ 00:05:10.997 00:39:23 -- env/env.sh@12 -- # run_test env_pci /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:05:10.997 00:39:23 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:10.997 00:39:23 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:10.997 00:39:23 -- common/autotest_common.sh@10 -- # set +x 00:05:10.997 ************************************ 00:05:10.997 START TEST env_pci 00:05:10.997 ************************************ 00:05:10.997 00:39:23 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:05:10.997 00:05:10.997 00:05:10.997 CUnit - A unit testing framework for C - Version 2.1-3 00:05:10.997 http://cunit.sourceforge.net/ 00:05:10.997 00:05:10.997 00:05:10.997 Suite: pci 00:05:10.997 Test: pci_hook ...[2024-12-03 00:39:23.307656] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/pci.c:1040:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 67462 has claimed it 00:05:10.997 passed 00:05:10.997 00:05:10.997 Run Summary: Type Total Ran Passed Failed Inactive 00:05:10.997 suites 1 1 n/a 0 0 00:05:10.997 tests 1 1 1 0 0 00:05:10.997 asserts 25 25 25 0 n/a 00:05:10.997 00:05:10.997 Elapsed time = 0.002 seconds 00:05:10.997 EAL: Cannot find device (10000:00:01.0) 00:05:10.997 EAL: Failed to attach device on primary process 00:05:10.997 00:05:10.997 real 0m0.020s 00:05:10.997 user 0m0.014s 00:05:10.997 sys 0m0.006s 00:05:10.997 00:39:23 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:10.997 00:39:23 -- common/autotest_common.sh@10 -- # set +x 00:05:10.997 ************************************ 00:05:10.997 END TEST env_pci 00:05:10.997 ************************************ 00:05:10.997 00:39:23 -- env/env.sh@14 -- # argv='-c 0x1 ' 00:05:10.997 00:39:23 -- env/env.sh@15 -- # uname 00:05:10.997 00:39:23 -- env/env.sh@15 -- # '[' Linux = Linux ']' 00:05:10.997 00:39:23 -- env/env.sh@22 -- # 
argv+=--base-virtaddr=0x200000000000 00:05:10.997 00:39:23 -- env/env.sh@24 -- # run_test env_dpdk_post_init /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:05:10.997 00:39:23 -- common/autotest_common.sh@1087 -- # '[' 5 -le 1 ']' 00:05:10.997 00:39:23 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:10.997 00:39:23 -- common/autotest_common.sh@10 -- # set +x 00:05:10.997 ************************************ 00:05:10.997 START TEST env_dpdk_post_init 00:05:10.997 ************************************ 00:05:10.997 00:39:23 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:05:10.997 EAL: Detected CPU lcores: 10 00:05:10.997 EAL: Detected NUMA nodes: 1 00:05:10.997 EAL: Detected shared linkage of DPDK 00:05:10.997 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:05:10.997 EAL: Selected IOVA mode 'PA' 00:05:11.256 TELEMETRY: No legacy callbacks, legacy socket not created 00:05:11.256 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:06.0 (socket -1) 00:05:11.256 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:07.0 (socket -1) 00:05:11.256 Starting DPDK initialization... 00:05:11.256 Starting SPDK post initialization... 00:05:11.256 SPDK NVMe probe 00:05:11.256 Attaching to 0000:00:06.0 00:05:11.256 Attaching to 0000:00:07.0 00:05:11.256 Attached to 0000:00:06.0 00:05:11.256 Attached to 0000:00:07.0 00:05:11.256 Cleaning up... 00:05:11.256 00:05:11.256 real 0m0.176s 00:05:11.256 user 0m0.040s 00:05:11.256 sys 0m0.037s 00:05:11.256 00:39:23 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:11.256 00:39:23 -- common/autotest_common.sh@10 -- # set +x 00:05:11.256 ************************************ 00:05:11.256 END TEST env_dpdk_post_init 00:05:11.256 ************************************ 00:05:11.256 00:39:23 -- env/env.sh@26 -- # uname 00:05:11.256 00:39:23 -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:05:11.256 00:39:23 -- env/env.sh@29 -- # run_test env_mem_callbacks /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:05:11.256 00:39:23 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:11.256 00:39:23 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:11.256 00:39:23 -- common/autotest_common.sh@10 -- # set +x 00:05:11.256 ************************************ 00:05:11.256 START TEST env_mem_callbacks 00:05:11.256 ************************************ 00:05:11.256 00:39:23 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:05:11.256 EAL: Detected CPU lcores: 10 00:05:11.256 EAL: Detected NUMA nodes: 1 00:05:11.256 EAL: Detected shared linkage of DPDK 00:05:11.256 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:05:11.256 EAL: Selected IOVA mode 'PA' 00:05:11.256 00:05:11.256 00:05:11.256 CUnit - A unit testing framework for C - Version 2.1-3 00:05:11.256 http://cunit.sourceforge.net/ 00:05:11.256 00:05:11.256 00:05:11.256 Suite: memory 00:05:11.256 Test: test ... 
00:05:11.256 register 0x200000200000 2097152 00:05:11.256 malloc 3145728 00:05:11.256 TELEMETRY: No legacy callbacks, legacy socket not created 00:05:11.256 register 0x200000400000 4194304 00:05:11.256 buf 0x200000500000 len 3145728 PASSED 00:05:11.256 malloc 64 00:05:11.256 buf 0x2000004fff40 len 64 PASSED 00:05:11.256 malloc 4194304 00:05:11.256 register 0x200000800000 6291456 00:05:11.256 buf 0x200000a00000 len 4194304 PASSED 00:05:11.256 free 0x200000500000 3145728 00:05:11.256 free 0x2000004fff40 64 00:05:11.256 unregister 0x200000400000 4194304 PASSED 00:05:11.256 free 0x200000a00000 4194304 00:05:11.256 unregister 0x200000800000 6291456 PASSED 00:05:11.256 malloc 8388608 00:05:11.256 register 0x200000400000 10485760 00:05:11.256 buf 0x200000600000 len 8388608 PASSED 00:05:11.256 free 0x200000600000 8388608 00:05:11.256 unregister 0x200000400000 10485760 PASSED 00:05:11.256 passed 00:05:11.256 00:05:11.256 Run Summary: Type Total Ran Passed Failed Inactive 00:05:11.256 suites 1 1 n/a 0 0 00:05:11.256 tests 1 1 1 0 0 00:05:11.256 asserts 15 15 15 0 n/a 00:05:11.256 00:05:11.256 Elapsed time = 0.009 seconds 00:05:11.256 00:05:11.256 real 0m0.143s 00:05:11.256 user 0m0.024s 00:05:11.256 sys 0m0.018s 00:05:11.256 00:39:23 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:11.256 00:39:23 -- common/autotest_common.sh@10 -- # set +x 00:05:11.256 ************************************ 00:05:11.256 END TEST env_mem_callbacks 00:05:11.256 ************************************ 00:05:11.515 00:05:11.515 real 0m2.415s 00:05:11.515 user 0m1.239s 00:05:11.515 sys 0m0.817s 00:05:11.515 00:39:23 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:11.515 00:39:23 -- common/autotest_common.sh@10 -- # set +x 00:05:11.515 ************************************ 00:05:11.515 END TEST env 00:05:11.515 ************************************ 00:05:11.515 00:39:23 -- spdk/autotest.sh@163 -- # run_test rpc /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:05:11.515 00:39:23 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:11.515 00:39:23 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:11.515 00:39:23 -- common/autotest_common.sh@10 -- # set +x 00:05:11.515 ************************************ 00:05:11.515 START TEST rpc 00:05:11.515 ************************************ 00:05:11.515 00:39:23 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:05:11.515 * Looking for test storage... 
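The rpc suite that starts here drives a freshly launched spdk_tgt over its JSON-RPC Unix socket; the harness's rpc_cmd wrapper hides the transport and waits for the socket via waitforlisten. Issued by hand with the rpc.py helper, the calls exercised by the integrity test below would look roughly like this sketch (method names and arguments are the ones visible in the log; the default socket path is assumed):

    # Start the target with the bdev tracepoint group, as rpc.sh does below.
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -e bdev &

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    $rpc bdev_malloc_create 8 512                      # 8 MB malloc bdev, 512-byte blocks -> Malloc0
    $rpc bdev_passthru_create -b Malloc0 -p Passthru0  # stack a passthru bdev on top of it
    $rpc bdev_get_bdevs | jq length                    # the test expects 2 bdevs at this point
    $rpc bdev_passthru_delete Passthru0
    $rpc bdev_malloc_delete Malloc0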
00:05:11.515 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:05:11.515 00:39:23 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:05:11.515 00:39:23 -- common/autotest_common.sh@1690 -- # lcov --version 00:05:11.515 00:39:23 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:05:11.774 00:39:24 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:05:11.774 00:39:24 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:05:11.774 00:39:24 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:05:11.774 00:39:24 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:05:11.774 00:39:24 -- scripts/common.sh@335 -- # IFS=.-: 00:05:11.774 00:39:24 -- scripts/common.sh@335 -- # read -ra ver1 00:05:11.774 00:39:24 -- scripts/common.sh@336 -- # IFS=.-: 00:05:11.774 00:39:24 -- scripts/common.sh@336 -- # read -ra ver2 00:05:11.774 00:39:24 -- scripts/common.sh@337 -- # local 'op=<' 00:05:11.774 00:39:24 -- scripts/common.sh@339 -- # ver1_l=2 00:05:11.774 00:39:24 -- scripts/common.sh@340 -- # ver2_l=1 00:05:11.774 00:39:24 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:05:11.774 00:39:24 -- scripts/common.sh@343 -- # case "$op" in 00:05:11.774 00:39:24 -- scripts/common.sh@344 -- # : 1 00:05:11.774 00:39:24 -- scripts/common.sh@363 -- # (( v = 0 )) 00:05:11.774 00:39:24 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:11.774 00:39:24 -- scripts/common.sh@364 -- # decimal 1 00:05:11.774 00:39:24 -- scripts/common.sh@352 -- # local d=1 00:05:11.774 00:39:24 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:11.774 00:39:24 -- scripts/common.sh@354 -- # echo 1 00:05:11.774 00:39:24 -- scripts/common.sh@364 -- # ver1[v]=1 00:05:11.774 00:39:24 -- scripts/common.sh@365 -- # decimal 2 00:05:11.774 00:39:24 -- scripts/common.sh@352 -- # local d=2 00:05:11.774 00:39:24 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:11.774 00:39:24 -- scripts/common.sh@354 -- # echo 2 00:05:11.774 00:39:24 -- scripts/common.sh@365 -- # ver2[v]=2 00:05:11.774 00:39:24 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:05:11.774 00:39:24 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:05:11.774 00:39:24 -- scripts/common.sh@367 -- # return 0 00:05:11.774 00:39:24 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:11.774 00:39:24 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:05:11.774 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:11.774 --rc genhtml_branch_coverage=1 00:05:11.774 --rc genhtml_function_coverage=1 00:05:11.774 --rc genhtml_legend=1 00:05:11.774 --rc geninfo_all_blocks=1 00:05:11.774 --rc geninfo_unexecuted_blocks=1 00:05:11.774 00:05:11.774 ' 00:05:11.774 00:39:24 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:05:11.774 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:11.774 --rc genhtml_branch_coverage=1 00:05:11.774 --rc genhtml_function_coverage=1 00:05:11.774 --rc genhtml_legend=1 00:05:11.774 --rc geninfo_all_blocks=1 00:05:11.774 --rc geninfo_unexecuted_blocks=1 00:05:11.774 00:05:11.774 ' 00:05:11.774 00:39:24 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:05:11.774 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:11.774 --rc genhtml_branch_coverage=1 00:05:11.774 --rc genhtml_function_coverage=1 00:05:11.774 --rc genhtml_legend=1 00:05:11.774 --rc geninfo_all_blocks=1 00:05:11.774 --rc geninfo_unexecuted_blocks=1 00:05:11.774 00:05:11.774 ' 00:05:11.774 00:39:24 -- 
common/autotest_common.sh@1704 -- # LCOV='lcov 00:05:11.774 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:11.774 --rc genhtml_branch_coverage=1 00:05:11.774 --rc genhtml_function_coverage=1 00:05:11.774 --rc genhtml_legend=1 00:05:11.774 --rc geninfo_all_blocks=1 00:05:11.774 --rc geninfo_unexecuted_blocks=1 00:05:11.774 00:05:11.774 ' 00:05:11.774 00:39:24 -- rpc/rpc.sh@65 -- # spdk_pid=67583 00:05:11.774 00:39:24 -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:11.774 00:39:24 -- rpc/rpc.sh@67 -- # waitforlisten 67583 00:05:11.774 00:39:24 -- rpc/rpc.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -e bdev 00:05:11.774 00:39:24 -- common/autotest_common.sh@829 -- # '[' -z 67583 ']' 00:05:11.774 00:39:24 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:11.774 00:39:24 -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:11.774 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:11.774 00:39:24 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:11.774 00:39:24 -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:11.774 00:39:24 -- common/autotest_common.sh@10 -- # set +x 00:05:11.774 [2024-12-03 00:39:24.126784] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:05:11.774 [2024-12-03 00:39:24.126881] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67583 ] 00:05:11.774 [2024-12-03 00:39:24.263919] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:12.033 [2024-12-03 00:39:24.321915] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:05:12.033 [2024-12-03 00:39:24.322055] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:05:12.033 [2024-12-03 00:39:24.322067] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 67583' to capture a snapshot of events at runtime. 00:05:12.033 [2024-12-03 00:39:24.322075] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid67583 for offline analysis/debug. 
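The two app_setup_trace notices above are the target describing how to reach the trace buffer it created for the bdev tracepoint group. The corresponding commands, exactly as the log suggests and with the PID of this particular run (the spdk_trace path is assumed to sit under build/bin like the other SPDK apps in this log):

    # Live snapshot of the bdev tracepoints while pid 67583 is still running:
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_trace -s spdk_tgt -p 67583

    # Or keep the shared-memory ring for offline analysis after the target exits:
    cp /dev/shm/spdk_tgt_trace.pid67583 /tmp/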
00:05:12.033 [2024-12-03 00:39:24.322099] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:12.969 00:39:25 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:12.969 00:39:25 -- common/autotest_common.sh@862 -- # return 0 00:05:12.969 00:39:25 -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:05:12.969 00:39:25 -- rpc/rpc.sh@69 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:05:12.969 00:39:25 -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:05:12.969 00:39:25 -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:05:12.969 00:39:25 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:12.969 00:39:25 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:12.969 00:39:25 -- common/autotest_common.sh@10 -- # set +x 00:05:12.969 ************************************ 00:05:12.969 START TEST rpc_integrity 00:05:12.969 ************************************ 00:05:12.969 00:39:25 -- common/autotest_common.sh@1114 -- # rpc_integrity 00:05:12.969 00:39:25 -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:05:12.969 00:39:25 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:12.969 00:39:25 -- common/autotest_common.sh@10 -- # set +x 00:05:12.969 00:39:25 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:12.969 00:39:25 -- rpc/rpc.sh@12 -- # bdevs='[]' 00:05:12.969 00:39:25 -- rpc/rpc.sh@13 -- # jq length 00:05:12.969 00:39:25 -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:05:12.969 00:39:25 -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:05:12.969 00:39:25 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:12.969 00:39:25 -- common/autotest_common.sh@10 -- # set +x 00:05:12.969 00:39:25 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:12.969 00:39:25 -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:05:12.969 00:39:25 -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:05:12.969 00:39:25 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:12.969 00:39:25 -- common/autotest_common.sh@10 -- # set +x 00:05:12.969 00:39:25 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:12.969 00:39:25 -- rpc/rpc.sh@16 -- # bdevs='[ 00:05:12.969 { 00:05:12.969 "aliases": [ 00:05:12.969 "028379b5-c1dc-441a-8715-99566ddab9c2" 00:05:12.969 ], 00:05:12.969 "assigned_rate_limits": { 00:05:12.969 "r_mbytes_per_sec": 0, 00:05:12.969 "rw_ios_per_sec": 0, 00:05:12.969 "rw_mbytes_per_sec": 0, 00:05:12.969 "w_mbytes_per_sec": 0 00:05:12.969 }, 00:05:12.969 "block_size": 512, 00:05:12.969 "claimed": false, 00:05:12.969 "driver_specific": {}, 00:05:12.969 "memory_domains": [ 00:05:12.969 { 00:05:12.969 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:12.969 "dma_device_type": 2 00:05:12.969 } 00:05:12.969 ], 00:05:12.969 "name": "Malloc0", 00:05:12.969 "num_blocks": 16384, 00:05:12.969 "product_name": "Malloc disk", 00:05:12.969 "supported_io_types": { 00:05:12.969 "abort": true, 00:05:12.969 "compare": false, 00:05:12.969 "compare_and_write": false, 00:05:12.969 "flush": true, 00:05:12.969 "nvme_admin": false, 00:05:12.969 "nvme_io": false, 00:05:12.969 "read": true, 00:05:12.969 "reset": true, 00:05:12.969 "unmap": true, 00:05:12.969 "write": true, 00:05:12.969 "write_zeroes": true 00:05:12.969 }, 
00:05:12.969 "uuid": "028379b5-c1dc-441a-8715-99566ddab9c2", 00:05:12.969 "zoned": false 00:05:12.969 } 00:05:12.969 ]' 00:05:12.969 00:39:25 -- rpc/rpc.sh@17 -- # jq length 00:05:12.969 00:39:25 -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:05:12.969 00:39:25 -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:05:12.969 00:39:25 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:12.969 00:39:25 -- common/autotest_common.sh@10 -- # set +x 00:05:12.969 [2024-12-03 00:39:25.316339] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:05:12.969 [2024-12-03 00:39:25.316392] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:05:12.969 [2024-12-03 00:39:25.316408] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x137fb60 00:05:12.969 [2024-12-03 00:39:25.316417] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:05:12.969 [2024-12-03 00:39:25.317844] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:05:12.969 [2024-12-03 00:39:25.317886] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:05:12.969 Passthru0 00:05:12.969 00:39:25 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:12.969 00:39:25 -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:05:12.969 00:39:25 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:12.969 00:39:25 -- common/autotest_common.sh@10 -- # set +x 00:05:12.969 00:39:25 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:12.970 00:39:25 -- rpc/rpc.sh@20 -- # bdevs='[ 00:05:12.970 { 00:05:12.970 "aliases": [ 00:05:12.970 "028379b5-c1dc-441a-8715-99566ddab9c2" 00:05:12.970 ], 00:05:12.970 "assigned_rate_limits": { 00:05:12.970 "r_mbytes_per_sec": 0, 00:05:12.970 "rw_ios_per_sec": 0, 00:05:12.970 "rw_mbytes_per_sec": 0, 00:05:12.970 "w_mbytes_per_sec": 0 00:05:12.970 }, 00:05:12.970 "block_size": 512, 00:05:12.970 "claim_type": "exclusive_write", 00:05:12.970 "claimed": true, 00:05:12.970 "driver_specific": {}, 00:05:12.970 "memory_domains": [ 00:05:12.970 { 00:05:12.970 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:12.970 "dma_device_type": 2 00:05:12.970 } 00:05:12.970 ], 00:05:12.970 "name": "Malloc0", 00:05:12.970 "num_blocks": 16384, 00:05:12.970 "product_name": "Malloc disk", 00:05:12.970 "supported_io_types": { 00:05:12.970 "abort": true, 00:05:12.970 "compare": false, 00:05:12.970 "compare_and_write": false, 00:05:12.970 "flush": true, 00:05:12.970 "nvme_admin": false, 00:05:12.970 "nvme_io": false, 00:05:12.970 "read": true, 00:05:12.970 "reset": true, 00:05:12.970 "unmap": true, 00:05:12.970 "write": true, 00:05:12.970 "write_zeroes": true 00:05:12.970 }, 00:05:12.970 "uuid": "028379b5-c1dc-441a-8715-99566ddab9c2", 00:05:12.970 "zoned": false 00:05:12.970 }, 00:05:12.970 { 00:05:12.970 "aliases": [ 00:05:12.970 "733cee13-8004-55b6-b7b6-64c85d5e144e" 00:05:12.970 ], 00:05:12.970 "assigned_rate_limits": { 00:05:12.970 "r_mbytes_per_sec": 0, 00:05:12.970 "rw_ios_per_sec": 0, 00:05:12.970 "rw_mbytes_per_sec": 0, 00:05:12.970 "w_mbytes_per_sec": 0 00:05:12.970 }, 00:05:12.970 "block_size": 512, 00:05:12.970 "claimed": false, 00:05:12.970 "driver_specific": { 00:05:12.970 "passthru": { 00:05:12.970 "base_bdev_name": "Malloc0", 00:05:12.970 "name": "Passthru0" 00:05:12.970 } 00:05:12.970 }, 00:05:12.970 "memory_domains": [ 00:05:12.970 { 00:05:12.970 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:12.970 "dma_device_type": 2 00:05:12.970 } 00:05:12.970 ], 
00:05:12.970 "name": "Passthru0", 00:05:12.970 "num_blocks": 16384, 00:05:12.970 "product_name": "passthru", 00:05:12.970 "supported_io_types": { 00:05:12.970 "abort": true, 00:05:12.970 "compare": false, 00:05:12.970 "compare_and_write": false, 00:05:12.970 "flush": true, 00:05:12.970 "nvme_admin": false, 00:05:12.970 "nvme_io": false, 00:05:12.970 "read": true, 00:05:12.970 "reset": true, 00:05:12.970 "unmap": true, 00:05:12.970 "write": true, 00:05:12.970 "write_zeroes": true 00:05:12.970 }, 00:05:12.970 "uuid": "733cee13-8004-55b6-b7b6-64c85d5e144e", 00:05:12.970 "zoned": false 00:05:12.970 } 00:05:12.970 ]' 00:05:12.970 00:39:25 -- rpc/rpc.sh@21 -- # jq length 00:05:12.970 00:39:25 -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:05:12.970 00:39:25 -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:05:12.970 00:39:25 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:12.970 00:39:25 -- common/autotest_common.sh@10 -- # set +x 00:05:12.970 00:39:25 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:12.970 00:39:25 -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:05:12.970 00:39:25 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:12.970 00:39:25 -- common/autotest_common.sh@10 -- # set +x 00:05:12.970 00:39:25 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:12.970 00:39:25 -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:05:12.970 00:39:25 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:12.970 00:39:25 -- common/autotest_common.sh@10 -- # set +x 00:05:12.970 00:39:25 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:12.970 00:39:25 -- rpc/rpc.sh@25 -- # bdevs='[]' 00:05:12.970 00:39:25 -- rpc/rpc.sh@26 -- # jq length 00:05:12.970 00:39:25 -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:05:12.970 00:05:12.970 real 0m0.316s 00:05:12.970 user 0m0.195s 00:05:12.970 sys 0m0.044s 00:05:12.970 00:39:25 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:12.970 ************************************ 00:05:12.970 00:39:25 -- common/autotest_common.sh@10 -- # set +x 00:05:12.970 END TEST rpc_integrity 00:05:12.970 ************************************ 00:05:13.229 00:39:25 -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:05:13.229 00:39:25 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:13.229 00:39:25 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:13.229 00:39:25 -- common/autotest_common.sh@10 -- # set +x 00:05:13.229 ************************************ 00:05:13.229 START TEST rpc_plugins 00:05:13.229 ************************************ 00:05:13.229 00:39:25 -- common/autotest_common.sh@1114 -- # rpc_plugins 00:05:13.229 00:39:25 -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:05:13.229 00:39:25 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:13.229 00:39:25 -- common/autotest_common.sh@10 -- # set +x 00:05:13.229 00:39:25 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:13.229 00:39:25 -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:05:13.229 00:39:25 -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:05:13.229 00:39:25 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:13.229 00:39:25 -- common/autotest_common.sh@10 -- # set +x 00:05:13.229 00:39:25 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:13.229 00:39:25 -- rpc/rpc.sh@31 -- # bdevs='[ 00:05:13.229 { 00:05:13.229 "aliases": [ 00:05:13.229 "c449ff17-de0a-4153-aa1a-2e6da2ef4e33" 00:05:13.229 ], 00:05:13.229 "assigned_rate_limits": { 00:05:13.229 "r_mbytes_per_sec": 0, 00:05:13.229 
"rw_ios_per_sec": 0, 00:05:13.229 "rw_mbytes_per_sec": 0, 00:05:13.229 "w_mbytes_per_sec": 0 00:05:13.229 }, 00:05:13.229 "block_size": 4096, 00:05:13.229 "claimed": false, 00:05:13.229 "driver_specific": {}, 00:05:13.229 "memory_domains": [ 00:05:13.229 { 00:05:13.229 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:13.229 "dma_device_type": 2 00:05:13.229 } 00:05:13.229 ], 00:05:13.229 "name": "Malloc1", 00:05:13.229 "num_blocks": 256, 00:05:13.229 "product_name": "Malloc disk", 00:05:13.229 "supported_io_types": { 00:05:13.229 "abort": true, 00:05:13.229 "compare": false, 00:05:13.229 "compare_and_write": false, 00:05:13.229 "flush": true, 00:05:13.229 "nvme_admin": false, 00:05:13.229 "nvme_io": false, 00:05:13.229 "read": true, 00:05:13.229 "reset": true, 00:05:13.229 "unmap": true, 00:05:13.229 "write": true, 00:05:13.229 "write_zeroes": true 00:05:13.229 }, 00:05:13.229 "uuid": "c449ff17-de0a-4153-aa1a-2e6da2ef4e33", 00:05:13.229 "zoned": false 00:05:13.229 } 00:05:13.229 ]' 00:05:13.229 00:39:25 -- rpc/rpc.sh@32 -- # jq length 00:05:13.229 00:39:25 -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:05:13.229 00:39:25 -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:05:13.229 00:39:25 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:13.229 00:39:25 -- common/autotest_common.sh@10 -- # set +x 00:05:13.229 00:39:25 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:13.229 00:39:25 -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:05:13.229 00:39:25 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:13.229 00:39:25 -- common/autotest_common.sh@10 -- # set +x 00:05:13.229 00:39:25 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:13.229 00:39:25 -- rpc/rpc.sh@35 -- # bdevs='[]' 00:05:13.229 00:39:25 -- rpc/rpc.sh@36 -- # jq length 00:05:13.229 00:39:25 -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:05:13.229 00:05:13.229 real 0m0.159s 00:05:13.229 user 0m0.102s 00:05:13.229 sys 0m0.020s 00:05:13.229 00:39:25 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:13.229 ************************************ 00:05:13.229 00:39:25 -- common/autotest_common.sh@10 -- # set +x 00:05:13.229 END TEST rpc_plugins 00:05:13.229 ************************************ 00:05:13.229 00:39:25 -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:05:13.229 00:39:25 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:13.229 00:39:25 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:13.229 00:39:25 -- common/autotest_common.sh@10 -- # set +x 00:05:13.229 ************************************ 00:05:13.229 START TEST rpc_trace_cmd_test 00:05:13.229 ************************************ 00:05:13.229 00:39:25 -- common/autotest_common.sh@1114 -- # rpc_trace_cmd_test 00:05:13.229 00:39:25 -- rpc/rpc.sh@40 -- # local info 00:05:13.229 00:39:25 -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:05:13.229 00:39:25 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:13.229 00:39:25 -- common/autotest_common.sh@10 -- # set +x 00:05:13.488 00:39:25 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:13.488 00:39:25 -- rpc/rpc.sh@42 -- # info='{ 00:05:13.488 "bdev": { 00:05:13.488 "mask": "0x8", 00:05:13.488 "tpoint_mask": "0xffffffffffffffff" 00:05:13.488 }, 00:05:13.488 "bdev_nvme": { 00:05:13.488 "mask": "0x4000", 00:05:13.488 "tpoint_mask": "0x0" 00:05:13.488 }, 00:05:13.488 "blobfs": { 00:05:13.488 "mask": "0x80", 00:05:13.488 "tpoint_mask": "0x0" 00:05:13.488 }, 00:05:13.488 "dsa": { 00:05:13.488 "mask": "0x200", 00:05:13.488 
"tpoint_mask": "0x0" 00:05:13.488 }, 00:05:13.488 "ftl": { 00:05:13.488 "mask": "0x40", 00:05:13.488 "tpoint_mask": "0x0" 00:05:13.488 }, 00:05:13.488 "iaa": { 00:05:13.488 "mask": "0x1000", 00:05:13.488 "tpoint_mask": "0x0" 00:05:13.488 }, 00:05:13.488 "iscsi_conn": { 00:05:13.488 "mask": "0x2", 00:05:13.488 "tpoint_mask": "0x0" 00:05:13.488 }, 00:05:13.488 "nvme_pcie": { 00:05:13.488 "mask": "0x800", 00:05:13.488 "tpoint_mask": "0x0" 00:05:13.488 }, 00:05:13.488 "nvme_tcp": { 00:05:13.488 "mask": "0x2000", 00:05:13.488 "tpoint_mask": "0x0" 00:05:13.488 }, 00:05:13.488 "nvmf_rdma": { 00:05:13.488 "mask": "0x10", 00:05:13.488 "tpoint_mask": "0x0" 00:05:13.488 }, 00:05:13.488 "nvmf_tcp": { 00:05:13.488 "mask": "0x20", 00:05:13.488 "tpoint_mask": "0x0" 00:05:13.488 }, 00:05:13.488 "scsi": { 00:05:13.488 "mask": "0x4", 00:05:13.488 "tpoint_mask": "0x0" 00:05:13.488 }, 00:05:13.488 "thread": { 00:05:13.488 "mask": "0x400", 00:05:13.488 "tpoint_mask": "0x0" 00:05:13.488 }, 00:05:13.488 "tpoint_group_mask": "0x8", 00:05:13.488 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid67583" 00:05:13.488 }' 00:05:13.488 00:39:25 -- rpc/rpc.sh@43 -- # jq length 00:05:13.488 00:39:25 -- rpc/rpc.sh@43 -- # '[' 15 -gt 2 ']' 00:05:13.488 00:39:25 -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:05:13.488 00:39:25 -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:05:13.488 00:39:25 -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:05:13.488 00:39:25 -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:05:13.488 00:39:25 -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:05:13.488 00:39:25 -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:05:13.488 00:39:25 -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:05:13.747 00:39:26 -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:05:13.747 00:05:13.747 real 0m0.283s 00:05:13.747 user 0m0.243s 00:05:13.747 sys 0m0.029s 00:05:13.747 00:39:26 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:13.747 ************************************ 00:05:13.747 END TEST rpc_trace_cmd_test 00:05:13.747 00:39:26 -- common/autotest_common.sh@10 -- # set +x 00:05:13.747 ************************************ 00:05:13.747 00:39:26 -- rpc/rpc.sh@76 -- # [[ 1 -eq 1 ]] 00:05:13.747 00:39:26 -- rpc/rpc.sh@77 -- # run_test go_rpc go_rpc 00:05:13.747 00:39:26 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:13.747 00:39:26 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:13.747 00:39:26 -- common/autotest_common.sh@10 -- # set +x 00:05:13.747 ************************************ 00:05:13.747 START TEST go_rpc 00:05:13.747 ************************************ 00:05:13.747 00:39:26 -- common/autotest_common.sh@1114 -- # go_rpc 00:05:13.747 00:39:26 -- rpc/rpc.sh@51 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_gorpc 00:05:13.747 00:39:26 -- rpc/rpc.sh@51 -- # bdevs='[]' 00:05:13.747 00:39:26 -- rpc/rpc.sh@52 -- # jq length 00:05:13.747 00:39:26 -- rpc/rpc.sh@52 -- # '[' 0 == 0 ']' 00:05:13.747 00:39:26 -- rpc/rpc.sh@54 -- # rpc_cmd bdev_malloc_create 8 512 00:05:13.747 00:39:26 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:13.747 00:39:26 -- common/autotest_common.sh@10 -- # set +x 00:05:13.747 00:39:26 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:13.747 00:39:26 -- rpc/rpc.sh@54 -- # malloc=Malloc2 00:05:13.747 00:39:26 -- rpc/rpc.sh@56 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_gorpc 00:05:13.747 00:39:26 -- rpc/rpc.sh@56 -- # 
bdevs='[{"aliases":["41e1c6b0-b1a9-47f9-a207-bc299c32372d"],"assigned_rate_limits":{"r_mbytes_per_sec":0,"rw_ios_per_sec":0,"rw_mbytes_per_sec":0,"w_mbytes_per_sec":0},"block_size":512,"claimed":false,"driver_specific":{},"memory_domains":[{"dma_device_id":"SPDK_ACCEL_DMA_DEVICE","dma_device_type":2}],"name":"Malloc2","num_blocks":16384,"product_name":"Malloc disk","supported_io_types":{"abort":true,"compare":false,"compare_and_write":false,"flush":true,"nvme_admin":false,"nvme_io":false,"read":true,"reset":true,"unmap":true,"write":true,"write_zeroes":true},"uuid":"41e1c6b0-b1a9-47f9-a207-bc299c32372d","zoned":false}]' 00:05:13.747 00:39:26 -- rpc/rpc.sh@57 -- # jq length 00:05:13.747 00:39:26 -- rpc/rpc.sh@57 -- # '[' 1 == 1 ']' 00:05:13.747 00:39:26 -- rpc/rpc.sh@59 -- # rpc_cmd bdev_malloc_delete Malloc2 00:05:13.747 00:39:26 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:13.747 00:39:26 -- common/autotest_common.sh@10 -- # set +x 00:05:13.747 00:39:26 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:13.747 00:39:26 -- rpc/rpc.sh@60 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_gorpc 00:05:13.747 00:39:26 -- rpc/rpc.sh@60 -- # bdevs='[]' 00:05:13.747 00:39:26 -- rpc/rpc.sh@61 -- # jq length 00:05:14.007 00:39:26 -- rpc/rpc.sh@61 -- # '[' 0 == 0 ']' 00:05:14.007 00:05:14.007 real 0m0.231s 00:05:14.007 user 0m0.172s 00:05:14.007 sys 0m0.027s 00:05:14.007 00:39:26 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:14.007 ************************************ 00:05:14.007 00:39:26 -- common/autotest_common.sh@10 -- # set +x 00:05:14.007 END TEST go_rpc 00:05:14.007 ************************************ 00:05:14.007 00:39:26 -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:05:14.007 00:39:26 -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:05:14.007 00:39:26 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:14.007 00:39:26 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:14.007 00:39:26 -- common/autotest_common.sh@10 -- # set +x 00:05:14.007 ************************************ 00:05:14.007 START TEST rpc_daemon_integrity 00:05:14.007 ************************************ 00:05:14.007 00:39:26 -- common/autotest_common.sh@1114 -- # rpc_integrity 00:05:14.007 00:39:26 -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:05:14.007 00:39:26 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:14.007 00:39:26 -- common/autotest_common.sh@10 -- # set +x 00:05:14.007 00:39:26 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:14.007 00:39:26 -- rpc/rpc.sh@12 -- # bdevs='[]' 00:05:14.007 00:39:26 -- rpc/rpc.sh@13 -- # jq length 00:05:14.007 00:39:26 -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:05:14.007 00:39:26 -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:05:14.007 00:39:26 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:14.007 00:39:26 -- common/autotest_common.sh@10 -- # set +x 00:05:14.007 00:39:26 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:14.007 00:39:26 -- rpc/rpc.sh@15 -- # malloc=Malloc3 00:05:14.007 00:39:26 -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:05:14.007 00:39:26 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:14.007 00:39:26 -- common/autotest_common.sh@10 -- # set +x 00:05:14.007 00:39:26 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:14.007 00:39:26 -- rpc/rpc.sh@16 -- # bdevs='[ 00:05:14.007 { 00:05:14.007 "aliases": [ 00:05:14.007 "a6332331-a040-49bf-945e-8663ab24d43e" 00:05:14.007 ], 00:05:14.007 "assigned_rate_limits": { 00:05:14.007 
"r_mbytes_per_sec": 0, 00:05:14.007 "rw_ios_per_sec": 0, 00:05:14.007 "rw_mbytes_per_sec": 0, 00:05:14.007 "w_mbytes_per_sec": 0 00:05:14.007 }, 00:05:14.007 "block_size": 512, 00:05:14.007 "claimed": false, 00:05:14.007 "driver_specific": {}, 00:05:14.007 "memory_domains": [ 00:05:14.007 { 00:05:14.007 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:14.007 "dma_device_type": 2 00:05:14.007 } 00:05:14.007 ], 00:05:14.007 "name": "Malloc3", 00:05:14.007 "num_blocks": 16384, 00:05:14.007 "product_name": "Malloc disk", 00:05:14.007 "supported_io_types": { 00:05:14.007 "abort": true, 00:05:14.007 "compare": false, 00:05:14.007 "compare_and_write": false, 00:05:14.007 "flush": true, 00:05:14.007 "nvme_admin": false, 00:05:14.007 "nvme_io": false, 00:05:14.007 "read": true, 00:05:14.007 "reset": true, 00:05:14.007 "unmap": true, 00:05:14.007 "write": true, 00:05:14.007 "write_zeroes": true 00:05:14.007 }, 00:05:14.007 "uuid": "a6332331-a040-49bf-945e-8663ab24d43e", 00:05:14.007 "zoned": false 00:05:14.007 } 00:05:14.007 ]' 00:05:14.007 00:39:26 -- rpc/rpc.sh@17 -- # jq length 00:05:14.007 00:39:26 -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:05:14.007 00:39:26 -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc3 -p Passthru0 00:05:14.007 00:39:26 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:14.007 00:39:26 -- common/autotest_common.sh@10 -- # set +x 00:05:14.007 [2024-12-03 00:39:26.508857] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:05:14.007 [2024-12-03 00:39:26.508908] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:05:14.007 [2024-12-03 00:39:26.508923] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1381990 00:05:14.007 [2024-12-03 00:39:26.508931] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:05:14.007 [2024-12-03 00:39:26.510108] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:05:14.007 [2024-12-03 00:39:26.510126] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:05:14.007 Passthru0 00:05:14.007 00:39:26 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:14.007 00:39:26 -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:05:14.007 00:39:26 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:14.007 00:39:26 -- common/autotest_common.sh@10 -- # set +x 00:05:14.267 00:39:26 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:14.267 00:39:26 -- rpc/rpc.sh@20 -- # bdevs='[ 00:05:14.267 { 00:05:14.267 "aliases": [ 00:05:14.267 "a6332331-a040-49bf-945e-8663ab24d43e" 00:05:14.267 ], 00:05:14.267 "assigned_rate_limits": { 00:05:14.267 "r_mbytes_per_sec": 0, 00:05:14.267 "rw_ios_per_sec": 0, 00:05:14.267 "rw_mbytes_per_sec": 0, 00:05:14.267 "w_mbytes_per_sec": 0 00:05:14.267 }, 00:05:14.267 "block_size": 512, 00:05:14.267 "claim_type": "exclusive_write", 00:05:14.267 "claimed": true, 00:05:14.267 "driver_specific": {}, 00:05:14.267 "memory_domains": [ 00:05:14.267 { 00:05:14.267 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:14.267 "dma_device_type": 2 00:05:14.267 } 00:05:14.267 ], 00:05:14.267 "name": "Malloc3", 00:05:14.267 "num_blocks": 16384, 00:05:14.267 "product_name": "Malloc disk", 00:05:14.267 "supported_io_types": { 00:05:14.267 "abort": true, 00:05:14.267 "compare": false, 00:05:14.267 "compare_and_write": false, 00:05:14.267 "flush": true, 00:05:14.267 "nvme_admin": false, 00:05:14.267 "nvme_io": false, 00:05:14.267 "read": true, 00:05:14.267 "reset": true, 
00:05:14.267 "unmap": true, 00:05:14.267 "write": true, 00:05:14.267 "write_zeroes": true 00:05:14.267 }, 00:05:14.267 "uuid": "a6332331-a040-49bf-945e-8663ab24d43e", 00:05:14.267 "zoned": false 00:05:14.267 }, 00:05:14.267 { 00:05:14.267 "aliases": [ 00:05:14.267 "7fe56114-e978-5244-bc89-fd4076e5fab2" 00:05:14.267 ], 00:05:14.267 "assigned_rate_limits": { 00:05:14.267 "r_mbytes_per_sec": 0, 00:05:14.267 "rw_ios_per_sec": 0, 00:05:14.267 "rw_mbytes_per_sec": 0, 00:05:14.267 "w_mbytes_per_sec": 0 00:05:14.267 }, 00:05:14.267 "block_size": 512, 00:05:14.267 "claimed": false, 00:05:14.267 "driver_specific": { 00:05:14.267 "passthru": { 00:05:14.267 "base_bdev_name": "Malloc3", 00:05:14.267 "name": "Passthru0" 00:05:14.267 } 00:05:14.267 }, 00:05:14.267 "memory_domains": [ 00:05:14.267 { 00:05:14.267 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:14.267 "dma_device_type": 2 00:05:14.267 } 00:05:14.267 ], 00:05:14.267 "name": "Passthru0", 00:05:14.267 "num_blocks": 16384, 00:05:14.267 "product_name": "passthru", 00:05:14.267 "supported_io_types": { 00:05:14.267 "abort": true, 00:05:14.267 "compare": false, 00:05:14.267 "compare_and_write": false, 00:05:14.267 "flush": true, 00:05:14.267 "nvme_admin": false, 00:05:14.267 "nvme_io": false, 00:05:14.267 "read": true, 00:05:14.267 "reset": true, 00:05:14.267 "unmap": true, 00:05:14.267 "write": true, 00:05:14.267 "write_zeroes": true 00:05:14.267 }, 00:05:14.267 "uuid": "7fe56114-e978-5244-bc89-fd4076e5fab2", 00:05:14.267 "zoned": false 00:05:14.267 } 00:05:14.267 ]' 00:05:14.267 00:39:26 -- rpc/rpc.sh@21 -- # jq length 00:05:14.267 00:39:26 -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:05:14.267 00:39:26 -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:05:14.267 00:39:26 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:14.267 00:39:26 -- common/autotest_common.sh@10 -- # set +x 00:05:14.267 00:39:26 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:14.267 00:39:26 -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc3 00:05:14.267 00:39:26 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:14.267 00:39:26 -- common/autotest_common.sh@10 -- # set +x 00:05:14.267 00:39:26 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:14.267 00:39:26 -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:05:14.268 00:39:26 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:14.268 00:39:26 -- common/autotest_common.sh@10 -- # set +x 00:05:14.268 00:39:26 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:14.268 00:39:26 -- rpc/rpc.sh@25 -- # bdevs='[]' 00:05:14.268 00:39:26 -- rpc/rpc.sh@26 -- # jq length 00:05:14.268 00:39:26 -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:05:14.268 00:05:14.268 real 0m0.308s 00:05:14.268 user 0m0.214s 00:05:14.268 sys 0m0.036s 00:05:14.268 00:39:26 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:14.268 ************************************ 00:05:14.268 END TEST rpc_daemon_integrity 00:05:14.268 00:39:26 -- common/autotest_common.sh@10 -- # set +x 00:05:14.268 ************************************ 00:05:14.268 00:39:26 -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:05:14.268 00:39:26 -- rpc/rpc.sh@84 -- # killprocess 67583 00:05:14.268 00:39:26 -- common/autotest_common.sh@936 -- # '[' -z 67583 ']' 00:05:14.268 00:39:26 -- common/autotest_common.sh@940 -- # kill -0 67583 00:05:14.268 00:39:26 -- common/autotest_common.sh@941 -- # uname 00:05:14.268 00:39:26 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:05:14.268 00:39:26 -- common/autotest_common.sh@942 -- 
# ps --no-headers -o comm= 67583 00:05:14.268 00:39:26 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:05:14.268 00:39:26 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:05:14.268 killing process with pid 67583 00:05:14.268 00:39:26 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 67583' 00:05:14.268 00:39:26 -- common/autotest_common.sh@955 -- # kill 67583 00:05:14.268 00:39:26 -- common/autotest_common.sh@960 -- # wait 67583 00:05:14.837 00:05:14.837 real 0m3.224s 00:05:14.837 user 0m4.258s 00:05:14.837 sys 0m0.786s 00:05:14.837 00:39:27 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:14.837 ************************************ 00:05:14.837 END TEST rpc 00:05:14.837 ************************************ 00:05:14.837 00:39:27 -- common/autotest_common.sh@10 -- # set +x 00:05:14.837 00:39:27 -- spdk/autotest.sh@164 -- # run_test rpc_client /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:05:14.837 00:39:27 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:14.837 00:39:27 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:14.837 00:39:27 -- common/autotest_common.sh@10 -- # set +x 00:05:14.837 ************************************ 00:05:14.837 START TEST rpc_client 00:05:14.837 ************************************ 00:05:14.837 00:39:27 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:05:14.837 * Looking for test storage... 00:05:14.837 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc_client 00:05:14.837 00:39:27 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:05:14.837 00:39:27 -- common/autotest_common.sh@1690 -- # lcov --version 00:05:14.837 00:39:27 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:05:14.837 00:39:27 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:05:14.837 00:39:27 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:05:14.837 00:39:27 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:05:14.837 00:39:27 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:05:14.837 00:39:27 -- scripts/common.sh@335 -- # IFS=.-: 00:05:14.837 00:39:27 -- scripts/common.sh@335 -- # read -ra ver1 00:05:14.837 00:39:27 -- scripts/common.sh@336 -- # IFS=.-: 00:05:14.837 00:39:27 -- scripts/common.sh@336 -- # read -ra ver2 00:05:14.837 00:39:27 -- scripts/common.sh@337 -- # local 'op=<' 00:05:14.837 00:39:27 -- scripts/common.sh@339 -- # ver1_l=2 00:05:14.837 00:39:27 -- scripts/common.sh@340 -- # ver2_l=1 00:05:14.837 00:39:27 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:05:14.837 00:39:27 -- scripts/common.sh@343 -- # case "$op" in 00:05:14.837 00:39:27 -- scripts/common.sh@344 -- # : 1 00:05:14.837 00:39:27 -- scripts/common.sh@363 -- # (( v = 0 )) 00:05:14.837 00:39:27 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:14.837 00:39:27 -- scripts/common.sh@364 -- # decimal 1 00:05:14.837 00:39:27 -- scripts/common.sh@352 -- # local d=1 00:05:14.837 00:39:27 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:14.837 00:39:27 -- scripts/common.sh@354 -- # echo 1 00:05:14.837 00:39:27 -- scripts/common.sh@364 -- # ver1[v]=1 00:05:14.837 00:39:27 -- scripts/common.sh@365 -- # decimal 2 00:05:14.837 00:39:27 -- scripts/common.sh@352 -- # local d=2 00:05:14.837 00:39:27 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:14.837 00:39:27 -- scripts/common.sh@354 -- # echo 2 00:05:14.837 00:39:27 -- scripts/common.sh@365 -- # ver2[v]=2 00:05:14.837 00:39:27 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:05:14.837 00:39:27 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:05:14.837 00:39:27 -- scripts/common.sh@367 -- # return 0 00:05:14.837 00:39:27 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:14.837 00:39:27 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:05:14.837 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:14.837 --rc genhtml_branch_coverage=1 00:05:14.837 --rc genhtml_function_coverage=1 00:05:14.837 --rc genhtml_legend=1 00:05:14.837 --rc geninfo_all_blocks=1 00:05:14.837 --rc geninfo_unexecuted_blocks=1 00:05:14.837 00:05:14.837 ' 00:05:14.837 00:39:27 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:05:14.837 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:14.837 --rc genhtml_branch_coverage=1 00:05:14.837 --rc genhtml_function_coverage=1 00:05:14.837 --rc genhtml_legend=1 00:05:14.837 --rc geninfo_all_blocks=1 00:05:14.837 --rc geninfo_unexecuted_blocks=1 00:05:14.837 00:05:14.837 ' 00:05:14.837 00:39:27 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:05:14.837 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:14.837 --rc genhtml_branch_coverage=1 00:05:14.837 --rc genhtml_function_coverage=1 00:05:14.837 --rc genhtml_legend=1 00:05:14.837 --rc geninfo_all_blocks=1 00:05:14.837 --rc geninfo_unexecuted_blocks=1 00:05:14.837 00:05:14.837 ' 00:05:14.837 00:39:27 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:05:14.837 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:14.837 --rc genhtml_branch_coverage=1 00:05:14.837 --rc genhtml_function_coverage=1 00:05:14.837 --rc genhtml_legend=1 00:05:14.837 --rc geninfo_all_blocks=1 00:05:14.837 --rc geninfo_unexecuted_blocks=1 00:05:14.837 00:05:14.837 ' 00:05:14.837 00:39:27 -- rpc_client/rpc_client.sh@10 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client_test 00:05:14.837 OK 00:05:14.837 00:39:27 -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:05:14.837 00:05:14.837 real 0m0.204s 00:05:14.837 user 0m0.121s 00:05:14.837 sys 0m0.096s 00:05:14.837 00:39:27 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:14.837 ************************************ 00:05:14.837 END TEST rpc_client 00:05:14.837 ************************************ 00:05:14.837 00:39:27 -- common/autotest_common.sh@10 -- # set +x 00:05:15.097 00:39:27 -- spdk/autotest.sh@165 -- # run_test json_config /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:05:15.097 00:39:27 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:15.097 00:39:27 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:15.097 00:39:27 -- common/autotest_common.sh@10 -- # set +x 00:05:15.097 ************************************ 00:05:15.097 START TEST 
json_config 00:05:15.097 ************************************ 00:05:15.097 00:39:27 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:05:15.097 00:39:27 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:05:15.097 00:39:27 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:05:15.097 00:39:27 -- common/autotest_common.sh@1690 -- # lcov --version 00:05:15.097 00:39:27 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:05:15.097 00:39:27 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:05:15.097 00:39:27 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:05:15.097 00:39:27 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:05:15.097 00:39:27 -- scripts/common.sh@335 -- # IFS=.-: 00:05:15.097 00:39:27 -- scripts/common.sh@335 -- # read -ra ver1 00:05:15.097 00:39:27 -- scripts/common.sh@336 -- # IFS=.-: 00:05:15.097 00:39:27 -- scripts/common.sh@336 -- # read -ra ver2 00:05:15.097 00:39:27 -- scripts/common.sh@337 -- # local 'op=<' 00:05:15.097 00:39:27 -- scripts/common.sh@339 -- # ver1_l=2 00:05:15.097 00:39:27 -- scripts/common.sh@340 -- # ver2_l=1 00:05:15.097 00:39:27 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:05:15.097 00:39:27 -- scripts/common.sh@343 -- # case "$op" in 00:05:15.097 00:39:27 -- scripts/common.sh@344 -- # : 1 00:05:15.097 00:39:27 -- scripts/common.sh@363 -- # (( v = 0 )) 00:05:15.097 00:39:27 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:15.097 00:39:27 -- scripts/common.sh@364 -- # decimal 1 00:05:15.097 00:39:27 -- scripts/common.sh@352 -- # local d=1 00:05:15.097 00:39:27 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:15.097 00:39:27 -- scripts/common.sh@354 -- # echo 1 00:05:15.097 00:39:27 -- scripts/common.sh@364 -- # ver1[v]=1 00:05:15.097 00:39:27 -- scripts/common.sh@365 -- # decimal 2 00:05:15.097 00:39:27 -- scripts/common.sh@352 -- # local d=2 00:05:15.097 00:39:27 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:15.097 00:39:27 -- scripts/common.sh@354 -- # echo 2 00:05:15.097 00:39:27 -- scripts/common.sh@365 -- # ver2[v]=2 00:05:15.097 00:39:27 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:05:15.097 00:39:27 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:05:15.097 00:39:27 -- scripts/common.sh@367 -- # return 0 00:05:15.097 00:39:27 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:15.097 00:39:27 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:05:15.097 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:15.097 --rc genhtml_branch_coverage=1 00:05:15.097 --rc genhtml_function_coverage=1 00:05:15.097 --rc genhtml_legend=1 00:05:15.097 --rc geninfo_all_blocks=1 00:05:15.097 --rc geninfo_unexecuted_blocks=1 00:05:15.097 00:05:15.097 ' 00:05:15.097 00:39:27 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:05:15.097 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:15.097 --rc genhtml_branch_coverage=1 00:05:15.097 --rc genhtml_function_coverage=1 00:05:15.097 --rc genhtml_legend=1 00:05:15.097 --rc geninfo_all_blocks=1 00:05:15.097 --rc geninfo_unexecuted_blocks=1 00:05:15.097 00:05:15.097 ' 00:05:15.097 00:39:27 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:05:15.097 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:15.097 --rc genhtml_branch_coverage=1 00:05:15.097 --rc genhtml_function_coverage=1 00:05:15.097 --rc genhtml_legend=1 00:05:15.097 --rc 
geninfo_all_blocks=1 00:05:15.097 --rc geninfo_unexecuted_blocks=1 00:05:15.097 00:05:15.097 ' 00:05:15.097 00:39:27 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:05:15.097 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:15.098 --rc genhtml_branch_coverage=1 00:05:15.098 --rc genhtml_function_coverage=1 00:05:15.098 --rc genhtml_legend=1 00:05:15.098 --rc geninfo_all_blocks=1 00:05:15.098 --rc geninfo_unexecuted_blocks=1 00:05:15.098 00:05:15.098 ' 00:05:15.098 00:39:27 -- json_config/json_config.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:05:15.098 00:39:27 -- nvmf/common.sh@7 -- # uname -s 00:05:15.098 00:39:27 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:15.098 00:39:27 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:15.098 00:39:27 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:15.098 00:39:27 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:15.098 00:39:27 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:15.098 00:39:27 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:15.098 00:39:27 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:15.098 00:39:27 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:15.098 00:39:27 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:15.098 00:39:27 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:15.098 00:39:27 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:15939434-fa82-47b6-ae7c-b62ba203cee8 00:05:15.098 00:39:27 -- nvmf/common.sh@18 -- # NVME_HOSTID=15939434-fa82-47b6-ae7c-b62ba203cee8 00:05:15.098 00:39:27 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:15.098 00:39:27 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:15.098 00:39:27 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:05:15.098 00:39:27 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:05:15.098 00:39:27 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:15.098 00:39:27 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:15.098 00:39:27 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:15.098 00:39:27 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:15.098 00:39:27 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:15.098 00:39:27 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:15.098 
00:39:27 -- paths/export.sh@5 -- # export PATH 00:05:15.098 00:39:27 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:15.098 00:39:27 -- nvmf/common.sh@46 -- # : 0 00:05:15.098 00:39:27 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:05:15.098 00:39:27 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:05:15.098 00:39:27 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:05:15.098 00:39:27 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:15.098 00:39:27 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:15.098 00:39:27 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:05:15.098 00:39:27 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:05:15.098 00:39:27 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:05:15.098 00:39:27 -- json_config/json_config.sh@10 -- # [[ 0 -eq 1 ]] 00:05:15.098 00:39:27 -- json_config/json_config.sh@14 -- # [[ 0 -ne 1 ]] 00:05:15.098 00:39:27 -- json_config/json_config.sh@14 -- # [[ 0 -eq 1 ]] 00:05:15.098 00:39:27 -- json_config/json_config.sh@25 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:05:15.098 00:39:27 -- json_config/json_config.sh@30 -- # app_pid=(['target']='' ['initiator']='') 00:05:15.098 00:39:27 -- json_config/json_config.sh@30 -- # declare -A app_pid 00:05:15.098 00:39:27 -- json_config/json_config.sh@31 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock' ['initiator']='/var/tmp/spdk_initiator.sock') 00:05:15.098 00:39:27 -- json_config/json_config.sh@31 -- # declare -A app_socket 00:05:15.098 00:39:27 -- json_config/json_config.sh@32 -- # app_params=(['target']='-m 0x1 -s 1024' ['initiator']='-m 0x2 -g -u -s 1024') 00:05:15.098 00:39:27 -- json_config/json_config.sh@32 -- # declare -A app_params 00:05:15.098 00:39:27 -- json_config/json_config.sh@33 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/spdk_tgt_config.json' ['initiator']='/home/vagrant/spdk_repo/spdk/spdk_initiator_config.json') 00:05:15.098 00:39:27 -- json_config/json_config.sh@33 -- # declare -A configs_path 00:05:15.098 00:39:27 -- json_config/json_config.sh@43 -- # last_event_id=0 00:05:15.098 00:39:27 -- json_config/json_config.sh@418 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:05:15.098 INFO: JSON configuration test init 00:05:15.098 00:39:27 -- json_config/json_config.sh@419 -- # echo 'INFO: JSON configuration test init' 00:05:15.098 00:39:27 -- json_config/json_config.sh@420 -- # json_config_test_init 00:05:15.098 00:39:27 -- json_config/json_config.sh@315 -- # timing_enter json_config_test_init 00:05:15.098 00:39:27 -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:15.098 00:39:27 -- common/autotest_common.sh@10 -- # set +x 00:05:15.098 00:39:27 -- json_config/json_config.sh@316 -- # timing_enter json_config_setup_target 00:05:15.098 00:39:27 -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:15.098 00:39:27 -- common/autotest_common.sh@10 -- # set +x 00:05:15.098 00:39:27 -- json_config/json_config.sh@318 -- # json_config_test_start_app target --wait-for-rpc 00:05:15.098 00:39:27 -- json_config/json_config.sh@98 -- # local app=target 00:05:15.098 
00:39:27 -- json_config/json_config.sh@99 -- # shift 00:05:15.098 00:39:27 -- json_config/json_config.sh@101 -- # [[ -n 22 ]] 00:05:15.098 00:39:27 -- json_config/json_config.sh@102 -- # [[ -z '' ]] 00:05:15.098 00:39:27 -- json_config/json_config.sh@104 -- # local app_extra_params= 00:05:15.098 00:39:27 -- json_config/json_config.sh@105 -- # [[ 0 -eq 1 ]] 00:05:15.098 00:39:27 -- json_config/json_config.sh@105 -- # [[ 0 -eq 1 ]] 00:05:15.098 00:39:27 -- json_config/json_config.sh@111 -- # app_pid[$app]=67906 00:05:15.098 Waiting for target to run... 00:05:15.098 00:39:27 -- json_config/json_config.sh@113 -- # echo 'Waiting for target to run...' 00:05:15.098 00:39:27 -- json_config/json_config.sh@114 -- # waitforlisten 67906 /var/tmp/spdk_tgt.sock 00:05:15.098 00:39:27 -- common/autotest_common.sh@829 -- # '[' -z 67906 ']' 00:05:15.098 00:39:27 -- json_config/json_config.sh@110 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc 00:05:15.098 00:39:27 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:15.098 00:39:27 -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:15.098 00:39:27 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:15.098 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:05:15.098 00:39:27 -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:15.098 00:39:27 -- common/autotest_common.sh@10 -- # set +x 00:05:15.357 [2024-12-03 00:39:27.629690] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:05:15.357 [2024-12-03 00:39:27.629802] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67906 ] 00:05:15.616 [2024-12-03 00:39:28.032054] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:15.616 [2024-12-03 00:39:28.078922] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:05:15.616 [2024-12-03 00:39:28.079078] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:16.184 00:39:28 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:16.184 00:05:16.184 00:39:28 -- common/autotest_common.sh@862 -- # return 0 00:05:16.184 00:39:28 -- json_config/json_config.sh@115 -- # echo '' 00:05:16.184 00:39:28 -- json_config/json_config.sh@322 -- # create_accel_config 00:05:16.184 00:39:28 -- json_config/json_config.sh@146 -- # timing_enter create_accel_config 00:05:16.184 00:39:28 -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:16.184 00:39:28 -- common/autotest_common.sh@10 -- # set +x 00:05:16.443 00:39:28 -- json_config/json_config.sh@148 -- # [[ 0 -eq 1 ]] 00:05:16.443 00:39:28 -- json_config/json_config.sh@154 -- # timing_exit create_accel_config 00:05:16.443 00:39:28 -- common/autotest_common.sh@728 -- # xtrace_disable 00:05:16.443 00:39:28 -- common/autotest_common.sh@10 -- # set +x 00:05:16.443 00:39:28 -- json_config/json_config.sh@326 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh --json-with-subsystems 00:05:16.443 00:39:28 -- json_config/json_config.sh@327 -- # tgt_rpc load_config 00:05:16.443 00:39:28 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock 
load_config 00:05:16.719 00:39:29 -- json_config/json_config.sh@329 -- # tgt_check_notification_types 00:05:16.719 00:39:29 -- json_config/json_config.sh@46 -- # timing_enter tgt_check_notification_types 00:05:16.719 00:39:29 -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:16.719 00:39:29 -- common/autotest_common.sh@10 -- # set +x 00:05:16.719 00:39:29 -- json_config/json_config.sh@48 -- # local ret=0 00:05:16.719 00:39:29 -- json_config/json_config.sh@49 -- # enabled_types=('bdev_register' 'bdev_unregister') 00:05:16.719 00:39:29 -- json_config/json_config.sh@49 -- # local enabled_types 00:05:16.719 00:39:29 -- json_config/json_config.sh@51 -- # tgt_rpc notify_get_types 00:05:16.719 00:39:29 -- json_config/json_config.sh@51 -- # jq -r '.[]' 00:05:16.719 00:39:29 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types 00:05:16.977 00:39:29 -- json_config/json_config.sh@51 -- # get_types=('bdev_register' 'bdev_unregister') 00:05:16.977 00:39:29 -- json_config/json_config.sh@51 -- # local get_types 00:05:16.977 00:39:29 -- json_config/json_config.sh@52 -- # [[ bdev_register bdev_unregister != \b\d\e\v\_\r\e\g\i\s\t\e\r\ \b\d\e\v\_\u\n\r\e\g\i\s\t\e\r ]] 00:05:16.977 00:39:29 -- json_config/json_config.sh@57 -- # timing_exit tgt_check_notification_types 00:05:16.977 00:39:29 -- common/autotest_common.sh@728 -- # xtrace_disable 00:05:16.977 00:39:29 -- common/autotest_common.sh@10 -- # set +x 00:05:16.977 00:39:29 -- json_config/json_config.sh@58 -- # return 0 00:05:16.977 00:39:29 -- json_config/json_config.sh@331 -- # [[ 0 -eq 1 ]] 00:05:16.977 00:39:29 -- json_config/json_config.sh@335 -- # [[ 0 -eq 1 ]] 00:05:16.977 00:39:29 -- json_config/json_config.sh@339 -- # [[ 0 -eq 1 ]] 00:05:16.977 00:39:29 -- json_config/json_config.sh@343 -- # [[ 1 -eq 1 ]] 00:05:16.977 00:39:29 -- json_config/json_config.sh@344 -- # create_nvmf_subsystem_config 00:05:16.977 00:39:29 -- json_config/json_config.sh@283 -- # timing_enter create_nvmf_subsystem_config 00:05:16.977 00:39:29 -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:16.977 00:39:29 -- common/autotest_common.sh@10 -- # set +x 00:05:16.977 00:39:29 -- json_config/json_config.sh@285 -- # NVMF_FIRST_TARGET_IP=127.0.0.1 00:05:16.977 00:39:29 -- json_config/json_config.sh@286 -- # [[ tcp == \r\d\m\a ]] 00:05:16.977 00:39:29 -- json_config/json_config.sh@290 -- # [[ -z 127.0.0.1 ]] 00:05:16.977 00:39:29 -- json_config/json_config.sh@295 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocForNvmf0 00:05:16.977 00:39:29 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocForNvmf0 00:05:17.236 MallocForNvmf0 00:05:17.494 00:39:29 -- json_config/json_config.sh@296 -- # tgt_rpc bdev_malloc_create 4 1024 --name MallocForNvmf1 00:05:17.494 00:39:29 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 4 1024 --name MallocForNvmf1 00:05:17.494 MallocForNvmf1 00:05:17.494 00:39:29 -- json_config/json_config.sh@298 -- # tgt_rpc nvmf_create_transport -t tcp -u 8192 -c 0 00:05:17.494 00:39:29 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_transport -t tcp -u 8192 -c 0 00:05:17.752 [2024-12-03 00:39:30.145309] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:17.752 00:39:30 -- json_config/json_config.sh@299 -- # tgt_rpc 
nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:05:17.752 00:39:30 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:05:18.011 00:39:30 -- json_config/json_config.sh@300 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:05:18.011 00:39:30 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:05:18.269 00:39:30 -- json_config/json_config.sh@301 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:05:18.269 00:39:30 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:05:18.269 00:39:30 -- json_config/json_config.sh@302 -- # tgt_rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:05:18.269 00:39:30 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:05:18.526 [2024-12-03 00:39:30.945723] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:05:18.526 00:39:30 -- json_config/json_config.sh@304 -- # timing_exit create_nvmf_subsystem_config 00:05:18.526 00:39:30 -- common/autotest_common.sh@728 -- # xtrace_disable 00:05:18.526 00:39:30 -- common/autotest_common.sh@10 -- # set +x 00:05:18.526 00:39:31 -- json_config/json_config.sh@346 -- # timing_exit json_config_setup_target 00:05:18.526 00:39:31 -- common/autotest_common.sh@728 -- # xtrace_disable 00:05:18.526 00:39:31 -- common/autotest_common.sh@10 -- # set +x 00:05:18.784 00:39:31 -- json_config/json_config.sh@348 -- # [[ 0 -eq 1 ]] 00:05:18.784 00:39:31 -- json_config/json_config.sh@353 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:05:18.784 00:39:31 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:05:19.042 MallocBdevForConfigChangeCheck 00:05:19.042 00:39:31 -- json_config/json_config.sh@355 -- # timing_exit json_config_test_init 00:05:19.042 00:39:31 -- common/autotest_common.sh@728 -- # xtrace_disable 00:05:19.042 00:39:31 -- common/autotest_common.sh@10 -- # set +x 00:05:19.042 00:39:31 -- json_config/json_config.sh@422 -- # tgt_rpc save_config 00:05:19.042 00:39:31 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:19.300 INFO: shutting down applications... 00:05:19.300 00:39:31 -- json_config/json_config.sh@424 -- # echo 'INFO: shutting down applications...' 
00:05:19.300 00:39:31 -- json_config/json_config.sh@425 -- # [[ 0 -eq 1 ]] 00:05:19.300 00:39:31 -- json_config/json_config.sh@431 -- # json_config_clear target 00:05:19.300 00:39:31 -- json_config/json_config.sh@385 -- # [[ -n 22 ]] 00:05:19.300 00:39:31 -- json_config/json_config.sh@386 -- # /home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config 00:05:19.559 Calling clear_iscsi_subsystem 00:05:19.559 Calling clear_nvmf_subsystem 00:05:19.559 Calling clear_nbd_subsystem 00:05:19.559 Calling clear_ublk_subsystem 00:05:19.559 Calling clear_vhost_blk_subsystem 00:05:19.559 Calling clear_vhost_scsi_subsystem 00:05:19.559 Calling clear_scheduler_subsystem 00:05:19.559 Calling clear_bdev_subsystem 00:05:19.559 Calling clear_accel_subsystem 00:05:19.559 Calling clear_vmd_subsystem 00:05:19.559 Calling clear_sock_subsystem 00:05:19.559 Calling clear_iobuf_subsystem 00:05:19.817 00:39:32 -- json_config/json_config.sh@390 -- # local config_filter=/home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py 00:05:19.817 00:39:32 -- json_config/json_config.sh@396 -- # count=100 00:05:19.817 00:39:32 -- json_config/json_config.sh@397 -- # '[' 100 -gt 0 ']' 00:05:19.817 00:39:32 -- json_config/json_config.sh@398 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:19.817 00:39:32 -- json_config/json_config.sh@398 -- # /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method delete_global_parameters 00:05:19.817 00:39:32 -- json_config/json_config.sh@398 -- # /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method check_empty 00:05:20.076 00:39:32 -- json_config/json_config.sh@398 -- # break 00:05:20.076 00:39:32 -- json_config/json_config.sh@403 -- # '[' 100 -eq 0 ']' 00:05:20.076 00:39:32 -- json_config/json_config.sh@432 -- # json_config_test_shutdown_app target 00:05:20.076 00:39:32 -- json_config/json_config.sh@120 -- # local app=target 00:05:20.076 00:39:32 -- json_config/json_config.sh@123 -- # [[ -n 22 ]] 00:05:20.076 00:39:32 -- json_config/json_config.sh@124 -- # [[ -n 67906 ]] 00:05:20.076 00:39:32 -- json_config/json_config.sh@127 -- # kill -SIGINT 67906 00:05:20.076 00:39:32 -- json_config/json_config.sh@129 -- # (( i = 0 )) 00:05:20.076 00:39:32 -- json_config/json_config.sh@129 -- # (( i < 30 )) 00:05:20.076 00:39:32 -- json_config/json_config.sh@130 -- # kill -0 67906 00:05:20.076 00:39:32 -- json_config/json_config.sh@134 -- # sleep 0.5 00:05:20.644 00:39:32 -- json_config/json_config.sh@129 -- # (( i++ )) 00:05:20.644 00:39:32 -- json_config/json_config.sh@129 -- # (( i < 30 )) 00:05:20.644 00:39:32 -- json_config/json_config.sh@130 -- # kill -0 67906 00:05:20.644 00:39:32 -- json_config/json_config.sh@131 -- # app_pid[$app]= 00:05:20.644 00:39:32 -- json_config/json_config.sh@132 -- # break 00:05:20.644 00:39:32 -- json_config/json_config.sh@137 -- # [[ -n '' ]] 00:05:20.644 00:39:32 -- json_config/json_config.sh@142 -- # echo 'SPDK target shutdown done' 00:05:20.644 SPDK target shutdown done 00:05:20.644 INFO: relaunching applications... 00:05:20.644 00:39:32 -- json_config/json_config.sh@434 -- # echo 'INFO: relaunching applications...' 
00:05:20.644 00:39:32 -- json_config/json_config.sh@435 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:05:20.644 00:39:32 -- json_config/json_config.sh@98 -- # local app=target 00:05:20.644 00:39:32 -- json_config/json_config.sh@99 -- # shift 00:05:20.644 00:39:32 -- json_config/json_config.sh@101 -- # [[ -n 22 ]] 00:05:20.644 00:39:32 -- json_config/json_config.sh@102 -- # [[ -z '' ]] 00:05:20.644 00:39:32 -- json_config/json_config.sh@104 -- # local app_extra_params= 00:05:20.644 00:39:32 -- json_config/json_config.sh@105 -- # [[ 0 -eq 1 ]] 00:05:20.644 00:39:32 -- json_config/json_config.sh@105 -- # [[ 0 -eq 1 ]] 00:05:20.644 Waiting for target to run... 00:05:20.644 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:05:20.644 00:39:32 -- json_config/json_config.sh@111 -- # app_pid[$app]=68175 00:05:20.644 00:39:32 -- json_config/json_config.sh@110 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:05:20.644 00:39:32 -- json_config/json_config.sh@113 -- # echo 'Waiting for target to run...' 00:05:20.644 00:39:32 -- json_config/json_config.sh@114 -- # waitforlisten 68175 /var/tmp/spdk_tgt.sock 00:05:20.644 00:39:32 -- common/autotest_common.sh@829 -- # '[' -z 68175 ']' 00:05:20.644 00:39:32 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:20.644 00:39:32 -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:20.644 00:39:32 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:20.644 00:39:32 -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:20.644 00:39:32 -- common/autotest_common.sh@10 -- # set +x 00:05:20.644 [2024-12-03 00:39:32.986217] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:05:20.644 [2024-12-03 00:39:32.986676] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68175 ] 00:05:21.211 [2024-12-03 00:39:33.501075] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:21.211 [2024-12-03 00:39:33.567299] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:05:21.211 [2024-12-03 00:39:33.567755] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:21.470 [2024-12-03 00:39:33.865046] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:21.470 [2024-12-03 00:39:33.897138] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:05:22.407 00:39:34 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:22.407 00:39:34 -- common/autotest_common.sh@862 -- # return 0 00:05:22.407 00:05:22.407 INFO: Checking if target configuration is the same... 00:05:22.407 00:39:34 -- json_config/json_config.sh@115 -- # echo '' 00:05:22.407 00:39:34 -- json_config/json_config.sh@436 -- # [[ 0 -eq 1 ]] 00:05:22.407 00:39:34 -- json_config/json_config.sh@440 -- # echo 'INFO: Checking if target configuration is the same...' 
00:05:22.407 00:39:34 -- json_config/json_config.sh@441 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh /dev/fd/62 /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:05:22.407 00:39:34 -- json_config/json_config.sh@441 -- # tgt_rpc save_config 00:05:22.407 00:39:34 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:22.407 + '[' 2 -ne 2 ']' 00:05:22.407 +++ dirname /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh 00:05:22.407 ++ readlink -f /home/vagrant/spdk_repo/spdk/test/json_config/../.. 00:05:22.407 + rootdir=/home/vagrant/spdk_repo/spdk 00:05:22.407 +++ basename /dev/fd/62 00:05:22.407 ++ mktemp /tmp/62.XXX 00:05:22.407 + tmp_file_1=/tmp/62.njQ 00:05:22.407 +++ basename /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:05:22.407 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:05:22.407 + tmp_file_2=/tmp/spdk_tgt_config.json.cdN 00:05:22.407 + ret=0 00:05:22.407 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:05:22.666 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:05:22.666 + diff -u /tmp/62.njQ /tmp/spdk_tgt_config.json.cdN 00:05:22.666 INFO: JSON config files are the same 00:05:22.666 + echo 'INFO: JSON config files are the same' 00:05:22.666 + rm /tmp/62.njQ /tmp/spdk_tgt_config.json.cdN 00:05:22.666 + exit 0 00:05:22.666 00:39:35 -- json_config/json_config.sh@442 -- # [[ 0 -eq 1 ]] 00:05:22.666 INFO: changing configuration and checking if this can be detected... 00:05:22.666 00:39:35 -- json_config/json_config.sh@447 -- # echo 'INFO: changing configuration and checking if this can be detected...' 00:05:22.666 00:39:35 -- json_config/json_config.sh@449 -- # tgt_rpc bdev_malloc_delete MallocBdevForConfigChangeCheck 00:05:22.666 00:39:35 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck 00:05:22.925 00:39:35 -- json_config/json_config.sh@450 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh /dev/fd/62 /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:05:22.925 00:39:35 -- json_config/json_config.sh@450 -- # tgt_rpc save_config 00:05:22.925 00:39:35 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:22.925 + '[' 2 -ne 2 ']' 00:05:22.925 +++ dirname /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh 00:05:22.925 ++ readlink -f /home/vagrant/spdk_repo/spdk/test/json_config/../.. 
00:05:22.925 + rootdir=/home/vagrant/spdk_repo/spdk 00:05:22.925 +++ basename /dev/fd/62 00:05:22.925 ++ mktemp /tmp/62.XXX 00:05:22.925 + tmp_file_1=/tmp/62.wSd 00:05:22.925 +++ basename /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:05:22.925 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:05:22.925 + tmp_file_2=/tmp/spdk_tgt_config.json.wwJ 00:05:22.925 + ret=0 00:05:22.925 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:05:23.492 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:05:23.492 + diff -u /tmp/62.wSd /tmp/spdk_tgt_config.json.wwJ 00:05:23.492 + ret=1 00:05:23.492 + echo '=== Start of file: /tmp/62.wSd ===' 00:05:23.492 + cat /tmp/62.wSd 00:05:23.492 + echo '=== End of file: /tmp/62.wSd ===' 00:05:23.492 + echo '' 00:05:23.492 + echo '=== Start of file: /tmp/spdk_tgt_config.json.wwJ ===' 00:05:23.492 + cat /tmp/spdk_tgt_config.json.wwJ 00:05:23.492 + echo '=== End of file: /tmp/spdk_tgt_config.json.wwJ ===' 00:05:23.492 + echo '' 00:05:23.492 + rm /tmp/62.wSd /tmp/spdk_tgt_config.json.wwJ 00:05:23.492 + exit 1 00:05:23.492 INFO: configuration change detected. 00:05:23.492 00:39:35 -- json_config/json_config.sh@454 -- # echo 'INFO: configuration change detected.' 00:05:23.492 00:39:35 -- json_config/json_config.sh@457 -- # json_config_test_fini 00:05:23.492 00:39:35 -- json_config/json_config.sh@359 -- # timing_enter json_config_test_fini 00:05:23.492 00:39:35 -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:23.492 00:39:35 -- common/autotest_common.sh@10 -- # set +x 00:05:23.492 00:39:35 -- json_config/json_config.sh@360 -- # local ret=0 00:05:23.492 00:39:35 -- json_config/json_config.sh@362 -- # [[ -n '' ]] 00:05:23.492 00:39:35 -- json_config/json_config.sh@370 -- # [[ -n 68175 ]] 00:05:23.492 00:39:35 -- json_config/json_config.sh@373 -- # cleanup_bdev_subsystem_config 00:05:23.492 00:39:35 -- json_config/json_config.sh@237 -- # timing_enter cleanup_bdev_subsystem_config 00:05:23.492 00:39:35 -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:23.492 00:39:35 -- common/autotest_common.sh@10 -- # set +x 00:05:23.492 00:39:35 -- json_config/json_config.sh@239 -- # [[ 0 -eq 1 ]] 00:05:23.492 00:39:35 -- json_config/json_config.sh@246 -- # uname -s 00:05:23.492 00:39:35 -- json_config/json_config.sh@246 -- # [[ Linux = Linux ]] 00:05:23.492 00:39:35 -- json_config/json_config.sh@247 -- # rm -f /sample_aio 00:05:23.492 00:39:35 -- json_config/json_config.sh@250 -- # [[ 0 -eq 1 ]] 00:05:23.492 00:39:35 -- json_config/json_config.sh@254 -- # timing_exit cleanup_bdev_subsystem_config 00:05:23.492 00:39:35 -- common/autotest_common.sh@728 -- # xtrace_disable 00:05:23.492 00:39:35 -- common/autotest_common.sh@10 -- # set +x 00:05:23.492 00:39:35 -- json_config/json_config.sh@376 -- # killprocess 68175 00:05:23.492 00:39:35 -- common/autotest_common.sh@936 -- # '[' -z 68175 ']' 00:05:23.492 00:39:35 -- common/autotest_common.sh@940 -- # kill -0 68175 00:05:23.492 00:39:35 -- common/autotest_common.sh@941 -- # uname 00:05:23.492 00:39:35 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:05:23.492 00:39:35 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 68175 00:05:23.492 00:39:35 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:05:23.492 killing process with pid 68175 00:05:23.492 00:39:35 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:05:23.492 00:39:35 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 68175' 00:05:23.492 
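The two comparison passes above (identical configs first, then a detected change after bdev_malloc_delete removed MallocBdevForConfigChangeCheck) follow the same recipe: dump the live configuration over RPC, normalize both JSON documents, and diff them. A condensed sketch reconstructed from the commands in this trace; temp-file handling is simplified:

#!/usr/bin/env bash
# Compare the running target's configuration against the JSON file on disk,
# as json_diff.sh does in the trace above.
rootdir=/home/vagrant/spdk_repo/spdk
rpc="$rootdir/scripts/rpc.py -s /var/tmp/spdk_tgt.sock"
filter="$rootdir/test/json_config/config_filter.py"

live=$(mktemp /tmp/live.XXX)
ondisk=$(mktemp /tmp/ondisk.XXX)

# Normalize both sides so key/array ordering cannot produce spurious diffs.
$rpc save_config | "$filter" -method sort > "$live"
"$filter" -method sort < "$rootdir/spdk_tgt_config.json" > "$ondisk"

if diff -u "$live" "$ondisk"; then
    echo 'INFO: JSON config files are the same'
else
    echo 'INFO: configuration change detected.'
fi
rm -f "$live" "$ondisk"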
00:39:35 -- common/autotest_common.sh@955 -- # kill 68175 00:05:23.492 00:39:35 -- common/autotest_common.sh@960 -- # wait 68175 00:05:23.751 00:39:36 -- json_config/json_config.sh@379 -- # rm -f /home/vagrant/spdk_repo/spdk/spdk_initiator_config.json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:05:23.751 00:39:36 -- json_config/json_config.sh@380 -- # timing_exit json_config_test_fini 00:05:23.751 00:39:36 -- common/autotest_common.sh@728 -- # xtrace_disable 00:05:23.751 00:39:36 -- common/autotest_common.sh@10 -- # set +x 00:05:23.751 INFO: Success 00:05:23.751 00:39:36 -- json_config/json_config.sh@381 -- # return 0 00:05:23.751 00:39:36 -- json_config/json_config.sh@459 -- # echo 'INFO: Success' 00:05:23.751 ************************************ 00:05:23.751 END TEST json_config 00:05:23.751 ************************************ 00:05:23.751 00:05:23.751 real 0m8.769s 00:05:23.751 user 0m12.134s 00:05:23.751 sys 0m1.933s 00:05:23.751 00:39:36 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:23.751 00:39:36 -- common/autotest_common.sh@10 -- # set +x 00:05:23.751 00:39:36 -- spdk/autotest.sh@166 -- # run_test json_config_extra_key /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:05:23.751 00:39:36 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:23.751 00:39:36 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:23.751 00:39:36 -- common/autotest_common.sh@10 -- # set +x 00:05:23.751 ************************************ 00:05:23.751 START TEST json_config_extra_key 00:05:23.751 ************************************ 00:05:23.751 00:39:36 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:05:23.751 00:39:36 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:05:23.751 00:39:36 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:05:23.751 00:39:36 -- common/autotest_common.sh@1690 -- # lcov --version 00:05:24.014 00:39:36 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:05:24.014 00:39:36 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:05:24.014 00:39:36 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:05:24.014 00:39:36 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:05:24.014 00:39:36 -- scripts/common.sh@335 -- # IFS=.-: 00:05:24.014 00:39:36 -- scripts/common.sh@335 -- # read -ra ver1 00:05:24.014 00:39:36 -- scripts/common.sh@336 -- # IFS=.-: 00:05:24.014 00:39:36 -- scripts/common.sh@336 -- # read -ra ver2 00:05:24.014 00:39:36 -- scripts/common.sh@337 -- # local 'op=<' 00:05:24.014 00:39:36 -- scripts/common.sh@339 -- # ver1_l=2 00:05:24.014 00:39:36 -- scripts/common.sh@340 -- # ver2_l=1 00:05:24.014 00:39:36 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:05:24.014 00:39:36 -- scripts/common.sh@343 -- # case "$op" in 00:05:24.014 00:39:36 -- scripts/common.sh@344 -- # : 1 00:05:24.014 00:39:36 -- scripts/common.sh@363 -- # (( v = 0 )) 00:05:24.014 00:39:36 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:24.014 00:39:36 -- scripts/common.sh@364 -- # decimal 1 00:05:24.014 00:39:36 -- scripts/common.sh@352 -- # local d=1 00:05:24.014 00:39:36 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:24.014 00:39:36 -- scripts/common.sh@354 -- # echo 1 00:05:24.014 00:39:36 -- scripts/common.sh@364 -- # ver1[v]=1 00:05:24.014 00:39:36 -- scripts/common.sh@365 -- # decimal 2 00:05:24.014 00:39:36 -- scripts/common.sh@352 -- # local d=2 00:05:24.014 00:39:36 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:24.014 00:39:36 -- scripts/common.sh@354 -- # echo 2 00:05:24.014 00:39:36 -- scripts/common.sh@365 -- # ver2[v]=2 00:05:24.014 00:39:36 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:05:24.014 00:39:36 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:05:24.014 00:39:36 -- scripts/common.sh@367 -- # return 0 00:05:24.014 00:39:36 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:24.014 00:39:36 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:05:24.014 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:24.014 --rc genhtml_branch_coverage=1 00:05:24.014 --rc genhtml_function_coverage=1 00:05:24.014 --rc genhtml_legend=1 00:05:24.014 --rc geninfo_all_blocks=1 00:05:24.014 --rc geninfo_unexecuted_blocks=1 00:05:24.014 00:05:24.014 ' 00:05:24.014 00:39:36 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:05:24.014 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:24.014 --rc genhtml_branch_coverage=1 00:05:24.014 --rc genhtml_function_coverage=1 00:05:24.014 --rc genhtml_legend=1 00:05:24.014 --rc geninfo_all_blocks=1 00:05:24.014 --rc geninfo_unexecuted_blocks=1 00:05:24.014 00:05:24.014 ' 00:05:24.014 00:39:36 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:05:24.014 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:24.014 --rc genhtml_branch_coverage=1 00:05:24.014 --rc genhtml_function_coverage=1 00:05:24.014 --rc genhtml_legend=1 00:05:24.014 --rc geninfo_all_blocks=1 00:05:24.014 --rc geninfo_unexecuted_blocks=1 00:05:24.014 00:05:24.014 ' 00:05:24.014 00:39:36 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:05:24.014 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:24.014 --rc genhtml_branch_coverage=1 00:05:24.014 --rc genhtml_function_coverage=1 00:05:24.014 --rc genhtml_legend=1 00:05:24.014 --rc geninfo_all_blocks=1 00:05:24.014 --rc geninfo_unexecuted_blocks=1 00:05:24.014 00:05:24.014 ' 00:05:24.014 00:39:36 -- json_config/json_config_extra_key.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:05:24.014 00:39:36 -- nvmf/common.sh@7 -- # uname -s 00:05:24.014 00:39:36 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:24.014 00:39:36 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:24.014 00:39:36 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:24.014 00:39:36 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:24.014 00:39:36 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:24.014 00:39:36 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:24.014 00:39:36 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:24.014 00:39:36 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:24.014 00:39:36 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:24.014 00:39:36 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:24.014 00:39:36 -- nvmf/common.sh@17 -- # 
NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:15939434-fa82-47b6-ae7c-b62ba203cee8 00:05:24.014 00:39:36 -- nvmf/common.sh@18 -- # NVME_HOSTID=15939434-fa82-47b6-ae7c-b62ba203cee8 00:05:24.014 00:39:36 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:24.014 00:39:36 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:24.014 00:39:36 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:05:24.014 00:39:36 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:05:24.014 00:39:36 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:24.014 00:39:36 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:24.014 00:39:36 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:24.014 00:39:36 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:24.014 00:39:36 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:24.015 00:39:36 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:24.015 00:39:36 -- paths/export.sh@5 -- # export PATH 00:05:24.015 00:39:36 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:24.015 00:39:36 -- nvmf/common.sh@46 -- # : 0 00:05:24.015 00:39:36 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:05:24.015 00:39:36 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:05:24.015 00:39:36 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:05:24.015 00:39:36 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:24.015 00:39:36 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:24.015 00:39:36 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:05:24.015 00:39:36 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:05:24.015 00:39:36 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:05:24.015 00:39:36 -- json_config/json_config_extra_key.sh@16 -- # app_pid=(['target']='') 00:05:24.015 00:39:36 -- json_config/json_config_extra_key.sh@16 -- # declare -A app_pid 00:05:24.015 00:39:36 -- json_config/json_config_extra_key.sh@17 -- # 
app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:05:24.015 00:39:36 -- json_config/json_config_extra_key.sh@17 -- # declare -A app_socket 00:05:24.015 00:39:36 -- json_config/json_config_extra_key.sh@18 -- # app_params=(['target']='-m 0x1 -s 1024') 00:05:24.015 00:39:36 -- json_config/json_config_extra_key.sh@18 -- # declare -A app_params 00:05:24.015 00:39:36 -- json_config/json_config_extra_key.sh@19 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json') 00:05:24.015 00:39:36 -- json_config/json_config_extra_key.sh@19 -- # declare -A configs_path 00:05:24.015 00:39:36 -- json_config/json_config_extra_key.sh@74 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:05:24.015 00:39:36 -- json_config/json_config_extra_key.sh@76 -- # echo 'INFO: launching applications...' 00:05:24.015 INFO: launching applications... 00:05:24.015 00:39:36 -- json_config/json_config_extra_key.sh@77 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:05:24.015 Waiting for target to run... 00:05:24.015 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:05:24.015 00:39:36 -- json_config/json_config_extra_key.sh@24 -- # local app=target 00:05:24.015 00:39:36 -- json_config/json_config_extra_key.sh@25 -- # shift 00:05:24.015 00:39:36 -- json_config/json_config_extra_key.sh@27 -- # [[ -n 22 ]] 00:05:24.015 00:39:36 -- json_config/json_config_extra_key.sh@28 -- # [[ -z '' ]] 00:05:24.015 00:39:36 -- json_config/json_config_extra_key.sh@31 -- # app_pid[$app]=68366 00:05:24.015 00:39:36 -- json_config/json_config_extra_key.sh@33 -- # echo 'Waiting for target to run...' 00:05:24.015 00:39:36 -- json_config/json_config_extra_key.sh@34 -- # waitforlisten 68366 /var/tmp/spdk_tgt.sock 00:05:24.015 00:39:36 -- common/autotest_common.sh@829 -- # '[' -z 68366 ']' 00:05:24.015 00:39:36 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:24.015 00:39:36 -- json_config/json_config_extra_key.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:05:24.015 00:39:36 -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:24.015 00:39:36 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:24.015 00:39:36 -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:24.015 00:39:36 -- common/autotest_common.sh@10 -- # set +x 00:05:24.015 [2024-12-03 00:39:36.460965] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:05:24.015 [2024-12-03 00:39:36.461273] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68366 ] 00:05:24.620 [2024-12-03 00:39:36.892140] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:24.620 [2024-12-03 00:39:36.944002] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:05:24.620 [2024-12-03 00:39:36.944137] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:24.895 00:05:24.895 INFO: shutting down applications... 
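The extra-key variant above drives the same launcher through bash associative arrays keyed by app name ('target'), so the helper functions can look up the pid, RPC socket, extra parameters and config path generically. A minimal sketch of that bookkeeping with the values used in this run; the start_app wrapper is illustrative (the harness's function is json_config_test_start_app):

#!/usr/bin/env bash
# Per-app bookkeeping as set up by json_config_extra_key.sh above.
declare -A app_pid=(['target']='')
declare -A app_socket=(['target']='/var/tmp/spdk_tgt.sock')
declare -A app_params=(['target']='-m 0x1 -s 1024')
declare -A configs_path=(['target']='/home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json')

start_app() {
    local app=$1
    # app_params is left unquoted on purpose so "-m 0x1 -s 1024" splits into arguments.
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt \
        ${app_params[$app]} -r "${app_socket[$app]}" --json "${configs_path[$app]}" &
    app_pid[$app]=$!
}

start_app target
echo "target pid: ${app_pid[target]}, socket: ${app_socket[target]}"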
00:05:24.895 00:39:37 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:24.895 00:39:37 -- common/autotest_common.sh@862 -- # return 0 00:05:24.895 00:39:37 -- json_config/json_config_extra_key.sh@35 -- # echo '' 00:05:24.895 00:39:37 -- json_config/json_config_extra_key.sh@79 -- # echo 'INFO: shutting down applications...' 00:05:24.895 00:39:37 -- json_config/json_config_extra_key.sh@80 -- # json_config_test_shutdown_app target 00:05:24.895 00:39:37 -- json_config/json_config_extra_key.sh@40 -- # local app=target 00:05:24.895 00:39:37 -- json_config/json_config_extra_key.sh@43 -- # [[ -n 22 ]] 00:05:24.895 00:39:37 -- json_config/json_config_extra_key.sh@44 -- # [[ -n 68366 ]] 00:05:24.895 00:39:37 -- json_config/json_config_extra_key.sh@47 -- # kill -SIGINT 68366 00:05:24.895 00:39:37 -- json_config/json_config_extra_key.sh@49 -- # (( i = 0 )) 00:05:24.895 00:39:37 -- json_config/json_config_extra_key.sh@49 -- # (( i < 30 )) 00:05:24.895 00:39:37 -- json_config/json_config_extra_key.sh@50 -- # kill -0 68366 00:05:24.895 00:39:37 -- json_config/json_config_extra_key.sh@54 -- # sleep 0.5 00:05:25.464 00:39:37 -- json_config/json_config_extra_key.sh@49 -- # (( i++ )) 00:05:25.464 00:39:37 -- json_config/json_config_extra_key.sh@49 -- # (( i < 30 )) 00:05:25.464 SPDK target shutdown done 00:05:25.464 Success 00:05:25.464 00:39:37 -- json_config/json_config_extra_key.sh@50 -- # kill -0 68366 00:05:25.464 00:39:37 -- json_config/json_config_extra_key.sh@51 -- # app_pid[$app]= 00:05:25.464 00:39:37 -- json_config/json_config_extra_key.sh@52 -- # break 00:05:25.464 00:39:37 -- json_config/json_config_extra_key.sh@57 -- # [[ -n '' ]] 00:05:25.464 00:39:37 -- json_config/json_config_extra_key.sh@62 -- # echo 'SPDK target shutdown done' 00:05:25.464 00:39:37 -- json_config/json_config_extra_key.sh@82 -- # echo Success 00:05:25.464 00:05:25.464 real 0m1.629s 00:05:25.464 user 0m1.366s 00:05:25.464 sys 0m0.429s 00:05:25.464 ************************************ 00:05:25.464 END TEST json_config_extra_key 00:05:25.464 ************************************ 00:05:25.464 00:39:37 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:25.464 00:39:37 -- common/autotest_common.sh@10 -- # set +x 00:05:25.464 00:39:37 -- spdk/autotest.sh@167 -- # run_test alias_rpc /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:05:25.464 00:39:37 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:25.464 00:39:37 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:25.464 00:39:37 -- common/autotest_common.sh@10 -- # set +x 00:05:25.464 ************************************ 00:05:25.464 START TEST alias_rpc 00:05:25.464 ************************************ 00:05:25.464 00:39:37 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:05:25.464 * Looking for test storage... 
00:05:25.723 * Found test storage at /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc 00:05:25.723 00:39:37 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:05:25.723 00:39:37 -- common/autotest_common.sh@1690 -- # lcov --version 00:05:25.723 00:39:37 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:05:25.723 00:39:38 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:05:25.723 00:39:38 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:05:25.723 00:39:38 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:05:25.723 00:39:38 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:05:25.723 00:39:38 -- scripts/common.sh@335 -- # IFS=.-: 00:05:25.723 00:39:38 -- scripts/common.sh@335 -- # read -ra ver1 00:05:25.723 00:39:38 -- scripts/common.sh@336 -- # IFS=.-: 00:05:25.723 00:39:38 -- scripts/common.sh@336 -- # read -ra ver2 00:05:25.723 00:39:38 -- scripts/common.sh@337 -- # local 'op=<' 00:05:25.723 00:39:38 -- scripts/common.sh@339 -- # ver1_l=2 00:05:25.723 00:39:38 -- scripts/common.sh@340 -- # ver2_l=1 00:05:25.723 00:39:38 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:05:25.723 00:39:38 -- scripts/common.sh@343 -- # case "$op" in 00:05:25.723 00:39:38 -- scripts/common.sh@344 -- # : 1 00:05:25.723 00:39:38 -- scripts/common.sh@363 -- # (( v = 0 )) 00:05:25.723 00:39:38 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:25.723 00:39:38 -- scripts/common.sh@364 -- # decimal 1 00:05:25.723 00:39:38 -- scripts/common.sh@352 -- # local d=1 00:05:25.723 00:39:38 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:25.723 00:39:38 -- scripts/common.sh@354 -- # echo 1 00:05:25.723 00:39:38 -- scripts/common.sh@364 -- # ver1[v]=1 00:05:25.723 00:39:38 -- scripts/common.sh@365 -- # decimal 2 00:05:25.723 00:39:38 -- scripts/common.sh@352 -- # local d=2 00:05:25.723 00:39:38 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:25.723 00:39:38 -- scripts/common.sh@354 -- # echo 2 00:05:25.723 00:39:38 -- scripts/common.sh@365 -- # ver2[v]=2 00:05:25.723 00:39:38 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:05:25.723 00:39:38 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:05:25.723 00:39:38 -- scripts/common.sh@367 -- # return 0 00:05:25.723 00:39:38 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:25.723 00:39:38 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:05:25.723 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:25.723 --rc genhtml_branch_coverage=1 00:05:25.723 --rc genhtml_function_coverage=1 00:05:25.723 --rc genhtml_legend=1 00:05:25.723 --rc geninfo_all_blocks=1 00:05:25.723 --rc geninfo_unexecuted_blocks=1 00:05:25.723 00:05:25.723 ' 00:05:25.723 00:39:38 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:05:25.723 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:25.723 --rc genhtml_branch_coverage=1 00:05:25.723 --rc genhtml_function_coverage=1 00:05:25.723 --rc genhtml_legend=1 00:05:25.723 --rc geninfo_all_blocks=1 00:05:25.723 --rc geninfo_unexecuted_blocks=1 00:05:25.723 00:05:25.723 ' 00:05:25.723 00:39:38 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:05:25.723 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:25.723 --rc genhtml_branch_coverage=1 00:05:25.723 --rc genhtml_function_coverage=1 00:05:25.723 --rc genhtml_legend=1 00:05:25.723 --rc geninfo_all_blocks=1 00:05:25.723 --rc geninfo_unexecuted_blocks=1 00:05:25.723 00:05:25.723 ' 
00:05:25.723 00:39:38 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:05:25.723 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:25.723 --rc genhtml_branch_coverage=1 00:05:25.723 --rc genhtml_function_coverage=1 00:05:25.723 --rc genhtml_legend=1 00:05:25.723 --rc geninfo_all_blocks=1 00:05:25.723 --rc geninfo_unexecuted_blocks=1 00:05:25.723 00:05:25.723 ' 00:05:25.723 00:39:38 -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:05:25.723 00:39:38 -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=68455 00:05:25.723 00:39:38 -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 68455 00:05:25.723 00:39:38 -- alias_rpc/alias_rpc.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:25.723 00:39:38 -- common/autotest_common.sh@829 -- # '[' -z 68455 ']' 00:05:25.723 00:39:38 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:25.723 00:39:38 -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:25.723 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:25.723 00:39:38 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:25.723 00:39:38 -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:25.723 00:39:38 -- common/autotest_common.sh@10 -- # set +x 00:05:25.723 [2024-12-03 00:39:38.165704] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:05:25.723 [2024-12-03 00:39:38.166137] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68455 ] 00:05:25.982 [2024-12-03 00:39:38.305236] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:25.982 [2024-12-03 00:39:38.360343] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:05:25.982 [2024-12-03 00:39:38.360796] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:26.918 00:39:39 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:26.918 00:39:39 -- common/autotest_common.sh@862 -- # return 0 00:05:26.918 00:39:39 -- alias_rpc/alias_rpc.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config -i 00:05:26.918 00:39:39 -- alias_rpc/alias_rpc.sh@19 -- # killprocess 68455 00:05:26.918 00:39:39 -- common/autotest_common.sh@936 -- # '[' -z 68455 ']' 00:05:26.918 00:39:39 -- common/autotest_common.sh@940 -- # kill -0 68455 00:05:26.918 00:39:39 -- common/autotest_common.sh@941 -- # uname 00:05:26.918 00:39:39 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:05:26.918 00:39:39 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 68455 00:05:27.176 killing process with pid 68455 00:05:27.176 00:39:39 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:05:27.176 00:39:39 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:05:27.176 00:39:39 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 68455' 00:05:27.176 00:39:39 -- common/autotest_common.sh@955 -- # kill 68455 00:05:27.176 00:39:39 -- common/autotest_common.sh@960 -- # wait 68455 00:05:27.436 ************************************ 00:05:27.436 END TEST alias_rpc 00:05:27.436 ************************************ 00:05:27.436 00:05:27.436 real 0m1.886s 00:05:27.436 user 0m2.076s 00:05:27.436 sys 0m0.503s 
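Each test above tears its target down through the killprocess helper; the trace for pid 68455 shows the checks it performs. A simplified sketch of that pattern (the harness additionally special-cases sudo-wrapped processes, which is omitted here):

# Verify the pid exists and looks like an SPDK reactor before killing and reaping it.
killprocess() {
    local pid=$1
    [ -n "$pid" ] || return 1
    kill -0 "$pid" 2>/dev/null || return 1       # still running?
    local name
    name=$(ps --no-headers -o comm= "$pid")      # e.g. reactor_0
    if [ "$name" = sudo ]; then
        return 1                                 # sudo-wrapped targets need different handling
    fi
    echo "killing process with pid $pid"
    kill "$pid"
    wait "$pid"
}

killprocess "$spdk_tgt_pid"   # as alias_rpc.sh does above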
00:05:27.436 00:39:39 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:27.436 00:39:39 -- common/autotest_common.sh@10 -- # set +x 00:05:27.436 00:39:39 -- spdk/autotest.sh@169 -- # [[ 1 -eq 0 ]] 00:05:27.436 00:39:39 -- spdk/autotest.sh@173 -- # run_test dpdk_mem_utility /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:05:27.436 00:39:39 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:27.436 00:39:39 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:27.436 00:39:39 -- common/autotest_common.sh@10 -- # set +x 00:05:27.436 ************************************ 00:05:27.436 START TEST dpdk_mem_utility 00:05:27.436 ************************************ 00:05:27.436 00:39:39 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:05:27.436 * Looking for test storage... 00:05:27.436 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility 00:05:27.436 00:39:39 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:05:27.436 00:39:39 -- common/autotest_common.sh@1690 -- # lcov --version 00:05:27.436 00:39:39 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:05:27.695 00:39:39 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:05:27.695 00:39:39 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:05:27.695 00:39:39 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:05:27.695 00:39:39 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:05:27.695 00:39:39 -- scripts/common.sh@335 -- # IFS=.-: 00:05:27.695 00:39:39 -- scripts/common.sh@335 -- # read -ra ver1 00:05:27.695 00:39:39 -- scripts/common.sh@336 -- # IFS=.-: 00:05:27.695 00:39:39 -- scripts/common.sh@336 -- # read -ra ver2 00:05:27.695 00:39:39 -- scripts/common.sh@337 -- # local 'op=<' 00:05:27.695 00:39:39 -- scripts/common.sh@339 -- # ver1_l=2 00:05:27.695 00:39:39 -- scripts/common.sh@340 -- # ver2_l=1 00:05:27.695 00:39:39 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:05:27.695 00:39:39 -- scripts/common.sh@343 -- # case "$op" in 00:05:27.695 00:39:39 -- scripts/common.sh@344 -- # : 1 00:05:27.695 00:39:39 -- scripts/common.sh@363 -- # (( v = 0 )) 00:05:27.695 00:39:39 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:27.695 00:39:39 -- scripts/common.sh@364 -- # decimal 1 00:05:27.695 00:39:39 -- scripts/common.sh@352 -- # local d=1 00:05:27.695 00:39:39 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:27.695 00:39:39 -- scripts/common.sh@354 -- # echo 1 00:05:27.695 00:39:40 -- scripts/common.sh@364 -- # ver1[v]=1 00:05:27.695 00:39:40 -- scripts/common.sh@365 -- # decimal 2 00:05:27.695 00:39:40 -- scripts/common.sh@352 -- # local d=2 00:05:27.695 00:39:40 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:27.695 00:39:40 -- scripts/common.sh@354 -- # echo 2 00:05:27.695 00:39:40 -- scripts/common.sh@365 -- # ver2[v]=2 00:05:27.695 00:39:40 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:05:27.695 00:39:40 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:05:27.695 00:39:40 -- scripts/common.sh@367 -- # return 0 00:05:27.695 00:39:40 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:27.695 00:39:40 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:05:27.695 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:27.695 --rc genhtml_branch_coverage=1 00:05:27.695 --rc genhtml_function_coverage=1 00:05:27.695 --rc genhtml_legend=1 00:05:27.695 --rc geninfo_all_blocks=1 00:05:27.695 --rc geninfo_unexecuted_blocks=1 00:05:27.695 00:05:27.695 ' 00:05:27.695 00:39:40 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:05:27.695 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:27.695 --rc genhtml_branch_coverage=1 00:05:27.695 --rc genhtml_function_coverage=1 00:05:27.695 --rc genhtml_legend=1 00:05:27.695 --rc geninfo_all_blocks=1 00:05:27.695 --rc geninfo_unexecuted_blocks=1 00:05:27.695 00:05:27.695 ' 00:05:27.695 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:27.695 00:39:40 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:05:27.695 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:27.695 --rc genhtml_branch_coverage=1 00:05:27.695 --rc genhtml_function_coverage=1 00:05:27.695 --rc genhtml_legend=1 00:05:27.695 --rc geninfo_all_blocks=1 00:05:27.695 --rc geninfo_unexecuted_blocks=1 00:05:27.695 00:05:27.695 ' 00:05:27.695 00:39:40 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:05:27.695 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:27.695 --rc genhtml_branch_coverage=1 00:05:27.695 --rc genhtml_function_coverage=1 00:05:27.695 --rc genhtml_legend=1 00:05:27.695 --rc geninfo_all_blocks=1 00:05:27.695 --rc geninfo_unexecuted_blocks=1 00:05:27.695 00:05:27.695 ' 00:05:27.695 00:39:40 -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:05:27.695 00:39:40 -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=68554 00:05:27.695 00:39:40 -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 68554 00:05:27.695 00:39:40 -- common/autotest_common.sh@829 -- # '[' -z 68554 ']' 00:05:27.695 00:39:40 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:27.695 00:39:40 -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:27.695 00:39:40 -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:27.695 00:39:40 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:05:27.695 00:39:40 -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:27.695 00:39:40 -- common/autotest_common.sh@10 -- # set +x 00:05:27.695 [2024-12-03 00:39:40.077951] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:05:27.695 [2024-12-03 00:39:40.078632] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68554 ] 00:05:27.952 [2024-12-03 00:39:40.211245] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:27.952 [2024-12-03 00:39:40.266568] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:05:27.952 [2024-12-03 00:39:40.266993] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:28.907 00:39:41 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:28.907 00:39:41 -- common/autotest_common.sh@862 -- # return 0 00:05:28.907 00:39:41 -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:05:28.907 00:39:41 -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:05:28.907 00:39:41 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:28.907 00:39:41 -- common/autotest_common.sh@10 -- # set +x 00:05:28.907 { 00:05:28.907 "filename": "/tmp/spdk_mem_dump.txt" 00:05:28.907 } 00:05:28.907 00:39:41 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:28.907 00:39:41 -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:05:28.907 DPDK memory size 814.000000 MiB in 1 heap(s) 00:05:28.907 1 heaps totaling size 814.000000 MiB 00:05:28.907 size: 814.000000 MiB heap id: 0 00:05:28.907 end heaps---------- 00:05:28.907 8 mempools totaling size 598.116089 MiB 00:05:28.907 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:05:28.907 size: 158.602051 MiB name: PDU_data_out_Pool 00:05:28.907 size: 84.521057 MiB name: bdev_io_68554 00:05:28.907 size: 51.011292 MiB name: evtpool_68554 00:05:28.907 size: 50.003479 MiB name: msgpool_68554 00:05:28.907 size: 21.763794 MiB name: PDU_Pool 00:05:28.907 size: 19.513306 MiB name: SCSI_TASK_Pool 00:05:28.907 size: 0.026123 MiB name: Session_Pool 00:05:28.907 end mempools------- 00:05:28.907 6 memzones totaling size 4.142822 MiB 00:05:28.907 size: 1.000366 MiB name: RG_ring_0_68554 00:05:28.907 size: 1.000366 MiB name: RG_ring_1_68554 00:05:28.907 size: 1.000366 MiB name: RG_ring_4_68554 00:05:28.907 size: 1.000366 MiB name: RG_ring_5_68554 00:05:28.907 size: 0.125366 MiB name: RG_ring_2_68554 00:05:28.907 size: 0.015991 MiB name: RG_ring_3_68554 00:05:28.907 end memzones------- 00:05:28.907 00:39:41 -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py -m 0 00:05:28.907 heap id: 0 total size: 814.000000 MiB number of busy elements: 215 number of free elements: 15 00:05:28.907 list of free elements. 
size: 12.487488 MiB 00:05:28.907 element at address: 0x200000400000 with size: 1.999512 MiB 00:05:28.907 element at address: 0x200018e00000 with size: 0.999878 MiB 00:05:28.907 element at address: 0x200019000000 with size: 0.999878 MiB 00:05:28.907 element at address: 0x200003e00000 with size: 0.996277 MiB 00:05:28.907 element at address: 0x200031c00000 with size: 0.994446 MiB 00:05:28.907 element at address: 0x200013800000 with size: 0.978699 MiB 00:05:28.907 element at address: 0x200007000000 with size: 0.959839 MiB 00:05:28.907 element at address: 0x200019200000 with size: 0.936584 MiB 00:05:28.907 element at address: 0x200000200000 with size: 0.837219 MiB 00:05:28.907 element at address: 0x20001aa00000 with size: 0.572632 MiB 00:05:28.907 element at address: 0x20000b200000 with size: 0.489990 MiB 00:05:28.907 element at address: 0x200000800000 with size: 0.487061 MiB 00:05:28.907 element at address: 0x200019400000 with size: 0.485657 MiB 00:05:28.907 element at address: 0x200027e00000 with size: 0.398132 MiB 00:05:28.907 element at address: 0x200003a00000 with size: 0.351685 MiB 00:05:28.907 list of standard malloc elements. size: 199.249939 MiB 00:05:28.907 element at address: 0x20000b3fff80 with size: 132.000122 MiB 00:05:28.907 element at address: 0x2000071fff80 with size: 64.000122 MiB 00:05:28.907 element at address: 0x200018efff80 with size: 1.000122 MiB 00:05:28.907 element at address: 0x2000190fff80 with size: 1.000122 MiB 00:05:28.907 element at address: 0x2000192fff80 with size: 1.000122 MiB 00:05:28.907 element at address: 0x2000003d9f00 with size: 0.140747 MiB 00:05:28.907 element at address: 0x2000192eff00 with size: 0.062622 MiB 00:05:28.907 element at address: 0x2000003fdf80 with size: 0.007935 MiB 00:05:28.907 element at address: 0x2000192efdc0 with size: 0.000305 MiB 00:05:28.907 element at address: 0x2000002d6540 with size: 0.000183 MiB 00:05:28.907 element at address: 0x2000002d6600 with size: 0.000183 MiB 00:05:28.907 element at address: 0x2000002d66c0 with size: 0.000183 MiB 00:05:28.907 element at address: 0x2000002d6780 with size: 0.000183 MiB 00:05:28.907 element at address: 0x2000002d6840 with size: 0.000183 MiB 00:05:28.907 element at address: 0x2000002d6900 with size: 0.000183 MiB 00:05:28.907 element at address: 0x2000002d69c0 with size: 0.000183 MiB 00:05:28.907 element at address: 0x2000002d6a80 with size: 0.000183 MiB 00:05:28.907 element at address: 0x2000002d6b40 with size: 0.000183 MiB 00:05:28.907 element at address: 0x2000002d6c00 with size: 0.000183 MiB 00:05:28.907 element at address: 0x2000002d6cc0 with size: 0.000183 MiB 00:05:28.907 element at address: 0x2000002d6d80 with size: 0.000183 MiB 00:05:28.907 element at address: 0x2000002d6e40 with size: 0.000183 MiB 00:05:28.907 element at address: 0x2000002d6f00 with size: 0.000183 MiB 00:05:28.907 element at address: 0x2000002d6fc0 with size: 0.000183 MiB 00:05:28.907 element at address: 0x2000002d71c0 with size: 0.000183 MiB 00:05:28.907 element at address: 0x2000002d7280 with size: 0.000183 MiB 00:05:28.907 element at address: 0x2000002d7340 with size: 0.000183 MiB 00:05:28.907 element at address: 0x2000002d7400 with size: 0.000183 MiB 00:05:28.907 element at address: 0x2000002d74c0 with size: 0.000183 MiB 00:05:28.907 element at address: 0x2000002d7580 with size: 0.000183 MiB 00:05:28.907 element at address: 0x2000002d7640 with size: 0.000183 MiB 00:05:28.907 element at address: 0x2000002d7700 with size: 0.000183 MiB 00:05:28.907 element at address: 0x2000002d77c0 with size: 0.000183 MiB 
00:05:28.907 element at address: 0x2000002d7880 with size: 0.000183 MiB 00:05:28.907 element at address: 0x2000002d7940 with size: 0.000183 MiB 00:05:28.907 element at address: 0x2000002d7a00 with size: 0.000183 MiB 00:05:28.907 element at address: 0x2000002d7ac0 with size: 0.000183 MiB 00:05:28.907 element at address: 0x2000002d7b80 with size: 0.000183 MiB 00:05:28.907 element at address: 0x2000002d7c40 with size: 0.000183 MiB 00:05:28.907 element at address: 0x2000003d9e40 with size: 0.000183 MiB 00:05:28.907 element at address: 0x20000087cb00 with size: 0.000183 MiB 00:05:28.907 element at address: 0x20000087cbc0 with size: 0.000183 MiB 00:05:28.907 element at address: 0x20000087cc80 with size: 0.000183 MiB 00:05:28.907 element at address: 0x20000087cd40 with size: 0.000183 MiB 00:05:28.907 element at address: 0x20000087ce00 with size: 0.000183 MiB 00:05:28.907 element at address: 0x20000087cec0 with size: 0.000183 MiB 00:05:28.907 element at address: 0x2000008fd180 with size: 0.000183 MiB 00:05:28.907 element at address: 0x200003a5a080 with size: 0.000183 MiB 00:05:28.907 element at address: 0x200003a5a140 with size: 0.000183 MiB 00:05:28.907 element at address: 0x200003a5a200 with size: 0.000183 MiB 00:05:28.908 element at address: 0x200003a5a2c0 with size: 0.000183 MiB 00:05:28.908 element at address: 0x200003a5a380 with size: 0.000183 MiB 00:05:28.908 element at address: 0x200003a5a440 with size: 0.000183 MiB 00:05:28.908 element at address: 0x200003a5a500 with size: 0.000183 MiB 00:05:28.908 element at address: 0x200003a5a5c0 with size: 0.000183 MiB 00:05:28.908 element at address: 0x200003a5a680 with size: 0.000183 MiB 00:05:28.908 element at address: 0x200003a5a740 with size: 0.000183 MiB 00:05:28.908 element at address: 0x200003a5a800 with size: 0.000183 MiB 00:05:28.908 element at address: 0x200003a5a8c0 with size: 0.000183 MiB 00:05:28.908 element at address: 0x200003a5a980 with size: 0.000183 MiB 00:05:28.908 element at address: 0x200003a5aa40 with size: 0.000183 MiB 00:05:28.908 element at address: 0x200003a5ab00 with size: 0.000183 MiB 00:05:28.908 element at address: 0x200003a5abc0 with size: 0.000183 MiB 00:05:28.908 element at address: 0x200003a5ac80 with size: 0.000183 MiB 00:05:28.908 element at address: 0x200003a5ad40 with size: 0.000183 MiB 00:05:28.908 element at address: 0x200003a5ae00 with size: 0.000183 MiB 00:05:28.908 element at address: 0x200003a5aec0 with size: 0.000183 MiB 00:05:28.908 element at address: 0x200003a5af80 with size: 0.000183 MiB 00:05:28.908 element at address: 0x200003a5b040 with size: 0.000183 MiB 00:05:28.908 element at address: 0x200003adb300 with size: 0.000183 MiB 00:05:28.908 element at address: 0x200003adb500 with size: 0.000183 MiB 00:05:28.908 element at address: 0x200003adf7c0 with size: 0.000183 MiB 00:05:28.908 element at address: 0x200003affa80 with size: 0.000183 MiB 00:05:28.908 element at address: 0x200003affb40 with size: 0.000183 MiB 00:05:28.908 element at address: 0x200003eff0c0 with size: 0.000183 MiB 00:05:28.908 element at address: 0x2000070fdd80 with size: 0.000183 MiB 00:05:28.908 element at address: 0x20000b27d700 with size: 0.000183 MiB 00:05:28.908 element at address: 0x20000b27d7c0 with size: 0.000183 MiB 00:05:28.908 element at address: 0x20000b27d880 with size: 0.000183 MiB 00:05:28.908 element at address: 0x20000b27d940 with size: 0.000183 MiB 00:05:28.908 element at address: 0x20000b27da00 with size: 0.000183 MiB 00:05:28.908 element at address: 0x20000b27dac0 with size: 0.000183 MiB 00:05:28.908 element at 
address: 0x20000b2fdd80 with size: 0.000183 MiB 00:05:28.908 element at address: 0x2000138fa8c0 with size: 0.000183 MiB 00:05:28.908 element at address: 0x2000192efc40 with size: 0.000183 MiB 00:05:28.908 element at address: 0x2000192efd00 with size: 0.000183 MiB 00:05:28.908 element at address: 0x2000194bc740 with size: 0.000183 MiB 00:05:28.908 element at address: 0x20001aa92980 with size: 0.000183 MiB 00:05:28.908 element at address: 0x20001aa92a40 with size: 0.000183 MiB 00:05:28.908 element at address: 0x20001aa92b00 with size: 0.000183 MiB 00:05:28.908 element at address: 0x20001aa92bc0 with size: 0.000183 MiB 00:05:28.908 element at address: 0x20001aa92c80 with size: 0.000183 MiB 00:05:28.908 element at address: 0x20001aa92d40 with size: 0.000183 MiB 00:05:28.908 element at address: 0x20001aa92e00 with size: 0.000183 MiB 00:05:28.908 element at address: 0x20001aa92ec0 with size: 0.000183 MiB 00:05:28.908 element at address: 0x20001aa92f80 with size: 0.000183 MiB 00:05:28.908 element at address: 0x20001aa93040 with size: 0.000183 MiB 00:05:28.908 element at address: 0x20001aa93100 with size: 0.000183 MiB 00:05:28.908 element at address: 0x20001aa931c0 with size: 0.000183 MiB 00:05:28.908 element at address: 0x20001aa93280 with size: 0.000183 MiB 00:05:28.908 element at address: 0x20001aa93340 with size: 0.000183 MiB 00:05:28.908 element at address: 0x20001aa93400 with size: 0.000183 MiB 00:05:28.908 element at address: 0x20001aa934c0 with size: 0.000183 MiB 00:05:28.908 element at address: 0x20001aa93580 with size: 0.000183 MiB 00:05:28.908 element at address: 0x20001aa93640 with size: 0.000183 MiB 00:05:28.908 element at address: 0x20001aa93700 with size: 0.000183 MiB 00:05:28.908 element at address: 0x20001aa937c0 with size: 0.000183 MiB 00:05:28.908 element at address: 0x20001aa93880 with size: 0.000183 MiB 00:05:28.908 element at address: 0x20001aa93940 with size: 0.000183 MiB 00:05:28.908 element at address: 0x20001aa93a00 with size: 0.000183 MiB 00:05:28.908 element at address: 0x20001aa93ac0 with size: 0.000183 MiB 00:05:28.908 element at address: 0x20001aa93b80 with size: 0.000183 MiB 00:05:28.908 element at address: 0x20001aa93c40 with size: 0.000183 MiB 00:05:28.908 element at address: 0x20001aa93d00 with size: 0.000183 MiB 00:05:28.908 element at address: 0x20001aa93dc0 with size: 0.000183 MiB 00:05:28.908 element at address: 0x20001aa93e80 with size: 0.000183 MiB 00:05:28.908 element at address: 0x20001aa93f40 with size: 0.000183 MiB 00:05:28.908 element at address: 0x20001aa94000 with size: 0.000183 MiB 00:05:28.908 element at address: 0x20001aa940c0 with size: 0.000183 MiB 00:05:28.908 element at address: 0x20001aa94180 with size: 0.000183 MiB 00:05:28.908 element at address: 0x20001aa94240 with size: 0.000183 MiB 00:05:28.908 element at address: 0x20001aa94300 with size: 0.000183 MiB 00:05:28.908 element at address: 0x20001aa943c0 with size: 0.000183 MiB 00:05:28.908 element at address: 0x20001aa94480 with size: 0.000183 MiB 00:05:28.908 element at address: 0x20001aa94540 with size: 0.000183 MiB 00:05:28.908 element at address: 0x20001aa94600 with size: 0.000183 MiB 00:05:28.908 element at address: 0x20001aa946c0 with size: 0.000183 MiB 00:05:28.908 element at address: 0x20001aa94780 with size: 0.000183 MiB 00:05:28.908 element at address: 0x20001aa94840 with size: 0.000183 MiB 00:05:28.908 element at address: 0x20001aa94900 with size: 0.000183 MiB 00:05:28.908 element at address: 0x20001aa949c0 with size: 0.000183 MiB 00:05:28.908 element at address: 0x20001aa94a80 
with size: 0.000183 MiB 00:05:28.908 element at address: 0x20001aa94b40 with size: 0.000183 MiB 00:05:28.908 element at address: 0x20001aa94c00 with size: 0.000183 MiB 00:05:28.908 element at address: 0x20001aa94cc0 with size: 0.000183 MiB 00:05:28.908 element at address: 0x20001aa94d80 with size: 0.000183 MiB 00:05:28.908 element at address: 0x20001aa94e40 with size: 0.000183 MiB 00:05:28.908 element at address: 0x20001aa94f00 with size: 0.000183 MiB 00:05:28.908 element at address: 0x20001aa94fc0 with size: 0.000183 MiB 00:05:28.908 element at address: 0x20001aa95080 with size: 0.000183 MiB 00:05:28.908 element at address: 0x20001aa95140 with size: 0.000183 MiB 00:05:28.908 element at address: 0x20001aa95200 with size: 0.000183 MiB 00:05:28.908 element at address: 0x20001aa952c0 with size: 0.000183 MiB 00:05:28.908 element at address: 0x20001aa95380 with size: 0.000183 MiB 00:05:28.908 element at address: 0x20001aa95440 with size: 0.000183 MiB 00:05:28.908 element at address: 0x200027e65ec0 with size: 0.000183 MiB 00:05:28.908 element at address: 0x200027e65f80 with size: 0.000183 MiB 00:05:28.908 element at address: 0x200027e6cb80 with size: 0.000183 MiB 00:05:28.908 element at address: 0x200027e6cd80 with size: 0.000183 MiB 00:05:28.908 element at address: 0x200027e6ce40 with size: 0.000183 MiB 00:05:28.908 element at address: 0x200027e6cf00 with size: 0.000183 MiB 00:05:28.908 element at address: 0x200027e6cfc0 with size: 0.000183 MiB 00:05:28.908 element at address: 0x200027e6d080 with size: 0.000183 MiB 00:05:28.908 element at address: 0x200027e6d140 with size: 0.000183 MiB 00:05:28.908 element at address: 0x200027e6d200 with size: 0.000183 MiB 00:05:28.908 element at address: 0x200027e6d2c0 with size: 0.000183 MiB 00:05:28.908 element at address: 0x200027e6d380 with size: 0.000183 MiB 00:05:28.908 element at address: 0x200027e6d440 with size: 0.000183 MiB 00:05:28.908 element at address: 0x200027e6d500 with size: 0.000183 MiB 00:05:28.908 element at address: 0x200027e6d5c0 with size: 0.000183 MiB 00:05:28.908 element at address: 0x200027e6d680 with size: 0.000183 MiB 00:05:28.908 element at address: 0x200027e6d740 with size: 0.000183 MiB 00:05:28.908 element at address: 0x200027e6d800 with size: 0.000183 MiB 00:05:28.908 element at address: 0x200027e6d8c0 with size: 0.000183 MiB 00:05:28.908 element at address: 0x200027e6d980 with size: 0.000183 MiB 00:05:28.908 element at address: 0x200027e6da40 with size: 0.000183 MiB 00:05:28.908 element at address: 0x200027e6db00 with size: 0.000183 MiB 00:05:28.908 element at address: 0x200027e6dbc0 with size: 0.000183 MiB 00:05:28.908 element at address: 0x200027e6dc80 with size: 0.000183 MiB 00:05:28.908 element at address: 0x200027e6dd40 with size: 0.000183 MiB 00:05:28.908 element at address: 0x200027e6de00 with size: 0.000183 MiB 00:05:28.908 element at address: 0x200027e6dec0 with size: 0.000183 MiB 00:05:28.908 element at address: 0x200027e6df80 with size: 0.000183 MiB 00:05:28.908 element at address: 0x200027e6e040 with size: 0.000183 MiB 00:05:28.908 element at address: 0x200027e6e100 with size: 0.000183 MiB 00:05:28.908 element at address: 0x200027e6e1c0 with size: 0.000183 MiB 00:05:28.908 element at address: 0x200027e6e280 with size: 0.000183 MiB 00:05:28.908 element at address: 0x200027e6e340 with size: 0.000183 MiB 00:05:28.908 element at address: 0x200027e6e400 with size: 0.000183 MiB 00:05:28.908 element at address: 0x200027e6e4c0 with size: 0.000183 MiB 00:05:28.908 element at address: 0x200027e6e580 with size: 0.000183 MiB 
00:05:28.908 element at address: 0x200027e6e640 with size: 0.000183 MiB 00:05:28.908 element at address: 0x200027e6e700 with size: 0.000183 MiB 00:05:28.908 element at address: 0x200027e6e7c0 with size: 0.000183 MiB 00:05:28.908 element at address: 0x200027e6e880 with size: 0.000183 MiB 00:05:28.908 element at address: 0x200027e6e940 with size: 0.000183 MiB 00:05:28.908 element at address: 0x200027e6ea00 with size: 0.000183 MiB 00:05:28.908 element at address: 0x200027e6eac0 with size: 0.000183 MiB 00:05:28.908 element at address: 0x200027e6eb80 with size: 0.000183 MiB 00:05:28.908 element at address: 0x200027e6ec40 with size: 0.000183 MiB 00:05:28.908 element at address: 0x200027e6ed00 with size: 0.000183 MiB 00:05:28.908 element at address: 0x200027e6edc0 with size: 0.000183 MiB 00:05:28.908 element at address: 0x200027e6ee80 with size: 0.000183 MiB 00:05:28.908 element at address: 0x200027e6ef40 with size: 0.000183 MiB 00:05:28.908 element at address: 0x200027e6f000 with size: 0.000183 MiB 00:05:28.908 element at address: 0x200027e6f0c0 with size: 0.000183 MiB 00:05:28.908 element at address: 0x200027e6f180 with size: 0.000183 MiB 00:05:28.908 element at address: 0x200027e6f240 with size: 0.000183 MiB 00:05:28.908 element at address: 0x200027e6f300 with size: 0.000183 MiB 00:05:28.908 element at address: 0x200027e6f3c0 with size: 0.000183 MiB 00:05:28.908 element at address: 0x200027e6f480 with size: 0.000183 MiB 00:05:28.908 element at address: 0x200027e6f540 with size: 0.000183 MiB 00:05:28.908 element at address: 0x200027e6f600 with size: 0.000183 MiB 00:05:28.909 element at address: 0x200027e6f6c0 with size: 0.000183 MiB 00:05:28.909 element at address: 0x200027e6f780 with size: 0.000183 MiB 00:05:28.909 element at address: 0x200027e6f840 with size: 0.000183 MiB 00:05:28.909 element at address: 0x200027e6f900 with size: 0.000183 MiB 00:05:28.909 element at address: 0x200027e6f9c0 with size: 0.000183 MiB 00:05:28.909 element at address: 0x200027e6fa80 with size: 0.000183 MiB 00:05:28.909 element at address: 0x200027e6fb40 with size: 0.000183 MiB 00:05:28.909 element at address: 0x200027e6fc00 with size: 0.000183 MiB 00:05:28.909 element at address: 0x200027e6fcc0 with size: 0.000183 MiB 00:05:28.909 element at address: 0x200027e6fd80 with size: 0.000183 MiB 00:05:28.909 element at address: 0x200027e6fe40 with size: 0.000183 MiB 00:05:28.909 element at address: 0x200027e6ff00 with size: 0.000183 MiB 00:05:28.909 list of memzone associated elements. 
size: 602.262573 MiB 00:05:28.909 element at address: 0x20001aa95500 with size: 211.416748 MiB 00:05:28.909 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:05:28.909 element at address: 0x200027e6ffc0 with size: 157.562561 MiB 00:05:28.909 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:05:28.909 element at address: 0x2000139fab80 with size: 84.020630 MiB 00:05:28.909 associated memzone info: size: 84.020508 MiB name: MP_bdev_io_68554_0 00:05:28.909 element at address: 0x2000009ff380 with size: 48.003052 MiB 00:05:28.909 associated memzone info: size: 48.002930 MiB name: MP_evtpool_68554_0 00:05:28.909 element at address: 0x200003fff380 with size: 48.003052 MiB 00:05:28.909 associated memzone info: size: 48.002930 MiB name: MP_msgpool_68554_0 00:05:28.909 element at address: 0x2000195be940 with size: 20.255554 MiB 00:05:28.909 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:05:28.909 element at address: 0x200031dfeb40 with size: 18.005066 MiB 00:05:28.909 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:05:28.909 element at address: 0x2000005ffe00 with size: 2.000488 MiB 00:05:28.909 associated memzone info: size: 2.000366 MiB name: RG_MP_evtpool_68554 00:05:28.909 element at address: 0x200003bffe00 with size: 2.000488 MiB 00:05:28.909 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_68554 00:05:28.909 element at address: 0x2000002d7d00 with size: 1.008118 MiB 00:05:28.909 associated memzone info: size: 1.007996 MiB name: MP_evtpool_68554 00:05:28.909 element at address: 0x20000b2fde40 with size: 1.008118 MiB 00:05:28.909 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:05:28.909 element at address: 0x2000194bc800 with size: 1.008118 MiB 00:05:28.909 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:05:28.909 element at address: 0x2000070fde40 with size: 1.008118 MiB 00:05:28.909 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:05:28.909 element at address: 0x2000008fd240 with size: 1.008118 MiB 00:05:28.909 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:05:28.909 element at address: 0x200003eff180 with size: 1.000488 MiB 00:05:28.909 associated memzone info: size: 1.000366 MiB name: RG_ring_0_68554 00:05:28.909 element at address: 0x200003affc00 with size: 1.000488 MiB 00:05:28.909 associated memzone info: size: 1.000366 MiB name: RG_ring_1_68554 00:05:28.909 element at address: 0x2000138fa980 with size: 1.000488 MiB 00:05:28.909 associated memzone info: size: 1.000366 MiB name: RG_ring_4_68554 00:05:28.909 element at address: 0x200031cfe940 with size: 1.000488 MiB 00:05:28.909 associated memzone info: size: 1.000366 MiB name: RG_ring_5_68554 00:05:28.909 element at address: 0x200003a5b100 with size: 0.500488 MiB 00:05:28.909 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_68554 00:05:28.909 element at address: 0x20000b27db80 with size: 0.500488 MiB 00:05:28.909 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:05:28.909 element at address: 0x20000087cf80 with size: 0.500488 MiB 00:05:28.909 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:05:28.909 element at address: 0x20001947c540 with size: 0.250488 MiB 00:05:28.909 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:05:28.909 element at address: 0x200003adf880 with size: 0.125488 MiB 00:05:28.909 associated memzone info: size: 
0.125366 MiB name: RG_ring_2_68554 00:05:28.909 element at address: 0x2000070f5b80 with size: 0.031738 MiB 00:05:28.909 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:05:28.909 element at address: 0x200027e66040 with size: 0.023743 MiB 00:05:28.909 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:05:28.909 element at address: 0x200003adb5c0 with size: 0.016113 MiB 00:05:28.909 associated memzone info: size: 0.015991 MiB name: RG_ring_3_68554 00:05:28.909 element at address: 0x200027e6c180 with size: 0.002441 MiB 00:05:28.909 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:05:28.909 element at address: 0x2000002d7080 with size: 0.000305 MiB 00:05:28.909 associated memzone info: size: 0.000183 MiB name: MP_msgpool_68554 00:05:28.909 element at address: 0x200003adb3c0 with size: 0.000305 MiB 00:05:28.909 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_68554 00:05:28.909 element at address: 0x200027e6cc40 with size: 0.000305 MiB 00:05:28.909 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:05:28.909 00:39:41 -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:05:28.909 00:39:41 -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 68554 00:05:28.909 00:39:41 -- common/autotest_common.sh@936 -- # '[' -z 68554 ']' 00:05:28.909 00:39:41 -- common/autotest_common.sh@940 -- # kill -0 68554 00:05:28.909 00:39:41 -- common/autotest_common.sh@941 -- # uname 00:05:28.909 00:39:41 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:05:28.909 00:39:41 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 68554 00:05:28.909 killing process with pid 68554 00:05:28.909 00:39:41 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:05:28.909 00:39:41 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:05:28.909 00:39:41 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 68554' 00:05:28.909 00:39:41 -- common/autotest_common.sh@955 -- # kill 68554 00:05:28.909 00:39:41 -- common/autotest_common.sh@960 -- # wait 68554 00:05:29.169 00:05:29.169 real 0m1.789s 00:05:29.169 user 0m1.981s 00:05:29.169 sys 0m0.451s 00:05:29.169 ************************************ 00:05:29.169 END TEST dpdk_mem_utility 00:05:29.169 ************************************ 00:05:29.169 00:39:41 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:29.169 00:39:41 -- common/autotest_common.sh@10 -- # set +x 00:05:29.169 00:39:41 -- spdk/autotest.sh@174 -- # run_test event /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:05:29.169 00:39:41 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:29.169 00:39:41 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:29.169 00:39:41 -- common/autotest_common.sh@10 -- # set +x 00:05:29.169 ************************************ 00:05:29.169 START TEST event 00:05:29.169 ************************************ 00:05:29.169 00:39:41 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:05:29.427 * Looking for test storage... 
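The memory report above is produced in two steps: the env_dpdk_get_mem_stats RPC asks the running target to write its DPDK memory statistics to a dump file, and dpdk_mem_info.py then summarizes that dump (heaps, mempools and memzones; -m 0 adds the per-heap element list). Condensed, the sequence traced above is:

#!/usr/bin/env bash
rootdir=/home/vagrant/spdk_repo/spdk

# Ask spdk_tgt (default RPC socket /var/tmp/spdk.sock) for a memory dump;
# the reply names the dump file, e.g. {"filename": "/tmp/spdk_mem_dump.txt"}.
"$rootdir/scripts/rpc.py" env_dpdk_get_mem_stats

# Summarize heaps, mempools and memzones...
"$rootdir/scripts/dpdk_mem_info.py"

# ...and print the detailed element list for heap 0, as shown above.
"$rootdir/scripts/dpdk_mem_info.py" -m 0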
00:05:29.427 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:05:29.428 00:39:41 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:05:29.428 00:39:41 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:05:29.428 00:39:41 -- common/autotest_common.sh@1690 -- # lcov --version 00:05:29.428 00:39:41 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:05:29.428 00:39:41 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:05:29.428 00:39:41 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:05:29.428 00:39:41 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:05:29.428 00:39:41 -- scripts/common.sh@335 -- # IFS=.-: 00:05:29.428 00:39:41 -- scripts/common.sh@335 -- # read -ra ver1 00:05:29.428 00:39:41 -- scripts/common.sh@336 -- # IFS=.-: 00:05:29.428 00:39:41 -- scripts/common.sh@336 -- # read -ra ver2 00:05:29.428 00:39:41 -- scripts/common.sh@337 -- # local 'op=<' 00:05:29.428 00:39:41 -- scripts/common.sh@339 -- # ver1_l=2 00:05:29.428 00:39:41 -- scripts/common.sh@340 -- # ver2_l=1 00:05:29.428 00:39:41 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:05:29.428 00:39:41 -- scripts/common.sh@343 -- # case "$op" in 00:05:29.428 00:39:41 -- scripts/common.sh@344 -- # : 1 00:05:29.428 00:39:41 -- scripts/common.sh@363 -- # (( v = 0 )) 00:05:29.428 00:39:41 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:29.428 00:39:41 -- scripts/common.sh@364 -- # decimal 1 00:05:29.428 00:39:41 -- scripts/common.sh@352 -- # local d=1 00:05:29.428 00:39:41 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:29.428 00:39:41 -- scripts/common.sh@354 -- # echo 1 00:05:29.428 00:39:41 -- scripts/common.sh@364 -- # ver1[v]=1 00:05:29.428 00:39:41 -- scripts/common.sh@365 -- # decimal 2 00:05:29.428 00:39:41 -- scripts/common.sh@352 -- # local d=2 00:05:29.428 00:39:41 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:29.428 00:39:41 -- scripts/common.sh@354 -- # echo 2 00:05:29.428 00:39:41 -- scripts/common.sh@365 -- # ver2[v]=2 00:05:29.428 00:39:41 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:05:29.428 00:39:41 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:05:29.428 00:39:41 -- scripts/common.sh@367 -- # return 0 00:05:29.428 00:39:41 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:29.428 00:39:41 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:05:29.428 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:29.428 --rc genhtml_branch_coverage=1 00:05:29.428 --rc genhtml_function_coverage=1 00:05:29.428 --rc genhtml_legend=1 00:05:29.428 --rc geninfo_all_blocks=1 00:05:29.428 --rc geninfo_unexecuted_blocks=1 00:05:29.428 00:05:29.428 ' 00:05:29.428 00:39:41 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:05:29.428 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:29.428 --rc genhtml_branch_coverage=1 00:05:29.428 --rc genhtml_function_coverage=1 00:05:29.428 --rc genhtml_legend=1 00:05:29.428 --rc geninfo_all_blocks=1 00:05:29.428 --rc geninfo_unexecuted_blocks=1 00:05:29.428 00:05:29.428 ' 00:05:29.428 00:39:41 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:05:29.428 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:29.428 --rc genhtml_branch_coverage=1 00:05:29.428 --rc genhtml_function_coverage=1 00:05:29.428 --rc genhtml_legend=1 00:05:29.428 --rc geninfo_all_blocks=1 00:05:29.428 --rc geninfo_unexecuted_blocks=1 00:05:29.428 00:05:29.428 ' 00:05:29.428 00:39:41 -- 
common/autotest_common.sh@1704 -- # LCOV='lcov 00:05:29.428 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:29.428 --rc genhtml_branch_coverage=1 00:05:29.428 --rc genhtml_function_coverage=1 00:05:29.428 --rc genhtml_legend=1 00:05:29.428 --rc geninfo_all_blocks=1 00:05:29.428 --rc geninfo_unexecuted_blocks=1 00:05:29.428 00:05:29.428 ' 00:05:29.428 00:39:41 -- event/event.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:05:29.428 00:39:41 -- bdev/nbd_common.sh@6 -- # set -e 00:05:29.428 00:39:41 -- event/event.sh@45 -- # run_test event_perf /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:05:29.428 00:39:41 -- common/autotest_common.sh@1087 -- # '[' 6 -le 1 ']' 00:05:29.428 00:39:41 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:29.428 00:39:41 -- common/autotest_common.sh@10 -- # set +x 00:05:29.428 ************************************ 00:05:29.428 START TEST event_perf 00:05:29.428 ************************************ 00:05:29.428 00:39:41 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:05:29.428 Running I/O for 1 seconds...[2024-12-03 00:39:41.878277] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:05:29.428 [2024-12-03 00:39:41.878377] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68656 ] 00:05:29.687 [2024-12-03 00:39:42.014776] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:29.687 [2024-12-03 00:39:42.071471] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:05:29.687 [2024-12-03 00:39:42.071594] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:05:29.687 [2024-12-03 00:39:42.071753] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:29.687 Running I/O for 1 seconds...[2024-12-03 00:39:42.071754] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:05:31.063 00:05:31.063 lcore 0: 132896 00:05:31.063 lcore 1: 132895 00:05:31.063 lcore 2: 132897 00:05:31.063 lcore 3: 132895 00:05:31.063 done. 00:05:31.063 00:05:31.063 real 0m1.313s 00:05:31.063 user 0m4.132s 00:05:31.063 sys 0m0.061s 00:05:31.063 00:39:43 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:31.063 00:39:43 -- common/autotest_common.sh@10 -- # set +x 00:05:31.063 ************************************ 00:05:31.063 END TEST event_perf 00:05:31.063 ************************************ 00:05:31.063 00:39:43 -- event/event.sh@46 -- # run_test event_reactor /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:05:31.063 00:39:43 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:05:31.063 00:39:43 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:31.063 00:39:43 -- common/autotest_common.sh@10 -- # set +x 00:05:31.063 ************************************ 00:05:31.063 START TEST event_reactor 00:05:31.063 ************************************ 00:05:31.063 00:39:43 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:05:31.063 [2024-12-03 00:39:43.244636] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
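Each suite above is driven through a run_test wrapper that prints the starred START/END banners and the real/user/sys timing seen in the output. The following is a rough, hedged reconstruction for orientation only; `run_test_sketch` is a hypothetical name, and the actual helper in common/autotest_common.sh also manages xtrace and argument checks not shown here.

```bash
# Hedged sketch only; not the real run_test from common/autotest_common.sh.
run_test_sketch() {
    local test_name=$1
    shift
    echo "************************************"
    echo "START TEST $test_name"
    echo "************************************"
    # `time` produces the real/user/sys lines that follow each test above.
    time "$@"
    local rc=$?
    echo "************************************"
    echo "END TEST $test_name"
    echo "************************************"
    return $rc
}

# Invocation matching the traced event_perf run (core mask 0xF, one second):
run_test_sketch event_perf \
    /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1
```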
00:05:31.063 [2024-12-03 00:39:43.244702] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68689 ] 00:05:31.063 [2024-12-03 00:39:43.377241] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:31.063 [2024-12-03 00:39:43.448317] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:31.998 test_start 00:05:31.998 oneshot 00:05:31.998 tick 100 00:05:31.998 tick 100 00:05:31.998 tick 250 00:05:31.998 tick 100 00:05:31.998 tick 100 00:05:31.998 tick 250 00:05:31.998 tick 500 00:05:31.998 tick 100 00:05:31.998 tick 100 00:05:31.998 tick 100 00:05:31.998 tick 250 00:05:31.998 tick 100 00:05:31.998 tick 100 00:05:31.998 test_end 00:05:31.998 00:05:31.998 real 0m1.280s 00:05:31.998 user 0m1.119s 00:05:31.998 sys 0m0.055s 00:05:31.998 00:39:44 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:31.998 ************************************ 00:05:31.998 END TEST event_reactor 00:05:31.998 ************************************ 00:05:31.998 00:39:44 -- common/autotest_common.sh@10 -- # set +x 00:05:32.256 00:39:44 -- event/event.sh@47 -- # run_test event_reactor_perf /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:05:32.256 00:39:44 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:05:32.256 00:39:44 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:32.256 00:39:44 -- common/autotest_common.sh@10 -- # set +x 00:05:32.256 ************************************ 00:05:32.256 START TEST event_reactor_perf 00:05:32.256 ************************************ 00:05:32.256 00:39:44 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:05:32.256 [2024-12-03 00:39:44.579177] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:05:32.256 [2024-12-03 00:39:44.579262] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68719 ] 00:05:32.256 [2024-12-03 00:39:44.713541] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:32.256 [2024-12-03 00:39:44.771381] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:33.630 test_start 00:05:33.630 test_end 00:05:33.630 Performance: 478930 events per second 00:05:33.630 00:05:33.630 real 0m1.274s 00:05:33.630 user 0m1.113s 00:05:33.630 sys 0m0.055s 00:05:33.630 00:39:45 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:33.630 00:39:45 -- common/autotest_common.sh@10 -- # set +x 00:05:33.630 ************************************ 00:05:33.630 END TEST event_reactor_perf 00:05:33.630 ************************************ 00:05:33.630 00:39:45 -- event/event.sh@49 -- # uname -s 00:05:33.630 00:39:45 -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:05:33.630 00:39:45 -- event/event.sh@50 -- # run_test event_scheduler /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:05:33.630 00:39:45 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:33.630 00:39:45 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:33.630 00:39:45 -- common/autotest_common.sh@10 -- # set +x 00:05:33.630 ************************************ 00:05:33.630 START TEST event_scheduler 00:05:33.630 ************************************ 00:05:33.630 00:39:45 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:05:33.630 * Looking for test storage... 00:05:33.630 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event/scheduler 00:05:33.630 00:39:45 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:05:33.630 00:39:45 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:05:33.630 00:39:45 -- common/autotest_common.sh@1690 -- # lcov --version 00:05:33.630 00:39:46 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:05:33.630 00:39:46 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:05:33.630 00:39:46 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:05:33.630 00:39:46 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:05:33.630 00:39:46 -- scripts/common.sh@335 -- # IFS=.-: 00:05:33.630 00:39:46 -- scripts/common.sh@335 -- # read -ra ver1 00:05:33.630 00:39:46 -- scripts/common.sh@336 -- # IFS=.-: 00:05:33.630 00:39:46 -- scripts/common.sh@336 -- # read -ra ver2 00:05:33.630 00:39:46 -- scripts/common.sh@337 -- # local 'op=<' 00:05:33.630 00:39:46 -- scripts/common.sh@339 -- # ver1_l=2 00:05:33.630 00:39:46 -- scripts/common.sh@340 -- # ver2_l=1 00:05:33.630 00:39:46 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:05:33.630 00:39:46 -- scripts/common.sh@343 -- # case "$op" in 00:05:33.630 00:39:46 -- scripts/common.sh@344 -- # : 1 00:05:33.630 00:39:46 -- scripts/common.sh@363 -- # (( v = 0 )) 00:05:33.630 00:39:46 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:33.630 00:39:46 -- scripts/common.sh@364 -- # decimal 1 00:05:33.630 00:39:46 -- scripts/common.sh@352 -- # local d=1 00:05:33.631 00:39:46 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:33.631 00:39:46 -- scripts/common.sh@354 -- # echo 1 00:05:33.631 00:39:46 -- scripts/common.sh@364 -- # ver1[v]=1 00:05:33.631 00:39:46 -- scripts/common.sh@365 -- # decimal 2 00:05:33.631 00:39:46 -- scripts/common.sh@352 -- # local d=2 00:05:33.631 00:39:46 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:33.631 00:39:46 -- scripts/common.sh@354 -- # echo 2 00:05:33.631 00:39:46 -- scripts/common.sh@365 -- # ver2[v]=2 00:05:33.631 00:39:46 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:05:33.631 00:39:46 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:05:33.631 00:39:46 -- scripts/common.sh@367 -- # return 0 00:05:33.631 00:39:46 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:33.631 00:39:46 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:05:33.631 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:33.631 --rc genhtml_branch_coverage=1 00:05:33.631 --rc genhtml_function_coverage=1 00:05:33.631 --rc genhtml_legend=1 00:05:33.631 --rc geninfo_all_blocks=1 00:05:33.631 --rc geninfo_unexecuted_blocks=1 00:05:33.631 00:05:33.631 ' 00:05:33.631 00:39:46 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:05:33.631 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:33.631 --rc genhtml_branch_coverage=1 00:05:33.631 --rc genhtml_function_coverage=1 00:05:33.631 --rc genhtml_legend=1 00:05:33.631 --rc geninfo_all_blocks=1 00:05:33.631 --rc geninfo_unexecuted_blocks=1 00:05:33.631 00:05:33.631 ' 00:05:33.631 00:39:46 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:05:33.631 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:33.631 --rc genhtml_branch_coverage=1 00:05:33.631 --rc genhtml_function_coverage=1 00:05:33.631 --rc genhtml_legend=1 00:05:33.631 --rc geninfo_all_blocks=1 00:05:33.631 --rc geninfo_unexecuted_blocks=1 00:05:33.631 00:05:33.631 ' 00:05:33.631 00:39:46 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:05:33.631 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:33.631 --rc genhtml_branch_coverage=1 00:05:33.631 --rc genhtml_function_coverage=1 00:05:33.631 --rc genhtml_legend=1 00:05:33.631 --rc geninfo_all_blocks=1 00:05:33.631 --rc geninfo_unexecuted_blocks=1 00:05:33.631 00:05:33.631 ' 00:05:33.631 00:39:46 -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:05:33.631 00:39:46 -- scheduler/scheduler.sh@35 -- # scheduler_pid=68793 00:05:33.631 00:39:46 -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:05:33.631 00:39:46 -- scheduler/scheduler.sh@34 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:05:33.631 00:39:46 -- scheduler/scheduler.sh@37 -- # waitforlisten 68793 00:05:33.631 00:39:46 -- common/autotest_common.sh@829 -- # '[' -z 68793 ']' 00:05:33.631 00:39:46 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:33.631 00:39:46 -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:33.631 00:39:46 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:33.631 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
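scheduler.sh starts the scheduler app with --wait-for-rpc and then blocks in waitforlisten until the app's RPC socket answers, which is what the "Waiting for process to start up..." message above reflects. A simplified, hedged sketch of that polling loop follows; `waitforlisten_sketch` is a hypothetical name, the rpc_get_methods probe is one workable choice rather than what the real helper uses, and the rpc.py path is taken from the trace.

```bash
RPC_PY=/home/vagrant/spdk_repo/spdk/scripts/rpc.py   # path as used throughout the trace

# Hedged sketch of waitforlisten: poll until the app answers on its RPC socket.
waitforlisten_sketch() {
    local pid=$1
    local rpc_addr=${2:-/var/tmp/spdk.sock}
    echo "Waiting for process to start up and listen on UNIX domain socket $rpc_addr..."
    local i
    for ((i = 0; i < 100; i++)); do
        # Bail out early if the app died during startup.
        kill -0 "$pid" 2>/dev/null || return 1
        # rpc_get_methods succeeds once the RPC server accepts connections,
        # even before framework_start_init has been issued.
        if "$RPC_PY" -s "$rpc_addr" rpc_get_methods &> /dev/null; then
            return 0
        fi
        sleep 0.1
    done
    return 1
}
```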
00:05:33.631 00:39:46 -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:33.631 00:39:46 -- common/autotest_common.sh@10 -- # set +x 00:05:33.631 [2024-12-03 00:39:46.141704] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:05:33.631 [2024-12-03 00:39:46.141811] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68793 ] 00:05:33.890 [2024-12-03 00:39:46.284150] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:33.890 [2024-12-03 00:39:46.360680] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:33.890 [2024-12-03 00:39:46.360817] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:05:33.890 [2024-12-03 00:39:46.360966] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:05:33.890 [2024-12-03 00:39:46.360969] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:05:33.890 00:39:46 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:33.890 00:39:46 -- common/autotest_common.sh@862 -- # return 0 00:05:33.890 00:39:46 -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:05:33.890 00:39:46 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:33.890 00:39:46 -- common/autotest_common.sh@10 -- # set +x 00:05:34.148 POWER: Env isn't set yet! 00:05:34.148 POWER: Attempting to initialise ACPI cpufreq power management... 00:05:34.148 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:05:34.148 POWER: Cannot set governor of lcore 0 to userspace 00:05:34.148 POWER: Attempting to initialise PSTAT power management... 00:05:34.148 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:05:34.148 POWER: Cannot set governor of lcore 0 to performance 00:05:34.148 POWER: Attempting to initialise AMD PSTATE power management... 00:05:34.148 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:05:34.148 POWER: Cannot set governor of lcore 0 to userspace 00:05:34.148 POWER: Attempting to initialise CPPC power management... 00:05:34.148 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:05:34.148 POWER: Cannot set governor of lcore 0 to userspace 00:05:34.148 POWER: Attempting to initialise VM power management... 
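The POWER errors above come from the dynamic scheduler probing /sys/devices/system/cpu/cpu*/cpufreq/scaling_governor and failing for each backend (typical inside a VM without cpufreq support); the governor then falls back, as the messages that follow show. A quick, hedged way to check the same sysfs interface on a host is sketched below; the loop and output format are illustrative, only the sysfs path comes from the log.

```bash
# Hedged sketch: report whether each CPU exposes the cpufreq scaling_governor
# file that the dynamic scheduler's power-management probes above rely on.
for cpu in /sys/devices/system/cpu/cpu[0-9]*; do
    gov="$cpu/cpufreq/scaling_governor"
    if [[ -r $gov ]]; then
        printf '%s: governor=%s\n' "${cpu##*/}" "$(cat "$gov")"
    else
        printf '%s: no cpufreq scaling_governor (power management unavailable)\n' "${cpu##*/}"
    fi
done
```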
00:05:34.148 GUEST_CHANNEL: Unable to connect to '/dev/virtio-ports/virtio.serial.port.poweragent.0' with error No such file or directory 00:05:34.148 POWER: Unable to set Power Management Environment for lcore 0 00:05:34.148 [2024-12-03 00:39:46.409926] dpdk_governor.c: 88:_init_core: *ERROR*: Failed to initialize on core0 00:05:34.148 [2024-12-03 00:39:46.409942] dpdk_governor.c: 118:_init: *ERROR*: Failed to initialize on core0 00:05:34.148 [2024-12-03 00:39:46.409953] scheduler_dynamic.c: 238:init: *NOTICE*: Unable to initialize dpdk governor 00:05:34.148 [2024-12-03 00:39:46.409968] scheduler_dynamic.c: 387:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:05:34.148 [2024-12-03 00:39:46.409978] scheduler_dynamic.c: 389:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:05:34.148 [2024-12-03 00:39:46.409987] scheduler_dynamic.c: 391:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:05:34.148 00:39:46 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:34.148 00:39:46 -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:05:34.148 00:39:46 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:34.148 00:39:46 -- common/autotest_common.sh@10 -- # set +x 00:05:34.148 [2024-12-03 00:39:46.509881] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 00:05:34.148 00:39:46 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:34.148 00:39:46 -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:05:34.148 00:39:46 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:34.149 00:39:46 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:34.149 00:39:46 -- common/autotest_common.sh@10 -- # set +x 00:05:34.149 ************************************ 00:05:34.149 START TEST scheduler_create_thread 00:05:34.149 ************************************ 00:05:34.149 00:39:46 -- common/autotest_common.sh@1114 -- # scheduler_create_thread 00:05:34.149 00:39:46 -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:05:34.149 00:39:46 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:34.149 00:39:46 -- common/autotest_common.sh@10 -- # set +x 00:05:34.149 2 00:05:34.149 00:39:46 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:34.149 00:39:46 -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:05:34.149 00:39:46 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:34.149 00:39:46 -- common/autotest_common.sh@10 -- # set +x 00:05:34.149 3 00:05:34.149 00:39:46 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:34.149 00:39:46 -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:05:34.149 00:39:46 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:34.149 00:39:46 -- common/autotest_common.sh@10 -- # set +x 00:05:34.149 4 00:05:34.149 00:39:46 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:34.149 00:39:46 -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:05:34.149 00:39:46 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:34.149 00:39:46 -- common/autotest_common.sh@10 -- # set +x 00:05:34.149 5 00:05:34.149 00:39:46 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:34.149 00:39:46 -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin 
scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:05:34.149 00:39:46 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:34.149 00:39:46 -- common/autotest_common.sh@10 -- # set +x 00:05:34.149 6 00:05:34.149 00:39:46 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:34.149 00:39:46 -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:05:34.149 00:39:46 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:34.149 00:39:46 -- common/autotest_common.sh@10 -- # set +x 00:05:34.149 7 00:05:34.149 00:39:46 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:34.149 00:39:46 -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:05:34.149 00:39:46 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:34.149 00:39:46 -- common/autotest_common.sh@10 -- # set +x 00:05:34.149 8 00:05:34.149 00:39:46 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:34.149 00:39:46 -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:05:34.149 00:39:46 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:34.149 00:39:46 -- common/autotest_common.sh@10 -- # set +x 00:05:34.149 9 00:05:34.149 00:39:46 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:34.149 00:39:46 -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:05:34.149 00:39:46 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:34.149 00:39:46 -- common/autotest_common.sh@10 -- # set +x 00:05:34.149 10 00:05:34.149 00:39:46 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:34.149 00:39:46 -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:05:34.149 00:39:46 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:34.149 00:39:46 -- common/autotest_common.sh@10 -- # set +x 00:05:34.149 00:39:46 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:34.149 00:39:46 -- scheduler/scheduler.sh@22 -- # thread_id=11 00:05:34.149 00:39:46 -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:05:34.149 00:39:46 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:34.149 00:39:46 -- common/autotest_common.sh@10 -- # set +x 00:05:34.149 00:39:46 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:34.149 00:39:46 -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100 00:05:34.149 00:39:46 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:34.149 00:39:46 -- common/autotest_common.sh@10 -- # set +x 00:05:36.051 00:39:48 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:36.051 00:39:48 -- scheduler/scheduler.sh@25 -- # thread_id=12 00:05:36.051 00:39:48 -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:05:36.051 00:39:48 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:36.051 00:39:48 -- common/autotest_common.sh@10 -- # set +x 00:05:36.619 00:39:49 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:36.619 00:05:36.619 real 0m2.611s 00:05:36.619 user 0m0.020s 00:05:36.619 sys 0m0.005s 00:05:36.619 00:39:49 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:36.619 ************************************ 00:05:36.619 00:39:49 -- common/autotest_common.sh@10 -- 
# set +x 00:05:36.619 END TEST scheduler_create_thread 00:05:36.619 ************************************ 00:05:36.878 00:39:49 -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:05:36.878 00:39:49 -- scheduler/scheduler.sh@46 -- # killprocess 68793 00:05:36.878 00:39:49 -- common/autotest_common.sh@936 -- # '[' -z 68793 ']' 00:05:36.878 00:39:49 -- common/autotest_common.sh@940 -- # kill -0 68793 00:05:36.878 00:39:49 -- common/autotest_common.sh@941 -- # uname 00:05:36.878 00:39:49 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:05:36.878 00:39:49 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 68793 00:05:36.878 00:39:49 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:05:36.878 00:39:49 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:05:36.878 00:39:49 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 68793' 00:05:36.878 killing process with pid 68793 00:05:36.878 00:39:49 -- common/autotest_common.sh@955 -- # kill 68793 00:05:36.878 00:39:49 -- common/autotest_common.sh@960 -- # wait 68793 00:05:37.137 [2024-12-03 00:39:49.613162] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 00:05:37.396 00:05:37.396 real 0m3.917s 00:05:37.396 user 0m5.719s 00:05:37.396 sys 0m0.351s 00:05:37.396 00:39:49 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:37.396 00:39:49 -- common/autotest_common.sh@10 -- # set +x 00:05:37.396 ************************************ 00:05:37.396 END TEST event_scheduler 00:05:37.396 ************************************ 00:05:37.396 00:39:49 -- event/event.sh@51 -- # modprobe -n nbd 00:05:37.396 00:39:49 -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:05:37.396 00:39:49 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:37.396 00:39:49 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:37.396 00:39:49 -- common/autotest_common.sh@10 -- # set +x 00:05:37.396 ************************************ 00:05:37.396 START TEST app_repeat 00:05:37.396 ************************************ 00:05:37.396 00:39:49 -- common/autotest_common.sh@1114 -- # app_repeat_test 00:05:37.396 00:39:49 -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:37.396 00:39:49 -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:37.396 00:39:49 -- event/event.sh@13 -- # local nbd_list 00:05:37.396 00:39:49 -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:37.396 00:39:49 -- event/event.sh@14 -- # local bdev_list 00:05:37.396 00:39:49 -- event/event.sh@15 -- # local repeat_times=4 00:05:37.396 00:39:49 -- event/event.sh@17 -- # modprobe nbd 00:05:37.396 00:39:49 -- event/event.sh@19 -- # repeat_pid=68892 00:05:37.396 00:39:49 -- event/event.sh@18 -- # /home/vagrant/spdk_repo/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:05:37.396 00:39:49 -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:05:37.396 Process app_repeat pid: 68892 00:05:37.396 00:39:49 -- event/event.sh@21 -- # echo 'Process app_repeat pid: 68892' 00:05:37.396 00:39:49 -- event/event.sh@23 -- # for i in {0..2} 00:05:37.396 spdk_app_start Round 0 00:05:37.396 00:39:49 -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:05:37.396 00:39:49 -- event/event.sh@25 -- # waitforlisten 68892 /var/tmp/spdk-nbd.sock 00:05:37.396 00:39:49 -- common/autotest_common.sh@829 -- # '[' -z 68892 ']' 00:05:37.396 00:39:49 -- common/autotest_common.sh@833 -- # local 
rpc_addr=/var/tmp/spdk-nbd.sock 00:05:37.396 00:39:49 -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:37.396 00:39:49 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:37.396 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:37.396 00:39:49 -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:37.396 00:39:49 -- common/autotest_common.sh@10 -- # set +x 00:05:37.396 [2024-12-03 00:39:49.901818] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:05:37.396 [2024-12-03 00:39:49.901928] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68892 ] 00:05:37.655 [2024-12-03 00:39:50.040573] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:37.655 [2024-12-03 00:39:50.126772] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:05:37.655 [2024-12-03 00:39:50.126797] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:38.591 00:39:50 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:38.591 00:39:50 -- common/autotest_common.sh@862 -- # return 0 00:05:38.591 00:39:50 -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:38.850 Malloc0 00:05:38.850 00:39:51 -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:39.108 Malloc1 00:05:39.108 00:39:51 -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:39.108 00:39:51 -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:39.108 00:39:51 -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:39.109 00:39:51 -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:39.109 00:39:51 -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:39.109 00:39:51 -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:39.109 00:39:51 -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:39.109 00:39:51 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:39.109 00:39:51 -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:39.109 00:39:51 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:39.109 00:39:51 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:39.109 00:39:51 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:39.109 00:39:51 -- bdev/nbd_common.sh@12 -- # local i 00:05:39.109 00:39:51 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:39.109 00:39:51 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:39.109 00:39:51 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:39.367 /dev/nbd0 00:05:39.367 00:39:51 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:39.367 00:39:51 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:39.367 00:39:51 -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:05:39.367 00:39:51 -- common/autotest_common.sh@867 -- # local i 00:05:39.367 00:39:51 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:05:39.367 00:39:51 -- common/autotest_common.sh@869 
-- # (( i <= 20 )) 00:05:39.367 00:39:51 -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:05:39.367 00:39:51 -- common/autotest_common.sh@871 -- # break 00:05:39.367 00:39:51 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:05:39.367 00:39:51 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:05:39.367 00:39:51 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:39.367 1+0 records in 00:05:39.367 1+0 records out 00:05:39.367 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000341114 s, 12.0 MB/s 00:05:39.367 00:39:51 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:39.367 00:39:51 -- common/autotest_common.sh@884 -- # size=4096 00:05:39.367 00:39:51 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:39.367 00:39:51 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:05:39.367 00:39:51 -- common/autotest_common.sh@887 -- # return 0 00:05:39.367 00:39:51 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:39.367 00:39:51 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:39.367 00:39:51 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:39.626 /dev/nbd1 00:05:39.626 00:39:51 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:39.626 00:39:51 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:39.626 00:39:51 -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:05:39.626 00:39:51 -- common/autotest_common.sh@867 -- # local i 00:05:39.626 00:39:51 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:05:39.626 00:39:51 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:05:39.626 00:39:51 -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:05:39.626 00:39:51 -- common/autotest_common.sh@871 -- # break 00:05:39.626 00:39:51 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:05:39.626 00:39:51 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:05:39.626 00:39:51 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:39.626 1+0 records in 00:05:39.626 1+0 records out 00:05:39.626 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000228271 s, 17.9 MB/s 00:05:39.626 00:39:51 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:39.626 00:39:51 -- common/autotest_common.sh@884 -- # size=4096 00:05:39.626 00:39:51 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:39.626 00:39:51 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:05:39.626 00:39:51 -- common/autotest_common.sh@887 -- # return 0 00:05:39.626 00:39:51 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:39.626 00:39:51 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:39.626 00:39:51 -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:39.626 00:39:51 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:39.626 00:39:51 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:39.883 00:39:52 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:39.883 { 00:05:39.883 "bdev_name": "Malloc0", 00:05:39.883 "nbd_device": "/dev/nbd0" 00:05:39.883 }, 00:05:39.883 { 00:05:39.883 "bdev_name": "Malloc1", 00:05:39.883 "nbd_device": 
"/dev/nbd1" 00:05:39.883 } 00:05:39.883 ]' 00:05:39.883 00:39:52 -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:39.883 { 00:05:39.883 "bdev_name": "Malloc0", 00:05:39.883 "nbd_device": "/dev/nbd0" 00:05:39.883 }, 00:05:39.883 { 00:05:39.883 "bdev_name": "Malloc1", 00:05:39.883 "nbd_device": "/dev/nbd1" 00:05:39.883 } 00:05:39.883 ]' 00:05:39.883 00:39:52 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:39.883 00:39:52 -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:39.883 /dev/nbd1' 00:05:39.883 00:39:52 -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:39.883 /dev/nbd1' 00:05:39.883 00:39:52 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:39.883 00:39:52 -- bdev/nbd_common.sh@65 -- # count=2 00:05:39.883 00:39:52 -- bdev/nbd_common.sh@66 -- # echo 2 00:05:39.883 00:39:52 -- bdev/nbd_common.sh@95 -- # count=2 00:05:39.883 00:39:52 -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:39.883 00:39:52 -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:39.883 00:39:52 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:39.883 00:39:52 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:39.883 00:39:52 -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:39.883 00:39:52 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:39.883 00:39:52 -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:39.884 00:39:52 -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:39.884 256+0 records in 00:05:39.884 256+0 records out 00:05:39.884 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0106437 s, 98.5 MB/s 00:05:39.884 00:39:52 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:39.884 00:39:52 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:39.884 256+0 records in 00:05:39.884 256+0 records out 00:05:39.884 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0259116 s, 40.5 MB/s 00:05:39.884 00:39:52 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:39.884 00:39:52 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:39.884 256+0 records in 00:05:39.884 256+0 records out 00:05:39.884 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0261901 s, 40.0 MB/s 00:05:39.884 00:39:52 -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:39.884 00:39:52 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:39.884 00:39:52 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:39.884 00:39:52 -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:39.884 00:39:52 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:39.884 00:39:52 -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:39.884 00:39:52 -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:39.884 00:39:52 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:39.884 00:39:52 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:05:40.142 00:39:52 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:40.142 00:39:52 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:05:40.142 00:39:52 -- bdev/nbd_common.sh@85 -- # rm 
/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:40.142 00:39:52 -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:40.142 00:39:52 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:40.142 00:39:52 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:40.142 00:39:52 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:40.142 00:39:52 -- bdev/nbd_common.sh@51 -- # local i 00:05:40.142 00:39:52 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:40.142 00:39:52 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:40.400 00:39:52 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:40.400 00:39:52 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:40.400 00:39:52 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:40.400 00:39:52 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:40.400 00:39:52 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:40.400 00:39:52 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:40.400 00:39:52 -- bdev/nbd_common.sh@41 -- # break 00:05:40.400 00:39:52 -- bdev/nbd_common.sh@45 -- # return 0 00:05:40.400 00:39:52 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:40.400 00:39:52 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:40.658 00:39:52 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:40.658 00:39:52 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:40.658 00:39:52 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:40.658 00:39:52 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:40.658 00:39:52 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:40.658 00:39:52 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:40.658 00:39:52 -- bdev/nbd_common.sh@41 -- # break 00:05:40.658 00:39:52 -- bdev/nbd_common.sh@45 -- # return 0 00:05:40.658 00:39:52 -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:40.658 00:39:52 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:40.658 00:39:52 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:40.658 00:39:53 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:40.658 00:39:53 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:40.658 00:39:53 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:40.916 00:39:53 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:40.916 00:39:53 -- bdev/nbd_common.sh@65 -- # echo '' 00:05:40.916 00:39:53 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:40.916 00:39:53 -- bdev/nbd_common.sh@65 -- # true 00:05:40.916 00:39:53 -- bdev/nbd_common.sh@65 -- # count=0 00:05:40.916 00:39:53 -- bdev/nbd_common.sh@66 -- # echo 0 00:05:40.916 00:39:53 -- bdev/nbd_common.sh@104 -- # count=0 00:05:40.916 00:39:53 -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:40.916 00:39:53 -- bdev/nbd_common.sh@109 -- # return 0 00:05:40.916 00:39:53 -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:41.174 00:39:53 -- event/event.sh@35 -- # sleep 3 00:05:41.174 [2024-12-03 00:39:53.660770] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:41.431 [2024-12-03 00:39:53.704160] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:05:41.431 [2024-12-03 
00:39:53.704177] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:41.431 [2024-12-03 00:39:53.755550] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:41.431 [2024-12-03 00:39:53.755649] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:44.714 spdk_app_start Round 1 00:05:44.714 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:44.714 00:39:56 -- event/event.sh@23 -- # for i in {0..2} 00:05:44.714 00:39:56 -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:05:44.714 00:39:56 -- event/event.sh@25 -- # waitforlisten 68892 /var/tmp/spdk-nbd.sock 00:05:44.714 00:39:56 -- common/autotest_common.sh@829 -- # '[' -z 68892 ']' 00:05:44.714 00:39:56 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:44.714 00:39:56 -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:44.714 00:39:56 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:44.714 00:39:56 -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:44.714 00:39:56 -- common/autotest_common.sh@10 -- # set +x 00:05:44.714 00:39:56 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:44.714 00:39:56 -- common/autotest_common.sh@862 -- # return 0 00:05:44.714 00:39:56 -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:44.714 Malloc0 00:05:44.714 00:39:57 -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:44.974 Malloc1 00:05:44.975 00:39:57 -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:44.975 00:39:57 -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:44.975 00:39:57 -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:44.975 00:39:57 -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:44.975 00:39:57 -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:44.975 00:39:57 -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:44.975 00:39:57 -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:44.975 00:39:57 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:44.975 00:39:57 -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:44.975 00:39:57 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:44.975 00:39:57 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:44.975 00:39:57 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:44.975 00:39:57 -- bdev/nbd_common.sh@12 -- # local i 00:05:44.975 00:39:57 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:44.975 00:39:57 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:44.975 00:39:57 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:45.234 /dev/nbd0 00:05:45.234 00:39:57 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:45.234 00:39:57 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:45.234 00:39:57 -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:05:45.234 00:39:57 -- common/autotest_common.sh@867 -- # local i 00:05:45.234 00:39:57 -- common/autotest_common.sh@869 -- # (( i = 
1 )) 00:05:45.234 00:39:57 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:05:45.234 00:39:57 -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:05:45.234 00:39:57 -- common/autotest_common.sh@871 -- # break 00:05:45.234 00:39:57 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:05:45.234 00:39:57 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:05:45.234 00:39:57 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:45.234 1+0 records in 00:05:45.234 1+0 records out 00:05:45.234 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000297607 s, 13.8 MB/s 00:05:45.234 00:39:57 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:45.234 00:39:57 -- common/autotest_common.sh@884 -- # size=4096 00:05:45.234 00:39:57 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:45.234 00:39:57 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:05:45.234 00:39:57 -- common/autotest_common.sh@887 -- # return 0 00:05:45.234 00:39:57 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:45.234 00:39:57 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:45.234 00:39:57 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:45.493 /dev/nbd1 00:05:45.493 00:39:57 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:45.494 00:39:57 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:45.494 00:39:57 -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:05:45.494 00:39:57 -- common/autotest_common.sh@867 -- # local i 00:05:45.494 00:39:57 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:05:45.494 00:39:57 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:05:45.494 00:39:57 -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:05:45.494 00:39:57 -- common/autotest_common.sh@871 -- # break 00:05:45.494 00:39:57 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:05:45.494 00:39:57 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:05:45.494 00:39:57 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:45.494 1+0 records in 00:05:45.494 1+0 records out 00:05:45.494 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000368874 s, 11.1 MB/s 00:05:45.494 00:39:57 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:45.494 00:39:57 -- common/autotest_common.sh@884 -- # size=4096 00:05:45.494 00:39:57 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:45.494 00:39:57 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:05:45.494 00:39:57 -- common/autotest_common.sh@887 -- # return 0 00:05:45.494 00:39:57 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:45.494 00:39:57 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:45.494 00:39:57 -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:45.494 00:39:57 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:45.494 00:39:57 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:45.753 00:39:58 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:45.753 { 00:05:45.753 "bdev_name": "Malloc0", 00:05:45.753 "nbd_device": "/dev/nbd0" 00:05:45.753 }, 00:05:45.753 { 00:05:45.753 
"bdev_name": "Malloc1", 00:05:45.753 "nbd_device": "/dev/nbd1" 00:05:45.753 } 00:05:45.753 ]' 00:05:45.753 00:39:58 -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:45.753 { 00:05:45.753 "bdev_name": "Malloc0", 00:05:45.753 "nbd_device": "/dev/nbd0" 00:05:45.753 }, 00:05:45.753 { 00:05:45.753 "bdev_name": "Malloc1", 00:05:45.753 "nbd_device": "/dev/nbd1" 00:05:45.753 } 00:05:45.753 ]' 00:05:45.753 00:39:58 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:45.753 00:39:58 -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:45.753 /dev/nbd1' 00:05:45.753 00:39:58 -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:45.753 /dev/nbd1' 00:05:45.753 00:39:58 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:45.753 00:39:58 -- bdev/nbd_common.sh@65 -- # count=2 00:05:45.753 00:39:58 -- bdev/nbd_common.sh@66 -- # echo 2 00:05:45.753 00:39:58 -- bdev/nbd_common.sh@95 -- # count=2 00:05:45.753 00:39:58 -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:45.753 00:39:58 -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:45.753 00:39:58 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:45.753 00:39:58 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:45.753 00:39:58 -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:45.753 00:39:58 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:45.753 00:39:58 -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:45.753 00:39:58 -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:45.753 256+0 records in 00:05:45.753 256+0 records out 00:05:45.753 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00669614 s, 157 MB/s 00:05:45.753 00:39:58 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:45.753 00:39:58 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:45.753 256+0 records in 00:05:45.753 256+0 records out 00:05:45.753 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0244918 s, 42.8 MB/s 00:05:45.753 00:39:58 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:45.753 00:39:58 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:46.012 256+0 records in 00:05:46.012 256+0 records out 00:05:46.012 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0291771 s, 35.9 MB/s 00:05:46.012 00:39:58 -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:46.012 00:39:58 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:46.012 00:39:58 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:46.012 00:39:58 -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:46.012 00:39:58 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:46.012 00:39:58 -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:46.012 00:39:58 -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:46.012 00:39:58 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:46.012 00:39:58 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:05:46.012 00:39:58 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:46.012 00:39:58 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:05:46.012 00:39:58 -- 
bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:46.012 00:39:58 -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:46.012 00:39:58 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:46.012 00:39:58 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:46.012 00:39:58 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:46.012 00:39:58 -- bdev/nbd_common.sh@51 -- # local i 00:05:46.012 00:39:58 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:46.012 00:39:58 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:46.271 00:39:58 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:46.271 00:39:58 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:46.271 00:39:58 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:46.271 00:39:58 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:46.271 00:39:58 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:46.271 00:39:58 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:46.271 00:39:58 -- bdev/nbd_common.sh@41 -- # break 00:05:46.271 00:39:58 -- bdev/nbd_common.sh@45 -- # return 0 00:05:46.271 00:39:58 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:46.271 00:39:58 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:46.271 00:39:58 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:46.271 00:39:58 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:46.271 00:39:58 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:46.271 00:39:58 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:46.271 00:39:58 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:46.271 00:39:58 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:46.271 00:39:58 -- bdev/nbd_common.sh@41 -- # break 00:05:46.271 00:39:58 -- bdev/nbd_common.sh@45 -- # return 0 00:05:46.271 00:39:58 -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:46.271 00:39:58 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:46.271 00:39:58 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:46.840 00:39:59 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:46.840 00:39:59 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:46.840 00:39:59 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:46.840 00:39:59 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:46.840 00:39:59 -- bdev/nbd_common.sh@65 -- # echo '' 00:05:46.840 00:39:59 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:46.840 00:39:59 -- bdev/nbd_common.sh@65 -- # true 00:05:46.840 00:39:59 -- bdev/nbd_common.sh@65 -- # count=0 00:05:46.840 00:39:59 -- bdev/nbd_common.sh@66 -- # echo 0 00:05:46.840 00:39:59 -- bdev/nbd_common.sh@104 -- # count=0 00:05:46.840 00:39:59 -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:46.840 00:39:59 -- bdev/nbd_common.sh@109 -- # return 0 00:05:46.840 00:39:59 -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:47.099 00:39:59 -- event/event.sh@35 -- # sleep 3 00:05:47.099 [2024-12-03 00:39:59.584401] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:47.357 [2024-12-03 00:39:59.628002] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 
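Each app_repeat round above repeats the same nbd_rpc_data_verify flow: export the malloc bdevs as NBD devices over the app's RPC socket, write a random 1 MiB pattern through each block device, read it back with cmp, then detach and confirm nothing is left registered. A condensed, hedged sketch of that sequence, built from the commands visible in the trace, is below; it assumes an SPDK app is already listening on /var/tmp/spdk-nbd.sock, and it needs root plus the nbd kernel module, as in the traced run.

```bash
#!/usr/bin/env bash
set -euo pipefail

RPC_PY=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
SOCK=/var/tmp/spdk-nbd.sock
PATTERN=/tmp/nbdrandtest   # illustrative path; the traced run keeps it under test/event

# Create two 64 MiB malloc bdevs with 4096-byte blocks (named Malloc0/Malloc1 in the trace),
# then attach them as NBD block devices.
"$RPC_PY" -s "$SOCK" bdev_malloc_create 64 4096
"$RPC_PY" -s "$SOCK" bdev_malloc_create 64 4096
"$RPC_PY" -s "$SOCK" nbd_start_disk Malloc0 /dev/nbd0
"$RPC_PY" -s "$SOCK" nbd_start_disk Malloc1 /dev/nbd1

# Write a 1 MiB random pattern through each device, then verify it reads back.
dd if=/dev/urandom of="$PATTERN" bs=4096 count=256
for nbd in /dev/nbd0 /dev/nbd1; do
    dd if="$PATTERN" of="$nbd" bs=4096 count=256 oflag=direct
    cmp -b -n 1M "$PATTERN" "$nbd"
done
rm "$PATTERN"

# Detach and confirm no NBD devices remain registered.
"$RPC_PY" -s "$SOCK" nbd_stop_disk /dev/nbd0
"$RPC_PY" -s "$SOCK" nbd_stop_disk /dev/nbd1
count=$("$RPC_PY" -s "$SOCK" nbd_get_disks | jq -r '.[] | .nbd_device' | grep -c /dev/nbd || true)
[[ $count -eq 0 ]]
```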
00:05:47.357 [2024-12-03 00:39:59.628018] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:47.357 [2024-12-03 00:39:59.678963] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:47.357 [2024-12-03 00:39:59.679021] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:50.643 spdk_app_start Round 2 00:05:50.643 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:50.643 00:40:02 -- event/event.sh@23 -- # for i in {0..2} 00:05:50.643 00:40:02 -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:05:50.643 00:40:02 -- event/event.sh@25 -- # waitforlisten 68892 /var/tmp/spdk-nbd.sock 00:05:50.643 00:40:02 -- common/autotest_common.sh@829 -- # '[' -z 68892 ']' 00:05:50.643 00:40:02 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:50.643 00:40:02 -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:50.643 00:40:02 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:50.643 00:40:02 -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:50.643 00:40:02 -- common/autotest_common.sh@10 -- # set +x 00:05:50.643 00:40:02 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:50.643 00:40:02 -- common/autotest_common.sh@862 -- # return 0 00:05:50.643 00:40:02 -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:50.643 Malloc0 00:05:50.643 00:40:02 -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:50.902 Malloc1 00:05:50.902 00:40:03 -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:50.902 00:40:03 -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:50.902 00:40:03 -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:50.902 00:40:03 -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:50.902 00:40:03 -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:50.902 00:40:03 -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:50.902 00:40:03 -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:50.902 00:40:03 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:50.902 00:40:03 -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:50.902 00:40:03 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:50.902 00:40:03 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:50.902 00:40:03 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:50.902 00:40:03 -- bdev/nbd_common.sh@12 -- # local i 00:05:50.902 00:40:03 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:50.902 00:40:03 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:50.902 00:40:03 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:51.161 /dev/nbd0 00:05:51.161 00:40:03 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:51.161 00:40:03 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:51.161 00:40:03 -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:05:51.161 00:40:03 -- common/autotest_common.sh@867 -- # local i 00:05:51.161 00:40:03 -- 
common/autotest_common.sh@869 -- # (( i = 1 )) 00:05:51.161 00:40:03 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:05:51.161 00:40:03 -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:05:51.161 00:40:03 -- common/autotest_common.sh@871 -- # break 00:05:51.161 00:40:03 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:05:51.161 00:40:03 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:05:51.161 00:40:03 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:51.161 1+0 records in 00:05:51.161 1+0 records out 00:05:51.161 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000301881 s, 13.6 MB/s 00:05:51.161 00:40:03 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:51.161 00:40:03 -- common/autotest_common.sh@884 -- # size=4096 00:05:51.161 00:40:03 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:51.161 00:40:03 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:05:51.161 00:40:03 -- common/autotest_common.sh@887 -- # return 0 00:05:51.161 00:40:03 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:51.161 00:40:03 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:51.161 00:40:03 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:51.420 /dev/nbd1 00:05:51.420 00:40:03 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:51.420 00:40:03 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:51.420 00:40:03 -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:05:51.420 00:40:03 -- common/autotest_common.sh@867 -- # local i 00:05:51.420 00:40:03 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:05:51.420 00:40:03 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:05:51.420 00:40:03 -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:05:51.420 00:40:03 -- common/autotest_common.sh@871 -- # break 00:05:51.420 00:40:03 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:05:51.420 00:40:03 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:05:51.420 00:40:03 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:51.420 1+0 records in 00:05:51.420 1+0 records out 00:05:51.420 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000337257 s, 12.1 MB/s 00:05:51.420 00:40:03 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:51.420 00:40:03 -- common/autotest_common.sh@884 -- # size=4096 00:05:51.420 00:40:03 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:51.420 00:40:03 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:05:51.420 00:40:03 -- common/autotest_common.sh@887 -- # return 0 00:05:51.420 00:40:03 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:51.420 00:40:03 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:51.420 00:40:03 -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:51.420 00:40:03 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:51.420 00:40:03 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:51.679 00:40:04 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:51.679 { 00:05:51.679 "bdev_name": "Malloc0", 00:05:51.679 "nbd_device": "/dev/nbd0" 
00:05:51.679 }, 00:05:51.679 { 00:05:51.679 "bdev_name": "Malloc1", 00:05:51.679 "nbd_device": "/dev/nbd1" 00:05:51.679 } 00:05:51.679 ]' 00:05:51.679 00:40:04 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:51.679 00:40:04 -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:51.679 { 00:05:51.679 "bdev_name": "Malloc0", 00:05:51.679 "nbd_device": "/dev/nbd0" 00:05:51.679 }, 00:05:51.679 { 00:05:51.679 "bdev_name": "Malloc1", 00:05:51.679 "nbd_device": "/dev/nbd1" 00:05:51.679 } 00:05:51.679 ]' 00:05:51.679 00:40:04 -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:51.679 /dev/nbd1' 00:05:51.679 00:40:04 -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:51.679 /dev/nbd1' 00:05:51.679 00:40:04 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:51.679 00:40:04 -- bdev/nbd_common.sh@65 -- # count=2 00:05:51.679 00:40:04 -- bdev/nbd_common.sh@66 -- # echo 2 00:05:51.679 00:40:04 -- bdev/nbd_common.sh@95 -- # count=2 00:05:51.679 00:40:04 -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:51.679 00:40:04 -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:51.679 00:40:04 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:51.679 00:40:04 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:51.679 00:40:04 -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:51.679 00:40:04 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:51.679 00:40:04 -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:51.679 00:40:04 -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:51.679 256+0 records in 00:05:51.679 256+0 records out 00:05:51.679 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0105319 s, 99.6 MB/s 00:05:51.679 00:40:04 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:51.679 00:40:04 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:51.679 256+0 records in 00:05:51.679 256+0 records out 00:05:51.679 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.024844 s, 42.2 MB/s 00:05:51.679 00:40:04 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:51.679 00:40:04 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:51.938 256+0 records in 00:05:51.938 256+0 records out 00:05:51.938 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.029975 s, 35.0 MB/s 00:05:51.938 00:40:04 -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:51.938 00:40:04 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:51.938 00:40:04 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:51.938 00:40:04 -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:51.938 00:40:04 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:51.938 00:40:04 -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:51.938 00:40:04 -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:51.938 00:40:04 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:51.938 00:40:04 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:05:51.938 00:40:04 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:51.938 00:40:04 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 
/dev/nbd1 00:05:51.938 00:40:04 -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:51.938 00:40:04 -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:51.938 00:40:04 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:51.938 00:40:04 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:51.938 00:40:04 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:51.938 00:40:04 -- bdev/nbd_common.sh@51 -- # local i 00:05:51.938 00:40:04 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:51.938 00:40:04 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:52.197 00:40:04 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:52.197 00:40:04 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:52.197 00:40:04 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:52.197 00:40:04 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:52.197 00:40:04 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:52.197 00:40:04 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:52.197 00:40:04 -- bdev/nbd_common.sh@41 -- # break 00:05:52.197 00:40:04 -- bdev/nbd_common.sh@45 -- # return 0 00:05:52.197 00:40:04 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:52.197 00:40:04 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:52.456 00:40:04 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:52.456 00:40:04 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:52.456 00:40:04 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:52.456 00:40:04 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:52.456 00:40:04 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:52.456 00:40:04 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:52.456 00:40:04 -- bdev/nbd_common.sh@41 -- # break 00:05:52.456 00:40:04 -- bdev/nbd_common.sh@45 -- # return 0 00:05:52.456 00:40:04 -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:52.456 00:40:04 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:52.456 00:40:04 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:52.456 00:40:04 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:52.456 00:40:04 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:52.456 00:40:04 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:52.730 00:40:04 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:52.730 00:40:04 -- bdev/nbd_common.sh@65 -- # echo '' 00:05:52.730 00:40:04 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:52.730 00:40:04 -- bdev/nbd_common.sh@65 -- # true 00:05:52.730 00:40:04 -- bdev/nbd_common.sh@65 -- # count=0 00:05:52.730 00:40:04 -- bdev/nbd_common.sh@66 -- # echo 0 00:05:52.730 00:40:04 -- bdev/nbd_common.sh@104 -- # count=0 00:05:52.730 00:40:04 -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:52.730 00:40:04 -- bdev/nbd_common.sh@109 -- # return 0 00:05:52.730 00:40:04 -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:52.730 00:40:05 -- event/event.sh@35 -- # sleep 3 00:05:52.992 [2024-12-03 00:40:05.376913] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:52.992 [2024-12-03 00:40:05.418580] reactor.c: 937:reactor_run: 
*NOTICE*: Reactor started on core 1 00:05:52.992 [2024-12-03 00:40:05.418591] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:52.992 [2024-12-03 00:40:05.470086] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:52.992 [2024-12-03 00:40:05.470145] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:56.332 00:40:08 -- event/event.sh@38 -- # waitforlisten 68892 /var/tmp/spdk-nbd.sock 00:05:56.332 00:40:08 -- common/autotest_common.sh@829 -- # '[' -z 68892 ']' 00:05:56.332 00:40:08 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:56.332 00:40:08 -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:56.332 00:40:08 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:56.332 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:56.332 00:40:08 -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:56.332 00:40:08 -- common/autotest_common.sh@10 -- # set +x 00:05:56.332 00:40:08 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:56.332 00:40:08 -- common/autotest_common.sh@862 -- # return 0 00:05:56.332 00:40:08 -- event/event.sh@39 -- # killprocess 68892 00:05:56.332 00:40:08 -- common/autotest_common.sh@936 -- # '[' -z 68892 ']' 00:05:56.332 00:40:08 -- common/autotest_common.sh@940 -- # kill -0 68892 00:05:56.332 00:40:08 -- common/autotest_common.sh@941 -- # uname 00:05:56.332 00:40:08 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:05:56.332 00:40:08 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 68892 00:05:56.332 killing process with pid 68892 00:05:56.332 00:40:08 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:05:56.332 00:40:08 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:05:56.332 00:40:08 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 68892' 00:05:56.332 00:40:08 -- common/autotest_common.sh@955 -- # kill 68892 00:05:56.332 00:40:08 -- common/autotest_common.sh@960 -- # wait 68892 00:05:56.332 spdk_app_start is called in Round 0. 00:05:56.332 Shutdown signal received, stop current app iteration 00:05:56.332 Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 reinitialization... 00:05:56.332 spdk_app_start is called in Round 1. 00:05:56.332 Shutdown signal received, stop current app iteration 00:05:56.332 Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 reinitialization... 00:05:56.332 spdk_app_start is called in Round 2. 00:05:56.332 Shutdown signal received, stop current app iteration 00:05:56.332 Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 reinitialization... 00:05:56.332 spdk_app_start is called in Round 3. 
00:05:56.332 Shutdown signal received, stop current app iteration 00:05:56.332 ************************************ 00:05:56.332 END TEST app_repeat 00:05:56.332 ************************************ 00:05:56.332 00:40:08 -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:05:56.332 00:40:08 -- event/event.sh@42 -- # return 0 00:05:56.332 00:05:56.332 real 0m18.826s 00:05:56.332 user 0m42.503s 00:05:56.332 sys 0m2.750s 00:05:56.332 00:40:08 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:56.332 00:40:08 -- common/autotest_common.sh@10 -- # set +x 00:05:56.332 00:40:08 -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:05:56.332 00:40:08 -- event/event.sh@55 -- # run_test cpu_locks /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:05:56.332 00:40:08 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:56.332 00:40:08 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:56.332 00:40:08 -- common/autotest_common.sh@10 -- # set +x 00:05:56.332 ************************************ 00:05:56.332 START TEST cpu_locks 00:05:56.332 ************************************ 00:05:56.332 00:40:08 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:05:56.332 * Looking for test storage... 00:05:56.332 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:05:56.332 00:40:08 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:05:56.332 00:40:08 -- common/autotest_common.sh@1690 -- # lcov --version 00:05:56.332 00:40:08 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:05:56.592 00:40:08 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:05:56.592 00:40:08 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:05:56.592 00:40:08 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:05:56.592 00:40:08 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:05:56.592 00:40:08 -- scripts/common.sh@335 -- # IFS=.-: 00:05:56.592 00:40:08 -- scripts/common.sh@335 -- # read -ra ver1 00:05:56.592 00:40:08 -- scripts/common.sh@336 -- # IFS=.-: 00:05:56.592 00:40:08 -- scripts/common.sh@336 -- # read -ra ver2 00:05:56.592 00:40:08 -- scripts/common.sh@337 -- # local 'op=<' 00:05:56.592 00:40:08 -- scripts/common.sh@339 -- # ver1_l=2 00:05:56.592 00:40:08 -- scripts/common.sh@340 -- # ver2_l=1 00:05:56.592 00:40:08 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:05:56.592 00:40:08 -- scripts/common.sh@343 -- # case "$op" in 00:05:56.592 00:40:08 -- scripts/common.sh@344 -- # : 1 00:05:56.592 00:40:08 -- scripts/common.sh@363 -- # (( v = 0 )) 00:05:56.592 00:40:08 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:56.592 00:40:08 -- scripts/common.sh@364 -- # decimal 1 00:05:56.592 00:40:08 -- scripts/common.sh@352 -- # local d=1 00:05:56.592 00:40:08 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:56.592 00:40:08 -- scripts/common.sh@354 -- # echo 1 00:05:56.592 00:40:08 -- scripts/common.sh@364 -- # ver1[v]=1 00:05:56.592 00:40:08 -- scripts/common.sh@365 -- # decimal 2 00:05:56.592 00:40:08 -- scripts/common.sh@352 -- # local d=2 00:05:56.592 00:40:08 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:56.592 00:40:08 -- scripts/common.sh@354 -- # echo 2 00:05:56.592 00:40:08 -- scripts/common.sh@365 -- # ver2[v]=2 00:05:56.592 00:40:08 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:05:56.592 00:40:08 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:05:56.592 00:40:08 -- scripts/common.sh@367 -- # return 0 00:05:56.592 00:40:08 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:56.592 00:40:08 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:05:56.592 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:56.592 --rc genhtml_branch_coverage=1 00:05:56.592 --rc genhtml_function_coverage=1 00:05:56.592 --rc genhtml_legend=1 00:05:56.592 --rc geninfo_all_blocks=1 00:05:56.592 --rc geninfo_unexecuted_blocks=1 00:05:56.592 00:05:56.592 ' 00:05:56.592 00:40:08 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:05:56.592 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:56.592 --rc genhtml_branch_coverage=1 00:05:56.592 --rc genhtml_function_coverage=1 00:05:56.592 --rc genhtml_legend=1 00:05:56.592 --rc geninfo_all_blocks=1 00:05:56.592 --rc geninfo_unexecuted_blocks=1 00:05:56.592 00:05:56.592 ' 00:05:56.592 00:40:08 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:05:56.592 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:56.592 --rc genhtml_branch_coverage=1 00:05:56.592 --rc genhtml_function_coverage=1 00:05:56.592 --rc genhtml_legend=1 00:05:56.592 --rc geninfo_all_blocks=1 00:05:56.592 --rc geninfo_unexecuted_blocks=1 00:05:56.592 00:05:56.592 ' 00:05:56.592 00:40:08 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:05:56.592 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:56.592 --rc genhtml_branch_coverage=1 00:05:56.592 --rc genhtml_function_coverage=1 00:05:56.592 --rc genhtml_legend=1 00:05:56.592 --rc geninfo_all_blocks=1 00:05:56.592 --rc geninfo_unexecuted_blocks=1 00:05:56.592 00:05:56.592 ' 00:05:56.592 00:40:08 -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:05:56.592 00:40:08 -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:05:56.592 00:40:08 -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:05:56.592 00:40:08 -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:05:56.592 00:40:08 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:56.592 00:40:08 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:56.592 00:40:08 -- common/autotest_common.sh@10 -- # set +x 00:05:56.592 ************************************ 00:05:56.592 START TEST default_locks 00:05:56.592 ************************************ 00:05:56.592 00:40:08 -- common/autotest_common.sh@1114 -- # default_locks 00:05:56.592 00:40:08 -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=69523 00:05:56.592 00:40:08 -- event/cpu_locks.sh@47 -- # waitforlisten 69523 00:05:56.592 00:40:08 -- common/autotest_common.sh@829 -- # '[' -z 69523 ']' 00:05:56.592 00:40:08 
-- event/cpu_locks.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:05:56.592 00:40:08 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:56.592 00:40:08 -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:56.592 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:56.592 00:40:08 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:56.592 00:40:08 -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:56.592 00:40:08 -- common/autotest_common.sh@10 -- # set +x 00:05:56.592 [2024-12-03 00:40:09.013901] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:05:56.592 [2024-12-03 00:40:09.014004] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69523 ] 00:05:56.852 [2024-12-03 00:40:09.145313] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:56.852 [2024-12-03 00:40:09.199276] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:05:56.852 [2024-12-03 00:40:09.199447] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:57.420 00:40:09 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:57.420 00:40:09 -- common/autotest_common.sh@862 -- # return 0 00:05:57.420 00:40:09 -- event/cpu_locks.sh@49 -- # locks_exist 69523 00:05:57.420 00:40:09 -- event/cpu_locks.sh@22 -- # lslocks -p 69523 00:05:57.420 00:40:09 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:57.678 00:40:10 -- event/cpu_locks.sh@50 -- # killprocess 69523 00:05:57.679 00:40:10 -- common/autotest_common.sh@936 -- # '[' -z 69523 ']' 00:05:57.679 00:40:10 -- common/autotest_common.sh@940 -- # kill -0 69523 00:05:57.679 00:40:10 -- common/autotest_common.sh@941 -- # uname 00:05:57.679 00:40:10 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:05:57.679 00:40:10 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 69523 00:05:57.937 killing process with pid 69523 00:05:57.937 00:40:10 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:05:57.937 00:40:10 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:05:57.937 00:40:10 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 69523' 00:05:57.937 00:40:10 -- common/autotest_common.sh@955 -- # kill 69523 00:05:57.937 00:40:10 -- common/autotest_common.sh@960 -- # wait 69523 00:05:58.196 00:40:10 -- event/cpu_locks.sh@52 -- # NOT waitforlisten 69523 00:05:58.196 00:40:10 -- common/autotest_common.sh@650 -- # local es=0 00:05:58.196 00:40:10 -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 69523 00:05:58.196 00:40:10 -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:05:58.196 00:40:10 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:58.196 00:40:10 -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:05:58.196 00:40:10 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:58.196 00:40:10 -- common/autotest_common.sh@653 -- # waitforlisten 69523 00:05:58.196 00:40:10 -- common/autotest_common.sh@829 -- # '[' -z 69523 ']' 00:05:58.196 00:40:10 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:58.196 00:40:10 -- 
common/autotest_common.sh@834 -- # local max_retries=100 00:05:58.196 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:58.196 00:40:10 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:58.196 00:40:10 -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:58.196 00:40:10 -- common/autotest_common.sh@10 -- # set +x 00:05:58.196 ERROR: process (pid: 69523) is no longer running 00:05:58.196 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 844: kill: (69523) - No such process 00:05:58.196 00:40:10 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:58.196 00:40:10 -- common/autotest_common.sh@862 -- # return 1 00:05:58.196 00:40:10 -- common/autotest_common.sh@653 -- # es=1 00:05:58.196 00:40:10 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:05:58.196 00:40:10 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:05:58.196 00:40:10 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:05:58.196 00:40:10 -- event/cpu_locks.sh@54 -- # no_locks 00:05:58.196 00:40:10 -- event/cpu_locks.sh@26 -- # lock_files=() 00:05:58.196 00:40:10 -- event/cpu_locks.sh@26 -- # local lock_files 00:05:58.196 00:40:10 -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:05:58.196 00:05:58.196 real 0m1.758s 00:05:58.196 user 0m1.861s 00:05:58.196 sys 0m0.467s 00:05:58.196 00:40:10 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:58.196 00:40:10 -- common/autotest_common.sh@10 -- # set +x 00:05:58.196 ************************************ 00:05:58.196 END TEST default_locks 00:05:58.196 ************************************ 00:05:58.455 00:40:10 -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:05:58.455 00:40:10 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:58.455 00:40:10 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:58.455 00:40:10 -- common/autotest_common.sh@10 -- # set +x 00:05:58.455 ************************************ 00:05:58.455 START TEST default_locks_via_rpc 00:05:58.455 ************************************ 00:05:58.455 00:40:10 -- common/autotest_common.sh@1114 -- # default_locks_via_rpc 00:05:58.455 00:40:10 -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=69582 00:05:58.455 00:40:10 -- event/cpu_locks.sh@63 -- # waitforlisten 69582 00:05:58.455 00:40:10 -- event/cpu_locks.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:05:58.455 00:40:10 -- common/autotest_common.sh@829 -- # '[' -z 69582 ']' 00:05:58.455 00:40:10 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:58.455 00:40:10 -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:58.455 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:58.455 00:40:10 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:58.455 00:40:10 -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:58.455 00:40:10 -- common/autotest_common.sh@10 -- # set +x 00:05:58.455 [2024-12-03 00:40:10.820851] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:05:58.455 [2024-12-03 00:40:10.820954] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69582 ] 00:05:58.455 [2024-12-03 00:40:10.952942] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:58.714 [2024-12-03 00:40:11.017802] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:05:58.714 [2024-12-03 00:40:11.017988] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:59.282 00:40:11 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:59.282 00:40:11 -- common/autotest_common.sh@862 -- # return 0 00:05:59.282 00:40:11 -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:05:59.282 00:40:11 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:59.282 00:40:11 -- common/autotest_common.sh@10 -- # set +x 00:05:59.541 00:40:11 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:59.541 00:40:11 -- event/cpu_locks.sh@67 -- # no_locks 00:05:59.541 00:40:11 -- event/cpu_locks.sh@26 -- # lock_files=() 00:05:59.541 00:40:11 -- event/cpu_locks.sh@26 -- # local lock_files 00:05:59.541 00:40:11 -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:05:59.541 00:40:11 -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:05:59.541 00:40:11 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:59.541 00:40:11 -- common/autotest_common.sh@10 -- # set +x 00:05:59.541 00:40:11 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:59.541 00:40:11 -- event/cpu_locks.sh@71 -- # locks_exist 69582 00:05:59.541 00:40:11 -- event/cpu_locks.sh@22 -- # lslocks -p 69582 00:05:59.541 00:40:11 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:59.799 00:40:12 -- event/cpu_locks.sh@73 -- # killprocess 69582 00:05:59.799 00:40:12 -- common/autotest_common.sh@936 -- # '[' -z 69582 ']' 00:05:59.799 00:40:12 -- common/autotest_common.sh@940 -- # kill -0 69582 00:05:59.799 00:40:12 -- common/autotest_common.sh@941 -- # uname 00:05:59.799 00:40:12 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:05:59.799 00:40:12 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 69582 00:05:59.799 00:40:12 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:05:59.799 00:40:12 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:05:59.799 killing process with pid 69582 00:05:59.799 00:40:12 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 69582' 00:05:59.799 00:40:12 -- common/autotest_common.sh@955 -- # kill 69582 00:05:59.799 00:40:12 -- common/autotest_common.sh@960 -- # wait 69582 00:06:00.366 00:06:00.366 real 0m1.985s 00:06:00.366 user 0m2.081s 00:06:00.366 sys 0m0.627s 00:06:00.366 00:40:12 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:00.366 00:40:12 -- common/autotest_common.sh@10 -- # set +x 00:06:00.366 ************************************ 00:06:00.366 END TEST default_locks_via_rpc 00:06:00.366 ************************************ 00:06:00.366 00:40:12 -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:06:00.366 00:40:12 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:00.366 00:40:12 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:00.366 00:40:12 -- common/autotest_common.sh@10 -- # set +x 00:06:00.366 
************************************ 00:06:00.366 START TEST non_locking_app_on_locked_coremask 00:06:00.366 ************************************ 00:06:00.366 00:40:12 -- common/autotest_common.sh@1114 -- # non_locking_app_on_locked_coremask 00:06:00.366 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:00.366 00:40:12 -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=69651 00:06:00.366 00:40:12 -- event/cpu_locks.sh@81 -- # waitforlisten 69651 /var/tmp/spdk.sock 00:06:00.366 00:40:12 -- event/cpu_locks.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:06:00.367 00:40:12 -- common/autotest_common.sh@829 -- # '[' -z 69651 ']' 00:06:00.367 00:40:12 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:00.367 00:40:12 -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:00.367 00:40:12 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:00.367 00:40:12 -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:00.367 00:40:12 -- common/autotest_common.sh@10 -- # set +x 00:06:00.367 [2024-12-03 00:40:12.857734] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:00.367 [2024-12-03 00:40:12.857833] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69651 ] 00:06:00.625 [2024-12-03 00:40:12.993651] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:00.625 [2024-12-03 00:40:13.053940] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:06:00.625 [2024-12-03 00:40:13.054113] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:01.559 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:01.559 00:40:13 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:01.559 00:40:13 -- common/autotest_common.sh@862 -- # return 0 00:06:01.559 00:40:13 -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=69679 00:06:01.559 00:40:13 -- event/cpu_locks.sh@85 -- # waitforlisten 69679 /var/tmp/spdk2.sock 00:06:01.559 00:40:13 -- common/autotest_common.sh@829 -- # '[' -z 69679 ']' 00:06:01.559 00:40:13 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:01.559 00:40:13 -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:01.559 00:40:13 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:01.559 00:40:13 -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:01.559 00:40:13 -- event/cpu_locks.sh@83 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:06:01.559 00:40:13 -- common/autotest_common.sh@10 -- # set +x 00:06:01.559 [2024-12-03 00:40:13.890347] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:01.559 [2024-12-03 00:40:13.890449] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69679 ] 00:06:01.559 [2024-12-03 00:40:14.028292] app.c: 795:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:06:01.559 [2024-12-03 00:40:14.028339] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:01.819 [2024-12-03 00:40:14.171290] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:06:01.819 [2024-12-03 00:40:14.171482] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:02.387 00:40:14 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:02.387 00:40:14 -- common/autotest_common.sh@862 -- # return 0 00:06:02.387 00:40:14 -- event/cpu_locks.sh@87 -- # locks_exist 69651 00:06:02.387 00:40:14 -- event/cpu_locks.sh@22 -- # lslocks -p 69651 00:06:02.387 00:40:14 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:02.953 00:40:15 -- event/cpu_locks.sh@89 -- # killprocess 69651 00:06:02.953 00:40:15 -- common/autotest_common.sh@936 -- # '[' -z 69651 ']' 00:06:02.953 00:40:15 -- common/autotest_common.sh@940 -- # kill -0 69651 00:06:02.953 00:40:15 -- common/autotest_common.sh@941 -- # uname 00:06:02.953 00:40:15 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:06:02.953 00:40:15 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 69651 00:06:02.953 killing process with pid 69651 00:06:02.953 00:40:15 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:06:02.953 00:40:15 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:06:02.953 00:40:15 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 69651' 00:06:02.953 00:40:15 -- common/autotest_common.sh@955 -- # kill 69651 00:06:02.953 00:40:15 -- common/autotest_common.sh@960 -- # wait 69651 00:06:03.888 00:40:16 -- event/cpu_locks.sh@90 -- # killprocess 69679 00:06:03.888 00:40:16 -- common/autotest_common.sh@936 -- # '[' -z 69679 ']' 00:06:03.888 00:40:16 -- common/autotest_common.sh@940 -- # kill -0 69679 00:06:03.888 00:40:16 -- common/autotest_common.sh@941 -- # uname 00:06:03.888 00:40:16 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:06:03.888 00:40:16 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 69679 00:06:03.888 killing process with pid 69679 00:06:03.888 00:40:16 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:06:03.888 00:40:16 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:06:03.888 00:40:16 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 69679' 00:06:03.888 00:40:16 -- common/autotest_common.sh@955 -- # kill 69679 00:06:03.888 00:40:16 -- common/autotest_common.sh@960 -- # wait 69679 00:06:04.456 00:06:04.456 real 0m4.066s 00:06:04.456 user 0m4.367s 00:06:04.456 sys 0m1.097s 00:06:04.456 00:40:16 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:04.456 ************************************ 00:06:04.456 END TEST non_locking_app_on_locked_coremask 00:06:04.456 ************************************ 00:06:04.456 00:40:16 -- common/autotest_common.sh@10 -- # set +x 00:06:04.456 00:40:16 -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:06:04.456 00:40:16 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:04.456 00:40:16 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:04.456 00:40:16 -- common/autotest_common.sh@10 -- # set +x 00:06:04.456 ************************************ 00:06:04.456 START TEST locking_app_on_unlocked_coremask 00:06:04.456 ************************************ 00:06:04.456 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:06:04.456 00:40:16 -- common/autotest_common.sh@1114 -- # locking_app_on_unlocked_coremask 00:06:04.456 00:40:16 -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=69758 00:06:04.456 00:40:16 -- event/cpu_locks.sh@99 -- # waitforlisten 69758 /var/tmp/spdk.sock 00:06:04.456 00:40:16 -- common/autotest_common.sh@829 -- # '[' -z 69758 ']' 00:06:04.456 00:40:16 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:04.456 00:40:16 -- event/cpu_locks.sh@97 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:06:04.456 00:40:16 -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:04.456 00:40:16 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:04.456 00:40:16 -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:04.456 00:40:16 -- common/autotest_common.sh@10 -- # set +x 00:06:04.715 [2024-12-03 00:40:16.974249] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:04.715 [2024-12-03 00:40:16.974348] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69758 ] 00:06:04.715 [2024-12-03 00:40:17.105538] app.c: 795:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:06:04.715 [2024-12-03 00:40:17.105579] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:04.715 [2024-12-03 00:40:17.172833] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:06:04.715 [2024-12-03 00:40:17.173034] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:05.652 00:40:17 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:05.652 00:40:17 -- common/autotest_common.sh@862 -- # return 0 00:06:05.652 00:40:17 -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=69786 00:06:05.652 00:40:17 -- event/cpu_locks.sh@103 -- # waitforlisten 69786 /var/tmp/spdk2.sock 00:06:05.652 00:40:17 -- event/cpu_locks.sh@101 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:06:05.652 00:40:17 -- common/autotest_common.sh@829 -- # '[' -z 69786 ']' 00:06:05.652 00:40:17 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:05.652 00:40:17 -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:05.652 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:05.652 00:40:17 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:05.652 00:40:17 -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:05.652 00:40:17 -- common/autotest_common.sh@10 -- # set +x 00:06:05.652 [2024-12-03 00:40:18.045222] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:06:05.652 [2024-12-03 00:40:18.045994] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69786 ] 00:06:05.910 [2024-12-03 00:40:18.196062] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:05.910 [2024-12-03 00:40:18.333330] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:06:05.910 [2024-12-03 00:40:18.333514] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:06.847 00:40:19 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:06.847 00:40:19 -- common/autotest_common.sh@862 -- # return 0 00:06:06.847 00:40:19 -- event/cpu_locks.sh@105 -- # locks_exist 69786 00:06:06.847 00:40:19 -- event/cpu_locks.sh@22 -- # lslocks -p 69786 00:06:06.847 00:40:19 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:07.107 00:40:19 -- event/cpu_locks.sh@107 -- # killprocess 69758 00:06:07.107 00:40:19 -- common/autotest_common.sh@936 -- # '[' -z 69758 ']' 00:06:07.107 00:40:19 -- common/autotest_common.sh@940 -- # kill -0 69758 00:06:07.107 00:40:19 -- common/autotest_common.sh@941 -- # uname 00:06:07.107 00:40:19 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:06:07.107 00:40:19 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 69758 00:06:07.107 00:40:19 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:06:07.107 00:40:19 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:06:07.107 killing process with pid 69758 00:06:07.107 00:40:19 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 69758' 00:06:07.107 00:40:19 -- common/autotest_common.sh@955 -- # kill 69758 00:06:07.107 00:40:19 -- common/autotest_common.sh@960 -- # wait 69758 00:06:08.042 00:40:20 -- event/cpu_locks.sh@108 -- # killprocess 69786 00:06:08.043 00:40:20 -- common/autotest_common.sh@936 -- # '[' -z 69786 ']' 00:06:08.043 00:40:20 -- common/autotest_common.sh@940 -- # kill -0 69786 00:06:08.043 00:40:20 -- common/autotest_common.sh@941 -- # uname 00:06:08.043 00:40:20 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:06:08.043 00:40:20 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 69786 00:06:08.043 00:40:20 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:06:08.043 killing process with pid 69786 00:06:08.043 00:40:20 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:06:08.043 00:40:20 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 69786' 00:06:08.043 00:40:20 -- common/autotest_common.sh@955 -- # kill 69786 00:06:08.043 00:40:20 -- common/autotest_common.sh@960 -- # wait 69786 00:06:08.611 00:06:08.611 real 0m3.945s 00:06:08.611 user 0m4.314s 00:06:08.611 sys 0m1.147s 00:06:08.611 00:40:20 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:08.611 00:40:20 -- common/autotest_common.sh@10 -- # set +x 00:06:08.611 ************************************ 00:06:08.611 END TEST locking_app_on_unlocked_coremask 00:06:08.611 ************************************ 00:06:08.611 00:40:20 -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:06:08.611 00:40:20 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:08.611 00:40:20 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:08.611 00:40:20 -- common/autotest_common.sh@10 -- # set +x 
00:06:08.611 ************************************ 00:06:08.611 START TEST locking_app_on_locked_coremask 00:06:08.611 ************************************ 00:06:08.611 00:40:20 -- common/autotest_common.sh@1114 -- # locking_app_on_locked_coremask 00:06:08.611 00:40:20 -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=69865 00:06:08.611 00:40:20 -- event/cpu_locks.sh@114 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:06:08.611 00:40:20 -- event/cpu_locks.sh@116 -- # waitforlisten 69865 /var/tmp/spdk.sock 00:06:08.611 00:40:20 -- common/autotest_common.sh@829 -- # '[' -z 69865 ']' 00:06:08.611 00:40:20 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:08.611 00:40:20 -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:08.611 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:08.611 00:40:20 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:08.611 00:40:20 -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:08.611 00:40:20 -- common/autotest_common.sh@10 -- # set +x 00:06:08.611 [2024-12-03 00:40:20.971765] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:08.611 [2024-12-03 00:40:20.971863] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69865 ] 00:06:08.611 [2024-12-03 00:40:21.111876] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:08.869 [2024-12-03 00:40:21.171443] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:06:08.869 [2024-12-03 00:40:21.171615] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:09.805 00:40:21 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:09.805 00:40:21 -- common/autotest_common.sh@862 -- # return 0 00:06:09.805 00:40:21 -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=69893 00:06:09.805 00:40:21 -- event/cpu_locks.sh@118 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:06:09.805 00:40:21 -- event/cpu_locks.sh@120 -- # NOT waitforlisten 69893 /var/tmp/spdk2.sock 00:06:09.805 00:40:21 -- common/autotest_common.sh@650 -- # local es=0 00:06:09.805 00:40:21 -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 69893 /var/tmp/spdk2.sock 00:06:09.805 00:40:21 -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:06:09.805 00:40:21 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:09.805 00:40:21 -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:06:09.805 00:40:21 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:09.805 00:40:21 -- common/autotest_common.sh@653 -- # waitforlisten 69893 /var/tmp/spdk2.sock 00:06:09.805 00:40:21 -- common/autotest_common.sh@829 -- # '[' -z 69893 ']' 00:06:09.805 00:40:21 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:09.805 00:40:21 -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:09.805 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:09.805 00:40:21 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 
00:06:09.805 00:40:21 -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:09.805 00:40:21 -- common/autotest_common.sh@10 -- # set +x 00:06:09.805 [2024-12-03 00:40:22.031240] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:09.805 [2024-12-03 00:40:22.031357] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69893 ] 00:06:09.805 [2024-12-03 00:40:22.169228] app.c: 665:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 69865 has claimed it. 00:06:09.805 [2024-12-03 00:40:22.169272] app.c: 791:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:06:10.373 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 844: kill: (69893) - No such process 00:06:10.373 ERROR: process (pid: 69893) is no longer running 00:06:10.373 00:40:22 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:10.373 00:40:22 -- common/autotest_common.sh@862 -- # return 1 00:06:10.373 00:40:22 -- common/autotest_common.sh@653 -- # es=1 00:06:10.373 00:40:22 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:10.373 00:40:22 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:06:10.373 00:40:22 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:10.373 00:40:22 -- event/cpu_locks.sh@122 -- # locks_exist 69865 00:06:10.373 00:40:22 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:10.373 00:40:22 -- event/cpu_locks.sh@22 -- # lslocks -p 69865 00:06:10.940 00:40:23 -- event/cpu_locks.sh@124 -- # killprocess 69865 00:06:10.940 00:40:23 -- common/autotest_common.sh@936 -- # '[' -z 69865 ']' 00:06:10.941 00:40:23 -- common/autotest_common.sh@940 -- # kill -0 69865 00:06:10.941 00:40:23 -- common/autotest_common.sh@941 -- # uname 00:06:10.941 00:40:23 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:06:10.941 00:40:23 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 69865 00:06:10.941 00:40:23 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:06:10.941 00:40:23 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:06:10.941 killing process with pid 69865 00:06:10.941 00:40:23 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 69865' 00:06:10.941 00:40:23 -- common/autotest_common.sh@955 -- # kill 69865 00:06:10.941 00:40:23 -- common/autotest_common.sh@960 -- # wait 69865 00:06:11.507 00:06:11.507 real 0m2.808s 00:06:11.507 user 0m3.261s 00:06:11.507 sys 0m0.629s 00:06:11.507 00:40:23 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:11.507 00:40:23 -- common/autotest_common.sh@10 -- # set +x 00:06:11.507 ************************************ 00:06:11.507 END TEST locking_app_on_locked_coremask 00:06:11.507 ************************************ 00:06:11.507 00:40:23 -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:06:11.507 00:40:23 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:11.507 00:40:23 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:11.507 00:40:23 -- common/autotest_common.sh@10 -- # set +x 00:06:11.507 ************************************ 00:06:11.508 START TEST locking_overlapped_coremask 00:06:11.508 ************************************ 00:06:11.508 00:40:23 -- common/autotest_common.sh@1114 -- # locking_overlapped_coremask 00:06:11.508 00:40:23 
-- event/cpu_locks.sh@132 -- # spdk_tgt_pid=69950 00:06:11.508 00:40:23 -- event/cpu_locks.sh@133 -- # waitforlisten 69950 /var/tmp/spdk.sock 00:06:11.508 00:40:23 -- common/autotest_common.sh@829 -- # '[' -z 69950 ']' 00:06:11.508 00:40:23 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:11.508 00:40:23 -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:11.508 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:11.508 00:40:23 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:11.508 00:40:23 -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:11.508 00:40:23 -- common/autotest_common.sh@10 -- # set +x 00:06:11.508 00:40:23 -- event/cpu_locks.sh@131 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 00:06:11.508 [2024-12-03 00:40:23.831075] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:11.508 [2024-12-03 00:40:23.831158] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69950 ] 00:06:11.508 [2024-12-03 00:40:23.962755] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:11.766 [2024-12-03 00:40:24.028035] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:06:11.766 [2024-12-03 00:40:24.028341] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:11.766 [2024-12-03 00:40:24.028709] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:06:11.766 [2024-12-03 00:40:24.028727] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:12.333 00:40:24 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:12.333 00:40:24 -- common/autotest_common.sh@862 -- # return 0 00:06:12.333 00:40:24 -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=69980 00:06:12.333 00:40:24 -- event/cpu_locks.sh@135 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:06:12.333 00:40:24 -- event/cpu_locks.sh@137 -- # NOT waitforlisten 69980 /var/tmp/spdk2.sock 00:06:12.333 00:40:24 -- common/autotest_common.sh@650 -- # local es=0 00:06:12.333 00:40:24 -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 69980 /var/tmp/spdk2.sock 00:06:12.333 00:40:24 -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:06:12.333 00:40:24 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:12.333 00:40:24 -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:06:12.333 00:40:24 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:12.333 00:40:24 -- common/autotest_common.sh@653 -- # waitforlisten 69980 /var/tmp/spdk2.sock 00:06:12.333 00:40:24 -- common/autotest_common.sh@829 -- # '[' -z 69980 ']' 00:06:12.333 00:40:24 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:12.333 00:40:24 -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:12.333 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:12.333 00:40:24 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 
00:06:12.333 00:40:24 -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:12.333 00:40:24 -- common/autotest_common.sh@10 -- # set +x 00:06:12.592 [2024-12-03 00:40:24.854529] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:12.592 [2024-12-03 00:40:24.854609] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69980 ] 00:06:12.592 [2024-12-03 00:40:24.998744] app.c: 665:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 69950 has claimed it. 00:06:12.592 [2024-12-03 00:40:24.998929] app.c: 791:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:06:13.158 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 844: kill: (69980) - No such process 00:06:13.158 ERROR: process (pid: 69980) is no longer running 00:06:13.158 00:40:25 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:13.158 00:40:25 -- common/autotest_common.sh@862 -- # return 1 00:06:13.158 00:40:25 -- common/autotest_common.sh@653 -- # es=1 00:06:13.158 00:40:25 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:13.158 00:40:25 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:06:13.158 00:40:25 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:13.158 00:40:25 -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:06:13.158 00:40:25 -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:06:13.158 00:40:25 -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:06:13.158 00:40:25 -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:06:13.158 00:40:25 -- event/cpu_locks.sh@141 -- # killprocess 69950 00:06:13.158 00:40:25 -- common/autotest_common.sh@936 -- # '[' -z 69950 ']' 00:06:13.158 00:40:25 -- common/autotest_common.sh@940 -- # kill -0 69950 00:06:13.158 00:40:25 -- common/autotest_common.sh@941 -- # uname 00:06:13.158 00:40:25 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:06:13.158 00:40:25 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 69950 00:06:13.158 killing process with pid 69950 00:06:13.158 00:40:25 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:06:13.158 00:40:25 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:06:13.158 00:40:25 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 69950' 00:06:13.158 00:40:25 -- common/autotest_common.sh@955 -- # kill 69950 00:06:13.158 00:40:25 -- common/autotest_common.sh@960 -- # wait 69950 00:06:13.724 00:06:13.724 real 0m2.333s 00:06:13.724 user 0m6.579s 00:06:13.724 sys 0m0.495s 00:06:13.724 ************************************ 00:06:13.724 END TEST locking_overlapped_coremask 00:06:13.724 ************************************ 00:06:13.724 00:40:26 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:13.724 00:40:26 -- common/autotest_common.sh@10 -- # set +x 00:06:13.724 00:40:26 -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:06:13.724 00:40:26 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:13.724 00:40:26 -- 
common/autotest_common.sh@1093 -- # xtrace_disable 00:06:13.724 00:40:26 -- common/autotest_common.sh@10 -- # set +x 00:06:13.724 ************************************ 00:06:13.724 START TEST locking_overlapped_coremask_via_rpc 00:06:13.724 ************************************ 00:06:13.724 00:40:26 -- common/autotest_common.sh@1114 -- # locking_overlapped_coremask_via_rpc 00:06:13.724 00:40:26 -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=70026 00:06:13.724 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:13.724 00:40:26 -- event/cpu_locks.sh@149 -- # waitforlisten 70026 /var/tmp/spdk.sock 00:06:13.724 00:40:26 -- common/autotest_common.sh@829 -- # '[' -z 70026 ']' 00:06:13.724 00:40:26 -- event/cpu_locks.sh@147 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:06:13.724 00:40:26 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:13.724 00:40:26 -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:13.724 00:40:26 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:13.724 00:40:26 -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:13.724 00:40:26 -- common/autotest_common.sh@10 -- # set +x 00:06:13.724 [2024-12-03 00:40:26.219040] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:13.724 [2024-12-03 00:40:26.219142] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70026 ] 00:06:13.983 [2024-12-03 00:40:26.357193] app.c: 795:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:06:13.983 [2024-12-03 00:40:26.357822] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:13.983 [2024-12-03 00:40:26.419853] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:06:13.983 [2024-12-03 00:40:26.420816] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:13.983 [2024-12-03 00:40:26.420893] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:06:13.983 [2024-12-03 00:40:26.420896] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:14.919 00:40:27 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:14.919 00:40:27 -- common/autotest_common.sh@862 -- # return 0 00:06:14.919 00:40:27 -- event/cpu_locks.sh@151 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:06:14.919 00:40:27 -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=70056 00:06:14.919 00:40:27 -- event/cpu_locks.sh@153 -- # waitforlisten 70056 /var/tmp/spdk2.sock 00:06:14.919 00:40:27 -- common/autotest_common.sh@829 -- # '[' -z 70056 ']' 00:06:14.919 00:40:27 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:14.919 00:40:27 -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:14.919 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:14.919 00:40:27 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 
00:06:14.919 00:40:27 -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:14.920 00:40:27 -- common/autotest_common.sh@10 -- # set +x 00:06:14.920 [2024-12-03 00:40:27.198210] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:14.920 [2024-12-03 00:40:27.198295] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70056 ] 00:06:14.920 [2024-12-03 00:40:27.334208] app.c: 795:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:06:14.920 [2024-12-03 00:40:27.334256] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:15.179 [2024-12-03 00:40:27.476569] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:06:15.179 [2024-12-03 00:40:27.477476] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:06:15.179 [2024-12-03 00:40:27.477594] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:06:15.179 [2024-12-03 00:40:27.477595] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:06:15.754 00:40:28 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:15.754 00:40:28 -- common/autotest_common.sh@862 -- # return 0 00:06:15.754 00:40:28 -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:06:15.754 00:40:28 -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:15.754 00:40:28 -- common/autotest_common.sh@10 -- # set +x 00:06:15.754 00:40:28 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:15.754 00:40:28 -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:15.754 00:40:28 -- common/autotest_common.sh@650 -- # local es=0 00:06:15.754 00:40:28 -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:15.754 00:40:28 -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:06:15.754 00:40:28 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:15.754 00:40:28 -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:06:15.754 00:40:28 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:15.754 00:40:28 -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:15.754 00:40:28 -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:15.754 00:40:28 -- common/autotest_common.sh@10 -- # set +x 00:06:15.754 [2024-12-03 00:40:28.220616] app.c: 665:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 70026 has claimed it. 00:06:15.754 2024/12/03 00:40:28 error on JSON-RPC call, method: framework_enable_cpumask_locks, params: map[], err: error received for framework_enable_cpumask_locks method, err: Code=-32603 Msg=Failed to claim CPU core: 2 00:06:15.754 request: 00:06:15.754 { 00:06:15.754 "method": "framework_enable_cpumask_locks", 00:06:15.754 "params": {} 00:06:15.754 } 00:06:15.754 Got JSON-RPC error response 00:06:15.754 GoRPCClient: error on JSON-RPC call 00:06:15.754 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:06:15.754 00:40:28 -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:06:15.754 00:40:28 -- common/autotest_common.sh@653 -- # es=1 00:06:15.754 00:40:28 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:15.754 00:40:28 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:06:15.754 00:40:28 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:15.754 00:40:28 -- event/cpu_locks.sh@158 -- # waitforlisten 70026 /var/tmp/spdk.sock 00:06:15.754 00:40:28 -- common/autotest_common.sh@829 -- # '[' -z 70026 ']' 00:06:15.754 00:40:28 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:15.754 00:40:28 -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:15.754 00:40:28 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:15.754 00:40:28 -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:15.754 00:40:28 -- common/autotest_common.sh@10 -- # set +x 00:06:16.014 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:16.014 00:40:28 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:16.014 00:40:28 -- common/autotest_common.sh@862 -- # return 0 00:06:16.014 00:40:28 -- event/cpu_locks.sh@159 -- # waitforlisten 70056 /var/tmp/spdk2.sock 00:06:16.014 00:40:28 -- common/autotest_common.sh@829 -- # '[' -z 70056 ']' 00:06:16.014 00:40:28 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:16.014 00:40:28 -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:16.014 00:40:28 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:16.014 00:40:28 -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:16.014 00:40:28 -- common/autotest_common.sh@10 -- # set +x 00:06:16.273 ************************************ 00:06:16.273 END TEST locking_overlapped_coremask_via_rpc 00:06:16.273 ************************************ 00:06:16.273 00:40:28 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:16.273 00:40:28 -- common/autotest_common.sh@862 -- # return 0 00:06:16.273 00:40:28 -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:06:16.273 00:40:28 -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:06:16.273 00:40:28 -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:06:16.274 00:40:28 -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:06:16.274 00:06:16.274 real 0m2.602s 00:06:16.274 user 0m1.356s 00:06:16.274 sys 0m0.181s 00:06:16.274 00:40:28 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:16.274 00:40:28 -- common/autotest_common.sh@10 -- # set +x 00:06:16.533 00:40:28 -- event/cpu_locks.sh@174 -- # cleanup 00:06:16.533 00:40:28 -- event/cpu_locks.sh@15 -- # [[ -z 70026 ]] 00:06:16.533 00:40:28 -- event/cpu_locks.sh@15 -- # killprocess 70026 00:06:16.533 00:40:28 -- common/autotest_common.sh@936 -- # '[' -z 70026 ']' 00:06:16.533 00:40:28 -- common/autotest_common.sh@940 -- # kill -0 70026 00:06:16.533 00:40:28 -- common/autotest_common.sh@941 -- # uname 00:06:16.533 00:40:28 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:06:16.533 00:40:28 -- common/autotest_common.sh@942 -- # ps 
--no-headers -o comm= 70026 00:06:16.533 killing process with pid 70026 00:06:16.533 00:40:28 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:06:16.533 00:40:28 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:06:16.533 00:40:28 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 70026' 00:06:16.533 00:40:28 -- common/autotest_common.sh@955 -- # kill 70026 00:06:16.533 00:40:28 -- common/autotest_common.sh@960 -- # wait 70026 00:06:17.102 00:40:29 -- event/cpu_locks.sh@16 -- # [[ -z 70056 ]] 00:06:17.102 00:40:29 -- event/cpu_locks.sh@16 -- # killprocess 70056 00:06:17.102 00:40:29 -- common/autotest_common.sh@936 -- # '[' -z 70056 ']' 00:06:17.102 00:40:29 -- common/autotest_common.sh@940 -- # kill -0 70056 00:06:17.102 00:40:29 -- common/autotest_common.sh@941 -- # uname 00:06:17.102 00:40:29 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:06:17.102 00:40:29 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 70056 00:06:17.102 00:40:29 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:06:17.102 00:40:29 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:06:17.102 killing process with pid 70056 00:06:17.102 00:40:29 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 70056' 00:06:17.102 00:40:29 -- common/autotest_common.sh@955 -- # kill 70056 00:06:17.102 00:40:29 -- common/autotest_common.sh@960 -- # wait 70056 00:06:17.671 00:40:29 -- event/cpu_locks.sh@18 -- # rm -f 00:06:17.671 00:40:29 -- event/cpu_locks.sh@1 -- # cleanup 00:06:17.671 00:40:29 -- event/cpu_locks.sh@15 -- # [[ -z 70026 ]] 00:06:17.671 00:40:29 -- event/cpu_locks.sh@15 -- # killprocess 70026 00:06:17.671 00:40:29 -- common/autotest_common.sh@936 -- # '[' -z 70026 ']' 00:06:17.671 Process with pid 70026 is not found 00:06:17.671 00:40:29 -- common/autotest_common.sh@940 -- # kill -0 70026 00:06:17.671 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 940: kill: (70026) - No such process 00:06:17.671 00:40:29 -- common/autotest_common.sh@963 -- # echo 'Process with pid 70026 is not found' 00:06:17.671 00:40:29 -- event/cpu_locks.sh@16 -- # [[ -z 70056 ]] 00:06:17.671 00:40:29 -- event/cpu_locks.sh@16 -- # killprocess 70056 00:06:17.671 00:40:29 -- common/autotest_common.sh@936 -- # '[' -z 70056 ']' 00:06:17.671 00:40:29 -- common/autotest_common.sh@940 -- # kill -0 70056 00:06:17.671 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 940: kill: (70056) - No such process 00:06:17.671 Process with pid 70056 is not found 00:06:17.671 00:40:29 -- common/autotest_common.sh@963 -- # echo 'Process with pid 70056 is not found' 00:06:17.671 00:40:29 -- event/cpu_locks.sh@18 -- # rm -f 00:06:17.671 00:06:17.671 real 0m21.173s 00:06:17.671 user 0m37.594s 00:06:17.671 sys 0m5.590s 00:06:17.671 00:40:29 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:17.671 ************************************ 00:06:17.671 END TEST cpu_locks 00:06:17.671 ************************************ 00:06:17.671 00:40:29 -- common/autotest_common.sh@10 -- # set +x 00:06:17.671 ************************************ 00:06:17.671 END TEST event 00:06:17.671 ************************************ 00:06:17.671 00:06:17.671 real 0m48.291s 00:06:17.671 user 1m32.389s 00:06:17.671 sys 0m9.133s 00:06:17.671 00:40:29 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:17.671 00:40:29 -- common/autotest_common.sh@10 -- # set +x 00:06:17.671 00:40:30 -- spdk/autotest.sh@175 -- # run_test thread 
/home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:06:17.671 00:40:30 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:17.671 00:40:30 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:17.671 00:40:30 -- common/autotest_common.sh@10 -- # set +x 00:06:17.671 ************************************ 00:06:17.671 START TEST thread 00:06:17.671 ************************************ 00:06:17.671 00:40:30 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:06:17.671 * Looking for test storage... 00:06:17.671 * Found test storage at /home/vagrant/spdk_repo/spdk/test/thread 00:06:17.671 00:40:30 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:06:17.671 00:40:30 -- common/autotest_common.sh@1690 -- # lcov --version 00:06:17.671 00:40:30 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:06:17.931 00:40:30 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:06:17.931 00:40:30 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:06:17.931 00:40:30 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:06:17.931 00:40:30 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:06:17.931 00:40:30 -- scripts/common.sh@335 -- # IFS=.-: 00:06:17.931 00:40:30 -- scripts/common.sh@335 -- # read -ra ver1 00:06:17.931 00:40:30 -- scripts/common.sh@336 -- # IFS=.-: 00:06:17.931 00:40:30 -- scripts/common.sh@336 -- # read -ra ver2 00:06:17.931 00:40:30 -- scripts/common.sh@337 -- # local 'op=<' 00:06:17.931 00:40:30 -- scripts/common.sh@339 -- # ver1_l=2 00:06:17.931 00:40:30 -- scripts/common.sh@340 -- # ver2_l=1 00:06:17.931 00:40:30 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:06:17.931 00:40:30 -- scripts/common.sh@343 -- # case "$op" in 00:06:17.931 00:40:30 -- scripts/common.sh@344 -- # : 1 00:06:17.931 00:40:30 -- scripts/common.sh@363 -- # (( v = 0 )) 00:06:17.931 00:40:30 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:17.931 00:40:30 -- scripts/common.sh@364 -- # decimal 1 00:06:17.931 00:40:30 -- scripts/common.sh@352 -- # local d=1 00:06:17.931 00:40:30 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:17.931 00:40:30 -- scripts/common.sh@354 -- # echo 1 00:06:17.931 00:40:30 -- scripts/common.sh@364 -- # ver1[v]=1 00:06:17.931 00:40:30 -- scripts/common.sh@365 -- # decimal 2 00:06:17.931 00:40:30 -- scripts/common.sh@352 -- # local d=2 00:06:17.931 00:40:30 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:17.931 00:40:30 -- scripts/common.sh@354 -- # echo 2 00:06:17.931 00:40:30 -- scripts/common.sh@365 -- # ver2[v]=2 00:06:17.931 00:40:30 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:06:17.931 00:40:30 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:06:17.931 00:40:30 -- scripts/common.sh@367 -- # return 0 00:06:17.931 00:40:30 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:17.931 00:40:30 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:06:17.931 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:17.931 --rc genhtml_branch_coverage=1 00:06:17.931 --rc genhtml_function_coverage=1 00:06:17.931 --rc genhtml_legend=1 00:06:17.931 --rc geninfo_all_blocks=1 00:06:17.931 --rc geninfo_unexecuted_blocks=1 00:06:17.931 00:06:17.931 ' 00:06:17.931 00:40:30 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:06:17.931 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:17.931 --rc genhtml_branch_coverage=1 00:06:17.931 --rc genhtml_function_coverage=1 00:06:17.931 --rc genhtml_legend=1 00:06:17.931 --rc geninfo_all_blocks=1 00:06:17.931 --rc geninfo_unexecuted_blocks=1 00:06:17.931 00:06:17.931 ' 00:06:17.931 00:40:30 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:06:17.931 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:17.931 --rc genhtml_branch_coverage=1 00:06:17.931 --rc genhtml_function_coverage=1 00:06:17.931 --rc genhtml_legend=1 00:06:17.931 --rc geninfo_all_blocks=1 00:06:17.931 --rc geninfo_unexecuted_blocks=1 00:06:17.931 00:06:17.931 ' 00:06:17.931 00:40:30 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:06:17.931 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:17.931 --rc genhtml_branch_coverage=1 00:06:17.931 --rc genhtml_function_coverage=1 00:06:17.931 --rc genhtml_legend=1 00:06:17.931 --rc geninfo_all_blocks=1 00:06:17.931 --rc geninfo_unexecuted_blocks=1 00:06:17.931 00:06:17.931 ' 00:06:17.931 00:40:30 -- thread/thread.sh@11 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:06:17.931 00:40:30 -- common/autotest_common.sh@1087 -- # '[' 8 -le 1 ']' 00:06:17.931 00:40:30 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:17.931 00:40:30 -- common/autotest_common.sh@10 -- # set +x 00:06:17.931 ************************************ 00:06:17.931 START TEST thread_poller_perf 00:06:17.931 ************************************ 00:06:17.931 00:40:30 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:06:17.931 [2024-12-03 00:40:30.241123] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:06:17.931 [2024-12-03 00:40:30.241329] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70215 ] 00:06:17.931 [2024-12-03 00:40:30.373058] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:18.191 [2024-12-03 00:40:30.449327] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:18.191 Running 1000 pollers for 1 seconds with 1 microseconds period. 00:06:19.129 [2024-12-03T00:40:31.644Z] ====================================== 00:06:19.129 [2024-12-03T00:40:31.644Z] busy:2207358608 (cyc) 00:06:19.129 [2024-12-03T00:40:31.644Z] total_run_count: 389000 00:06:19.129 [2024-12-03T00:40:31.644Z] tsc_hz: 2200000000 (cyc) 00:06:19.129 [2024-12-03T00:40:31.644Z] ====================================== 00:06:19.129 [2024-12-03T00:40:31.644Z] poller_cost: 5674 (cyc), 2579 (nsec) 00:06:19.129 00:06:19.129 real 0m1.304s 00:06:19.129 user 0m1.125s 00:06:19.129 sys 0m0.069s 00:06:19.129 ************************************ 00:06:19.129 END TEST thread_poller_perf 00:06:19.129 ************************************ 00:06:19.129 00:40:31 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:19.129 00:40:31 -- common/autotest_common.sh@10 -- # set +x 00:06:19.129 00:40:31 -- thread/thread.sh@12 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:06:19.129 00:40:31 -- common/autotest_common.sh@1087 -- # '[' 8 -le 1 ']' 00:06:19.129 00:40:31 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:19.129 00:40:31 -- common/autotest_common.sh@10 -- # set +x 00:06:19.129 ************************************ 00:06:19.129 START TEST thread_poller_perf 00:06:19.129 ************************************ 00:06:19.129 00:40:31 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:06:19.129 [2024-12-03 00:40:31.599635] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:19.129 [2024-12-03 00:40:31.599758] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70245 ] 00:06:19.388 [2024-12-03 00:40:31.727348] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:19.388 [2024-12-03 00:40:31.797764] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:19.388 Running 1000 pollers for 1 seconds with 0 microseconds period. 
00:06:20.764 [2024-12-03T00:40:33.279Z] ====================================== 00:06:20.764 [2024-12-03T00:40:33.279Z] busy:2202946900 (cyc) 00:06:20.764 [2024-12-03T00:40:33.279Z] total_run_count: 5338000 00:06:20.764 [2024-12-03T00:40:33.279Z] tsc_hz: 2200000000 (cyc) 00:06:20.764 [2024-12-03T00:40:33.279Z] ====================================== 00:06:20.764 [2024-12-03T00:40:33.279Z] poller_cost: 412 (cyc), 187 (nsec) 00:06:20.764 00:06:20.764 real 0m1.313s 00:06:20.764 user 0m1.136s 00:06:20.764 sys 0m0.068s 00:06:20.764 00:40:32 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:20.764 ************************************ 00:06:20.764 END TEST thread_poller_perf 00:06:20.764 ************************************ 00:06:20.764 00:40:32 -- common/autotest_common.sh@10 -- # set +x 00:06:20.764 00:40:32 -- thread/thread.sh@17 -- # [[ y != \y ]] 00:06:20.764 00:06:20.765 real 0m2.918s 00:06:20.765 user 0m2.411s 00:06:20.765 sys 0m0.287s 00:06:20.765 ************************************ 00:06:20.765 END TEST thread 00:06:20.765 ************************************ 00:06:20.765 00:40:32 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:20.765 00:40:32 -- common/autotest_common.sh@10 -- # set +x 00:06:20.765 00:40:32 -- spdk/autotest.sh@176 -- # run_test accel /home/vagrant/spdk_repo/spdk/test/accel/accel.sh 00:06:20.765 00:40:32 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:20.765 00:40:32 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:20.765 00:40:32 -- common/autotest_common.sh@10 -- # set +x 00:06:20.765 ************************************ 00:06:20.765 START TEST accel 00:06:20.765 ************************************ 00:06:20.765 00:40:32 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/accel/accel.sh 00:06:20.765 * Looking for test storage... 00:06:20.765 * Found test storage at /home/vagrant/spdk_repo/spdk/test/accel 00:06:20.765 00:40:33 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:06:20.765 00:40:33 -- common/autotest_common.sh@1690 -- # lcov --version 00:06:20.765 00:40:33 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:06:20.765 00:40:33 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:06:20.765 00:40:33 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:06:20.765 00:40:33 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:06:20.765 00:40:33 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:06:20.765 00:40:33 -- scripts/common.sh@335 -- # IFS=.-: 00:06:20.765 00:40:33 -- scripts/common.sh@335 -- # read -ra ver1 00:06:20.765 00:40:33 -- scripts/common.sh@336 -- # IFS=.-: 00:06:20.765 00:40:33 -- scripts/common.sh@336 -- # read -ra ver2 00:06:20.765 00:40:33 -- scripts/common.sh@337 -- # local 'op=<' 00:06:20.765 00:40:33 -- scripts/common.sh@339 -- # ver1_l=2 00:06:20.765 00:40:33 -- scripts/common.sh@340 -- # ver2_l=1 00:06:20.765 00:40:33 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:06:20.765 00:40:33 -- scripts/common.sh@343 -- # case "$op" in 00:06:20.765 00:40:33 -- scripts/common.sh@344 -- # : 1 00:06:20.765 00:40:33 -- scripts/common.sh@363 -- # (( v = 0 )) 00:06:20.765 00:40:33 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:20.765 00:40:33 -- scripts/common.sh@364 -- # decimal 1 00:06:20.765 00:40:33 -- scripts/common.sh@352 -- # local d=1 00:06:20.765 00:40:33 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:20.765 00:40:33 -- scripts/common.sh@354 -- # echo 1 00:06:20.765 00:40:33 -- scripts/common.sh@364 -- # ver1[v]=1 00:06:20.765 00:40:33 -- scripts/common.sh@365 -- # decimal 2 00:06:20.765 00:40:33 -- scripts/common.sh@352 -- # local d=2 00:06:20.765 00:40:33 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:20.765 00:40:33 -- scripts/common.sh@354 -- # echo 2 00:06:20.765 00:40:33 -- scripts/common.sh@365 -- # ver2[v]=2 00:06:20.765 00:40:33 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:06:20.765 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:20.765 00:40:33 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:06:20.765 00:40:33 -- scripts/common.sh@367 -- # return 0 00:06:20.765 00:40:33 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:20.765 00:40:33 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:06:20.765 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:20.765 --rc genhtml_branch_coverage=1 00:06:20.765 --rc genhtml_function_coverage=1 00:06:20.765 --rc genhtml_legend=1 00:06:20.765 --rc geninfo_all_blocks=1 00:06:20.765 --rc geninfo_unexecuted_blocks=1 00:06:20.765 00:06:20.765 ' 00:06:20.765 00:40:33 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:06:20.765 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:20.765 --rc genhtml_branch_coverage=1 00:06:20.765 --rc genhtml_function_coverage=1 00:06:20.765 --rc genhtml_legend=1 00:06:20.765 --rc geninfo_all_blocks=1 00:06:20.765 --rc geninfo_unexecuted_blocks=1 00:06:20.765 00:06:20.765 ' 00:06:20.765 00:40:33 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:06:20.765 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:20.765 --rc genhtml_branch_coverage=1 00:06:20.765 --rc genhtml_function_coverage=1 00:06:20.765 --rc genhtml_legend=1 00:06:20.765 --rc geninfo_all_blocks=1 00:06:20.765 --rc geninfo_unexecuted_blocks=1 00:06:20.765 00:06:20.765 ' 00:06:20.765 00:40:33 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:06:20.765 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:20.765 --rc genhtml_branch_coverage=1 00:06:20.765 --rc genhtml_function_coverage=1 00:06:20.765 --rc genhtml_legend=1 00:06:20.765 --rc geninfo_all_blocks=1 00:06:20.765 --rc geninfo_unexecuted_blocks=1 00:06:20.765 00:06:20.765 ' 00:06:20.765 00:40:33 -- accel/accel.sh@73 -- # declare -A expected_opcs 00:06:20.765 00:40:33 -- accel/accel.sh@74 -- # get_expected_opcs 00:06:20.765 00:40:33 -- accel/accel.sh@57 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:06:20.765 00:40:33 -- accel/accel.sh@59 -- # spdk_tgt_pid=70334 00:06:20.765 00:40:33 -- accel/accel.sh@60 -- # waitforlisten 70334 00:06:20.765 00:40:33 -- common/autotest_common.sh@829 -- # '[' -z 70334 ']' 00:06:20.765 00:40:33 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:20.765 00:40:33 -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:20.765 00:40:33 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:06:20.765 00:40:33 -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:20.765 00:40:33 -- common/autotest_common.sh@10 -- # set +x 00:06:20.765 00:40:33 -- accel/accel.sh@58 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -c /dev/fd/63 00:06:20.765 00:40:33 -- accel/accel.sh@58 -- # build_accel_config 00:06:20.765 00:40:33 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:20.765 00:40:33 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:20.765 00:40:33 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:20.765 00:40:33 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:20.765 00:40:33 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:20.765 00:40:33 -- accel/accel.sh@41 -- # local IFS=, 00:06:20.765 00:40:33 -- accel/accel.sh@42 -- # jq -r . 00:06:20.765 [2024-12-03 00:40:33.250632] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:20.765 [2024-12-03 00:40:33.250978] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70334 ] 00:06:21.030 [2024-12-03 00:40:33.388781] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:21.030 [2024-12-03 00:40:33.445389] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:06:21.030 [2024-12-03 00:40:33.445838] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:21.966 00:40:34 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:21.966 00:40:34 -- common/autotest_common.sh@862 -- # return 0 00:06:21.966 00:40:34 -- accel/accel.sh@62 -- # exp_opcs=($($rpc_py accel_get_opc_assignments | jq -r ". | to_entries | map(\"\(.key)=\(.value)\") | .[]")) 00:06:21.966 00:40:34 -- accel/accel.sh@62 -- # rpc_cmd accel_get_opc_assignments 00:06:21.966 00:40:34 -- accel/accel.sh@62 -- # jq -r '. 
| to_entries | map("\(.key)=\(.value)") | .[]' 00:06:21.967 00:40:34 -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:21.967 00:40:34 -- common/autotest_common.sh@10 -- # set +x 00:06:21.967 00:40:34 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:21.967 00:40:34 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:21.967 00:40:34 -- accel/accel.sh@64 -- # IFS== 00:06:21.967 00:40:34 -- accel/accel.sh@64 -- # read -r opc module 00:06:21.967 00:40:34 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:21.967 00:40:34 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:21.967 00:40:34 -- accel/accel.sh@64 -- # IFS== 00:06:21.967 00:40:34 -- accel/accel.sh@64 -- # read -r opc module 00:06:21.967 00:40:34 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:21.967 00:40:34 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:21.967 00:40:34 -- accel/accel.sh@64 -- # IFS== 00:06:21.967 00:40:34 -- accel/accel.sh@64 -- # read -r opc module 00:06:21.967 00:40:34 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:21.967 00:40:34 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:21.967 00:40:34 -- accel/accel.sh@64 -- # IFS== 00:06:21.967 00:40:34 -- accel/accel.sh@64 -- # read -r opc module 00:06:21.967 00:40:34 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:21.967 00:40:34 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:21.967 00:40:34 -- accel/accel.sh@64 -- # IFS== 00:06:21.967 00:40:34 -- accel/accel.sh@64 -- # read -r opc module 00:06:21.967 00:40:34 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:21.967 00:40:34 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:21.967 00:40:34 -- accel/accel.sh@64 -- # IFS== 00:06:21.967 00:40:34 -- accel/accel.sh@64 -- # read -r opc module 00:06:21.967 00:40:34 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:21.967 00:40:34 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:21.967 00:40:34 -- accel/accel.sh@64 -- # IFS== 00:06:21.967 00:40:34 -- accel/accel.sh@64 -- # read -r opc module 00:06:21.967 00:40:34 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:21.967 00:40:34 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:21.967 00:40:34 -- accel/accel.sh@64 -- # IFS== 00:06:21.967 00:40:34 -- accel/accel.sh@64 -- # read -r opc module 00:06:21.967 00:40:34 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:21.967 00:40:34 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:21.967 00:40:34 -- accel/accel.sh@64 -- # IFS== 00:06:21.967 00:40:34 -- accel/accel.sh@64 -- # read -r opc module 00:06:21.967 00:40:34 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:21.967 00:40:34 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:21.967 00:40:34 -- accel/accel.sh@64 -- # IFS== 00:06:21.967 00:40:34 -- accel/accel.sh@64 -- # read -r opc module 00:06:21.967 00:40:34 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:21.967 00:40:34 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:21.967 00:40:34 -- accel/accel.sh@64 -- # IFS== 00:06:21.967 00:40:34 -- accel/accel.sh@64 -- # read -r opc module 00:06:21.967 00:40:34 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:21.967 00:40:34 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:21.967 00:40:34 -- accel/accel.sh@64 -- # IFS== 00:06:21.967 00:40:34 -- accel/accel.sh@64 -- # read -r opc module 00:06:21.967 
00:40:34 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:21.967 00:40:34 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:21.967 00:40:34 -- accel/accel.sh@64 -- # IFS== 00:06:21.967 00:40:34 -- accel/accel.sh@64 -- # read -r opc module 00:06:21.967 00:40:34 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:21.967 00:40:34 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:21.967 00:40:34 -- accel/accel.sh@64 -- # IFS== 00:06:21.967 00:40:34 -- accel/accel.sh@64 -- # read -r opc module 00:06:21.967 00:40:34 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:21.967 00:40:34 -- accel/accel.sh@67 -- # killprocess 70334 00:06:21.967 00:40:34 -- common/autotest_common.sh@936 -- # '[' -z 70334 ']' 00:06:21.967 00:40:34 -- common/autotest_common.sh@940 -- # kill -0 70334 00:06:21.967 00:40:34 -- common/autotest_common.sh@941 -- # uname 00:06:21.967 00:40:34 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:06:21.967 00:40:34 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 70334 00:06:21.967 00:40:34 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:06:21.967 00:40:34 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:06:21.967 00:40:34 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 70334' 00:06:21.967 killing process with pid 70334 00:06:21.967 00:40:34 -- common/autotest_common.sh@955 -- # kill 70334 00:06:21.967 00:40:34 -- common/autotest_common.sh@960 -- # wait 70334 00:06:22.225 00:40:34 -- accel/accel.sh@68 -- # trap - ERR 00:06:22.225 00:40:34 -- accel/accel.sh@81 -- # run_test accel_help accel_perf -h 00:06:22.225 00:40:34 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:06:22.225 00:40:34 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:22.225 00:40:34 -- common/autotest_common.sh@10 -- # set +x 00:06:22.225 00:40:34 -- common/autotest_common.sh@1114 -- # accel_perf -h 00:06:22.225 00:40:34 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -h 00:06:22.225 00:40:34 -- accel/accel.sh@12 -- # build_accel_config 00:06:22.225 00:40:34 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:22.225 00:40:34 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:22.225 00:40:34 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:22.225 00:40:34 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:22.225 00:40:34 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:22.225 00:40:34 -- accel/accel.sh@41 -- # local IFS=, 00:06:22.225 00:40:34 -- accel/accel.sh@42 -- # jq -r . 
00:06:22.225 00:40:34 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:22.225 00:40:34 -- common/autotest_common.sh@10 -- # set +x 00:06:22.225 00:40:34 -- accel/accel.sh@83 -- # run_test accel_missing_filename NOT accel_perf -t 1 -w compress 00:06:22.225 00:40:34 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:06:22.225 00:40:34 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:22.225 00:40:34 -- common/autotest_common.sh@10 -- # set +x 00:06:22.225 ************************************ 00:06:22.225 START TEST accel_missing_filename 00:06:22.225 ************************************ 00:06:22.225 00:40:34 -- common/autotest_common.sh@1114 -- # NOT accel_perf -t 1 -w compress 00:06:22.225 00:40:34 -- common/autotest_common.sh@650 -- # local es=0 00:06:22.225 00:40:34 -- common/autotest_common.sh@652 -- # valid_exec_arg accel_perf -t 1 -w compress 00:06:22.225 00:40:34 -- common/autotest_common.sh@638 -- # local arg=accel_perf 00:06:22.225 00:40:34 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:22.225 00:40:34 -- common/autotest_common.sh@642 -- # type -t accel_perf 00:06:22.225 00:40:34 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:22.225 00:40:34 -- common/autotest_common.sh@653 -- # accel_perf -t 1 -w compress 00:06:22.225 00:40:34 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress 00:06:22.225 00:40:34 -- accel/accel.sh@12 -- # build_accel_config 00:06:22.225 00:40:34 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:22.225 00:40:34 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:22.225 00:40:34 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:22.225 00:40:34 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:22.225 00:40:34 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:22.225 00:40:34 -- accel/accel.sh@41 -- # local IFS=, 00:06:22.225 00:40:34 -- accel/accel.sh@42 -- # jq -r . 00:06:22.483 [2024-12-03 00:40:34.746869] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:22.483 [2024-12-03 00:40:34.746964] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70398 ] 00:06:22.483 [2024-12-03 00:40:34.884103] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:22.483 [2024-12-03 00:40:34.938006] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:22.483 [2024-12-03 00:40:34.989863] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:22.742 [2024-12-03 00:40:35.061756] accel_perf.c:1385:main: *ERROR*: ERROR starting application 00:06:22.742 A filename is required. 
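(Illustrative note, not part of the captured log: the "A filename is required." error above is the expected outcome — the run passed '-w compress' with no '-l <input file>'. A compress run that is meant to succeed names an input file, as the accel_compress_verify test below does. A minimal sketch using only flags and paths that appear in this log, omitting the '-c /dev/fd/62' accel config that the harness adds:
  /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y)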
00:06:22.742 00:40:35 -- common/autotest_common.sh@653 -- # es=234 00:06:22.742 00:40:35 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:22.742 00:40:35 -- common/autotest_common.sh@662 -- # es=106 00:06:22.742 ************************************ 00:06:22.742 END TEST accel_missing_filename 00:06:22.742 ************************************ 00:06:22.742 00:40:35 -- common/autotest_common.sh@663 -- # case "$es" in 00:06:22.742 00:40:35 -- common/autotest_common.sh@670 -- # es=1 00:06:22.742 00:40:35 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:22.742 00:06:22.742 real 0m0.414s 00:06:22.742 user 0m0.246s 00:06:22.742 sys 0m0.117s 00:06:22.742 00:40:35 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:22.742 00:40:35 -- common/autotest_common.sh@10 -- # set +x 00:06:22.742 00:40:35 -- accel/accel.sh@85 -- # run_test accel_compress_verify NOT accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:06:22.742 00:40:35 -- common/autotest_common.sh@1087 -- # '[' 10 -le 1 ']' 00:06:22.742 00:40:35 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:22.742 00:40:35 -- common/autotest_common.sh@10 -- # set +x 00:06:22.742 ************************************ 00:06:22.742 START TEST accel_compress_verify 00:06:22.742 ************************************ 00:06:22.742 00:40:35 -- common/autotest_common.sh@1114 -- # NOT accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:06:22.742 00:40:35 -- common/autotest_common.sh@650 -- # local es=0 00:06:22.742 00:40:35 -- common/autotest_common.sh@652 -- # valid_exec_arg accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:06:22.742 00:40:35 -- common/autotest_common.sh@638 -- # local arg=accel_perf 00:06:22.742 00:40:35 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:22.742 00:40:35 -- common/autotest_common.sh@642 -- # type -t accel_perf 00:06:22.742 00:40:35 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:22.742 00:40:35 -- common/autotest_common.sh@653 -- # accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:06:22.742 00:40:35 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:06:22.742 00:40:35 -- accel/accel.sh@12 -- # build_accel_config 00:06:22.742 00:40:35 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:22.742 00:40:35 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:22.742 00:40:35 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:22.742 00:40:35 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:22.742 00:40:35 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:22.742 00:40:35 -- accel/accel.sh@41 -- # local IFS=, 00:06:22.742 00:40:35 -- accel/accel.sh@42 -- # jq -r . 00:06:22.742 [2024-12-03 00:40:35.216981] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:06:22.743 [2024-12-03 00:40:35.217229] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70428 ] 00:06:23.001 [2024-12-03 00:40:35.355446] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:23.001 [2024-12-03 00:40:35.416648] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:23.001 [2024-12-03 00:40:35.471203] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:23.261 [2024-12-03 00:40:35.543217] accel_perf.c:1385:main: *ERROR*: ERROR starting application 00:06:23.261 00:06:23.261 Compression does not support the verify option, aborting. 00:06:23.261 00:40:35 -- common/autotest_common.sh@653 -- # es=161 00:06:23.261 00:40:35 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:23.261 00:40:35 -- common/autotest_common.sh@662 -- # es=33 00:06:23.261 00:40:35 -- common/autotest_common.sh@663 -- # case "$es" in 00:06:23.261 00:40:35 -- common/autotest_common.sh@670 -- # es=1 00:06:23.261 00:40:35 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:23.261 00:06:23.261 real 0m0.427s 00:06:23.261 user 0m0.259s 00:06:23.261 sys 0m0.115s 00:06:23.261 ************************************ 00:06:23.261 END TEST accel_compress_verify 00:06:23.261 ************************************ 00:06:23.261 00:40:35 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:23.261 00:40:35 -- common/autotest_common.sh@10 -- # set +x 00:06:23.261 00:40:35 -- accel/accel.sh@87 -- # run_test accel_wrong_workload NOT accel_perf -t 1 -w foobar 00:06:23.261 00:40:35 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:06:23.261 00:40:35 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:23.261 00:40:35 -- common/autotest_common.sh@10 -- # set +x 00:06:23.261 ************************************ 00:06:23.261 START TEST accel_wrong_workload 00:06:23.261 ************************************ 00:06:23.261 00:40:35 -- common/autotest_common.sh@1114 -- # NOT accel_perf -t 1 -w foobar 00:06:23.261 00:40:35 -- common/autotest_common.sh@650 -- # local es=0 00:06:23.261 00:40:35 -- common/autotest_common.sh@652 -- # valid_exec_arg accel_perf -t 1 -w foobar 00:06:23.261 00:40:35 -- common/autotest_common.sh@638 -- # local arg=accel_perf 00:06:23.261 00:40:35 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:23.261 00:40:35 -- common/autotest_common.sh@642 -- # type -t accel_perf 00:06:23.261 00:40:35 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:23.261 00:40:35 -- common/autotest_common.sh@653 -- # accel_perf -t 1 -w foobar 00:06:23.261 00:40:35 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w foobar 00:06:23.261 00:40:35 -- accel/accel.sh@12 -- # build_accel_config 00:06:23.261 00:40:35 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:23.261 00:40:35 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:23.261 00:40:35 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:23.261 00:40:35 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:23.261 00:40:35 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:23.261 00:40:35 -- accel/accel.sh@41 -- # local IFS=, 00:06:23.261 00:40:35 -- accel/accel.sh@42 -- # jq -r . 
00:06:23.261 Unsupported workload type: foobar 00:06:23.261 [2024-12-03 00:40:35.694524] app.c:1292:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'w' failed: 1 00:06:23.261 accel_perf options: 00:06:23.261 [-h help message] 00:06:23.261 [-q queue depth per core] 00:06:23.261 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:06:23.261 [-T number of threads per core 00:06:23.261 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:06:23.261 [-t time in seconds] 00:06:23.261 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:06:23.261 [ dif_verify, , dif_generate, dif_generate_copy 00:06:23.261 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:06:23.261 [-l for compress/decompress workloads, name of uncompressed input file 00:06:23.261 [-S for crc32c workload, use this seed value (default 0) 00:06:23.261 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:06:23.261 [-f for fill workload, use this BYTE value (default 255) 00:06:23.261 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:06:23.261 [-y verify result if this switch is on] 00:06:23.261 [-a tasks to allocate per core (default: same value as -q)] 00:06:23.261 Can be used to spread operations across a wider range of memory. 00:06:23.261 00:40:35 -- common/autotest_common.sh@653 -- # es=1 00:06:23.261 00:40:35 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:23.261 00:40:35 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:06:23.261 00:40:35 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:23.261 ************************************ 00:06:23.261 END TEST accel_wrong_workload 00:06:23.261 ************************************ 00:06:23.261 00:06:23.261 real 0m0.030s 00:06:23.261 user 0m0.014s 00:06:23.261 sys 0m0.015s 00:06:23.261 00:40:35 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:23.261 00:40:35 -- common/autotest_common.sh@10 -- # set +x 00:06:23.261 00:40:35 -- accel/accel.sh@89 -- # run_test accel_negative_buffers NOT accel_perf -t 1 -w xor -y -x -1 00:06:23.261 00:40:35 -- common/autotest_common.sh@1087 -- # '[' 10 -le 1 ']' 00:06:23.261 00:40:35 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:23.261 00:40:35 -- common/autotest_common.sh@10 -- # set +x 00:06:23.261 ************************************ 00:06:23.261 START TEST accel_negative_buffers 00:06:23.261 ************************************ 00:06:23.261 00:40:35 -- common/autotest_common.sh@1114 -- # NOT accel_perf -t 1 -w xor -y -x -1 00:06:23.261 00:40:35 -- common/autotest_common.sh@650 -- # local es=0 00:06:23.261 00:40:35 -- common/autotest_common.sh@652 -- # valid_exec_arg accel_perf -t 1 -w xor -y -x -1 00:06:23.261 00:40:35 -- common/autotest_common.sh@638 -- # local arg=accel_perf 00:06:23.261 00:40:35 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:23.261 00:40:35 -- common/autotest_common.sh@642 -- # type -t accel_perf 00:06:23.261 00:40:35 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:23.261 00:40:35 -- common/autotest_common.sh@653 -- # accel_perf -t 1 -w xor -y -x -1 00:06:23.261 00:40:35 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x -1 00:06:23.261 00:40:35 -- accel/accel.sh@12 -- # 
build_accel_config 00:06:23.261 00:40:35 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:23.261 00:40:35 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:23.261 00:40:35 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:23.261 00:40:35 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:23.261 00:40:35 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:23.261 00:40:35 -- accel/accel.sh@41 -- # local IFS=, 00:06:23.261 00:40:35 -- accel/accel.sh@42 -- # jq -r . 00:06:23.521 -x option must be non-negative. 00:06:23.521 [2024-12-03 00:40:35.779350] app.c:1292:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'x' failed: 1 00:06:23.521 accel_perf options: 00:06:23.521 [-h help message] 00:06:23.521 [-q queue depth per core] 00:06:23.521 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:06:23.521 [-T number of threads per core 00:06:23.521 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:06:23.521 [-t time in seconds] 00:06:23.521 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:06:23.521 [ dif_verify, , dif_generate, dif_generate_copy 00:06:23.521 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:06:23.521 [-l for compress/decompress workloads, name of uncompressed input file 00:06:23.521 [-S for crc32c workload, use this seed value (default 0) 00:06:23.521 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:06:23.521 [-f for fill workload, use this BYTE value (default 255) 00:06:23.521 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:06:23.521 [-y verify result if this switch is on] 00:06:23.521 [-a tasks to allocate per core (default: same value as -q)] 00:06:23.521 Can be used to spread operations across a wider range of memory. 
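(Illustrative note, not part of the captured log: the usage text above is printed because this test passed '-x -1', while the xor workload requires a non-negative source-buffer count with a stated minimum of 2. A sketch of an invocation that help text would accept, using only flags shown in this log and again omitting the '-c /dev/fd/62' config descriptor the traced runs supply:
  /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w xor -y -x 2)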
00:06:23.521 00:40:35 -- common/autotest_common.sh@653 -- # es=1 00:06:23.521 00:40:35 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:23.521 00:40:35 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:06:23.521 00:40:35 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:23.521 00:06:23.521 real 0m0.031s 00:06:23.521 user 0m0.018s 00:06:23.521 sys 0m0.012s 00:06:23.521 00:40:35 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:23.521 ************************************ 00:06:23.521 END TEST accel_negative_buffers 00:06:23.521 ************************************ 00:06:23.521 00:40:35 -- common/autotest_common.sh@10 -- # set +x 00:06:23.521 00:40:35 -- accel/accel.sh@93 -- # run_test accel_crc32c accel_test -t 1 -w crc32c -S 32 -y 00:06:23.521 00:40:35 -- common/autotest_common.sh@1087 -- # '[' 9 -le 1 ']' 00:06:23.521 00:40:35 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:23.521 00:40:35 -- common/autotest_common.sh@10 -- # set +x 00:06:23.521 ************************************ 00:06:23.521 START TEST accel_crc32c 00:06:23.521 ************************************ 00:06:23.521 00:40:35 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w crc32c -S 32 -y 00:06:23.521 00:40:35 -- accel/accel.sh@16 -- # local accel_opc 00:06:23.521 00:40:35 -- accel/accel.sh@17 -- # local accel_module 00:06:23.521 00:40:35 -- accel/accel.sh@18 -- # accel_perf -t 1 -w crc32c -S 32 -y 00:06:23.521 00:40:35 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -S 32 -y 00:06:23.521 00:40:35 -- accel/accel.sh@12 -- # build_accel_config 00:06:23.521 00:40:35 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:23.521 00:40:35 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:23.521 00:40:35 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:23.521 00:40:35 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:23.521 00:40:35 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:23.521 00:40:35 -- accel/accel.sh@41 -- # local IFS=, 00:06:23.521 00:40:35 -- accel/accel.sh@42 -- # jq -r . 00:06:23.521 [2024-12-03 00:40:35.852825] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:23.521 [2024-12-03 00:40:35.853053] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70491 ] 00:06:23.521 [2024-12-03 00:40:35.991980] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:23.788 [2024-12-03 00:40:36.054478] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:24.758 00:40:37 -- accel/accel.sh@18 -- # out=' 00:06:24.758 SPDK Configuration: 00:06:24.758 Core mask: 0x1 00:06:24.758 00:06:24.758 Accel Perf Configuration: 00:06:24.758 Workload Type: crc32c 00:06:24.758 CRC-32C seed: 32 00:06:24.758 Transfer size: 4096 bytes 00:06:24.758 Vector count 1 00:06:24.758 Module: software 00:06:24.758 Queue depth: 32 00:06:24.758 Allocate depth: 32 00:06:24.758 # threads/core: 1 00:06:24.758 Run time: 1 seconds 00:06:24.758 Verify: Yes 00:06:24.758 00:06:24.758 Running for 1 seconds... 
00:06:24.758 00:06:24.758 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:24.758 ------------------------------------------------------------------------------------ 00:06:24.758 0,0 556640/s 2174 MiB/s 0 0 00:06:24.758 ==================================================================================== 00:06:24.758 Total 556640/s 2174 MiB/s 0 0' 00:06:24.758 00:40:37 -- accel/accel.sh@20 -- # IFS=: 00:06:24.758 00:40:37 -- accel/accel.sh@20 -- # read -r var val 00:06:24.758 00:40:37 -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -S 32 -y 00:06:24.758 00:40:37 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -S 32 -y 00:06:24.758 00:40:37 -- accel/accel.sh@12 -- # build_accel_config 00:06:24.758 00:40:37 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:24.758 00:40:37 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:24.758 00:40:37 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:24.758 00:40:37 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:24.758 00:40:37 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:24.758 00:40:37 -- accel/accel.sh@41 -- # local IFS=, 00:06:24.758 00:40:37 -- accel/accel.sh@42 -- # jq -r . 00:06:25.018 [2024-12-03 00:40:37.280347] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:25.018 [2024-12-03 00:40:37.280641] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70506 ] 00:06:25.018 [2024-12-03 00:40:37.419644] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:25.018 [2024-12-03 00:40:37.477084] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:25.277 00:40:37 -- accel/accel.sh@21 -- # val= 00:06:25.277 00:40:37 -- accel/accel.sh@22 -- # case "$var" in 00:06:25.277 00:40:37 -- accel/accel.sh@20 -- # IFS=: 00:06:25.277 00:40:37 -- accel/accel.sh@20 -- # read -r var val 00:06:25.277 00:40:37 -- accel/accel.sh@21 -- # val= 00:06:25.277 00:40:37 -- accel/accel.sh@22 -- # case "$var" in 00:06:25.277 00:40:37 -- accel/accel.sh@20 -- # IFS=: 00:06:25.277 00:40:37 -- accel/accel.sh@20 -- # read -r var val 00:06:25.277 00:40:37 -- accel/accel.sh@21 -- # val=0x1 00:06:25.277 00:40:37 -- accel/accel.sh@22 -- # case "$var" in 00:06:25.277 00:40:37 -- accel/accel.sh@20 -- # IFS=: 00:06:25.277 00:40:37 -- accel/accel.sh@20 -- # read -r var val 00:06:25.277 00:40:37 -- accel/accel.sh@21 -- # val= 00:06:25.277 00:40:37 -- accel/accel.sh@22 -- # case "$var" in 00:06:25.277 00:40:37 -- accel/accel.sh@20 -- # IFS=: 00:06:25.277 00:40:37 -- accel/accel.sh@20 -- # read -r var val 00:06:25.277 00:40:37 -- accel/accel.sh@21 -- # val= 00:06:25.277 00:40:37 -- accel/accel.sh@22 -- # case "$var" in 00:06:25.277 00:40:37 -- accel/accel.sh@20 -- # IFS=: 00:06:25.277 00:40:37 -- accel/accel.sh@20 -- # read -r var val 00:06:25.277 00:40:37 -- accel/accel.sh@21 -- # val=crc32c 00:06:25.277 00:40:37 -- accel/accel.sh@22 -- # case "$var" in 00:06:25.277 00:40:37 -- accel/accel.sh@24 -- # accel_opc=crc32c 00:06:25.277 00:40:37 -- accel/accel.sh@20 -- # IFS=: 00:06:25.277 00:40:37 -- accel/accel.sh@20 -- # read -r var val 00:06:25.277 00:40:37 -- accel/accel.sh@21 -- # val=32 00:06:25.277 00:40:37 -- accel/accel.sh@22 -- # case "$var" in 00:06:25.277 00:40:37 -- accel/accel.sh@20 -- # IFS=: 00:06:25.277 00:40:37 -- accel/accel.sh@20 -- # read -r var val 00:06:25.277 00:40:37 -- 
accel/accel.sh@21 -- # val='4096 bytes' 00:06:25.277 00:40:37 -- accel/accel.sh@22 -- # case "$var" in 00:06:25.277 00:40:37 -- accel/accel.sh@20 -- # IFS=: 00:06:25.277 00:40:37 -- accel/accel.sh@20 -- # read -r var val 00:06:25.277 00:40:37 -- accel/accel.sh@21 -- # val= 00:06:25.277 00:40:37 -- accel/accel.sh@22 -- # case "$var" in 00:06:25.277 00:40:37 -- accel/accel.sh@20 -- # IFS=: 00:06:25.277 00:40:37 -- accel/accel.sh@20 -- # read -r var val 00:06:25.277 00:40:37 -- accel/accel.sh@21 -- # val=software 00:06:25.277 00:40:37 -- accel/accel.sh@22 -- # case "$var" in 00:06:25.277 00:40:37 -- accel/accel.sh@23 -- # accel_module=software 00:06:25.277 00:40:37 -- accel/accel.sh@20 -- # IFS=: 00:06:25.277 00:40:37 -- accel/accel.sh@20 -- # read -r var val 00:06:25.277 00:40:37 -- accel/accel.sh@21 -- # val=32 00:06:25.277 00:40:37 -- accel/accel.sh@22 -- # case "$var" in 00:06:25.277 00:40:37 -- accel/accel.sh@20 -- # IFS=: 00:06:25.277 00:40:37 -- accel/accel.sh@20 -- # read -r var val 00:06:25.277 00:40:37 -- accel/accel.sh@21 -- # val=32 00:06:25.277 00:40:37 -- accel/accel.sh@22 -- # case "$var" in 00:06:25.277 00:40:37 -- accel/accel.sh@20 -- # IFS=: 00:06:25.277 00:40:37 -- accel/accel.sh@20 -- # read -r var val 00:06:25.277 00:40:37 -- accel/accel.sh@21 -- # val=1 00:06:25.277 00:40:37 -- accel/accel.sh@22 -- # case "$var" in 00:06:25.277 00:40:37 -- accel/accel.sh@20 -- # IFS=: 00:06:25.277 00:40:37 -- accel/accel.sh@20 -- # read -r var val 00:06:25.277 00:40:37 -- accel/accel.sh@21 -- # val='1 seconds' 00:06:25.277 00:40:37 -- accel/accel.sh@22 -- # case "$var" in 00:06:25.277 00:40:37 -- accel/accel.sh@20 -- # IFS=: 00:06:25.277 00:40:37 -- accel/accel.sh@20 -- # read -r var val 00:06:25.277 00:40:37 -- accel/accel.sh@21 -- # val=Yes 00:06:25.277 00:40:37 -- accel/accel.sh@22 -- # case "$var" in 00:06:25.277 00:40:37 -- accel/accel.sh@20 -- # IFS=: 00:06:25.277 00:40:37 -- accel/accel.sh@20 -- # read -r var val 00:06:25.277 00:40:37 -- accel/accel.sh@21 -- # val= 00:06:25.277 00:40:37 -- accel/accel.sh@22 -- # case "$var" in 00:06:25.277 00:40:37 -- accel/accel.sh@20 -- # IFS=: 00:06:25.277 00:40:37 -- accel/accel.sh@20 -- # read -r var val 00:06:25.277 00:40:37 -- accel/accel.sh@21 -- # val= 00:06:25.277 00:40:37 -- accel/accel.sh@22 -- # case "$var" in 00:06:25.277 00:40:37 -- accel/accel.sh@20 -- # IFS=: 00:06:25.277 00:40:37 -- accel/accel.sh@20 -- # read -r var val 00:06:26.214 00:40:38 -- accel/accel.sh@21 -- # val= 00:06:26.214 00:40:38 -- accel/accel.sh@22 -- # case "$var" in 00:06:26.214 00:40:38 -- accel/accel.sh@20 -- # IFS=: 00:06:26.214 00:40:38 -- accel/accel.sh@20 -- # read -r var val 00:06:26.214 00:40:38 -- accel/accel.sh@21 -- # val= 00:06:26.214 00:40:38 -- accel/accel.sh@22 -- # case "$var" in 00:06:26.214 00:40:38 -- accel/accel.sh@20 -- # IFS=: 00:06:26.214 00:40:38 -- accel/accel.sh@20 -- # read -r var val 00:06:26.214 00:40:38 -- accel/accel.sh@21 -- # val= 00:06:26.214 00:40:38 -- accel/accel.sh@22 -- # case "$var" in 00:06:26.214 00:40:38 -- accel/accel.sh@20 -- # IFS=: 00:06:26.214 00:40:38 -- accel/accel.sh@20 -- # read -r var val 00:06:26.214 00:40:38 -- accel/accel.sh@21 -- # val= 00:06:26.214 ************************************ 00:06:26.214 END TEST accel_crc32c 00:06:26.214 ************************************ 00:06:26.214 00:40:38 -- accel/accel.sh@22 -- # case "$var" in 00:06:26.214 00:40:38 -- accel/accel.sh@20 -- # IFS=: 00:06:26.214 00:40:38 -- accel/accel.sh@20 -- # read -r var val 00:06:26.214 00:40:38 -- accel/accel.sh@21 -- # val= 
00:06:26.214 00:40:38 -- accel/accel.sh@22 -- # case "$var" in 00:06:26.214 00:40:38 -- accel/accel.sh@20 -- # IFS=: 00:06:26.214 00:40:38 -- accel/accel.sh@20 -- # read -r var val 00:06:26.214 00:40:38 -- accel/accel.sh@21 -- # val= 00:06:26.214 00:40:38 -- accel/accel.sh@22 -- # case "$var" in 00:06:26.214 00:40:38 -- accel/accel.sh@20 -- # IFS=: 00:06:26.214 00:40:38 -- accel/accel.sh@20 -- # read -r var val 00:06:26.214 00:40:38 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:26.215 00:40:38 -- accel/accel.sh@28 -- # [[ -n crc32c ]] 00:06:26.215 00:40:38 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:26.215 00:06:26.215 real 0m2.828s 00:06:26.215 user 0m2.396s 00:06:26.215 sys 0m0.232s 00:06:26.215 00:40:38 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:26.215 00:40:38 -- common/autotest_common.sh@10 -- # set +x 00:06:26.215 00:40:38 -- accel/accel.sh@94 -- # run_test accel_crc32c_C2 accel_test -t 1 -w crc32c -y -C 2 00:06:26.215 00:40:38 -- common/autotest_common.sh@1087 -- # '[' 9 -le 1 ']' 00:06:26.215 00:40:38 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:26.215 00:40:38 -- common/autotest_common.sh@10 -- # set +x 00:06:26.215 ************************************ 00:06:26.215 START TEST accel_crc32c_C2 00:06:26.215 ************************************ 00:06:26.215 00:40:38 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w crc32c -y -C 2 00:06:26.215 00:40:38 -- accel/accel.sh@16 -- # local accel_opc 00:06:26.215 00:40:38 -- accel/accel.sh@17 -- # local accel_module 00:06:26.215 00:40:38 -- accel/accel.sh@18 -- # accel_perf -t 1 -w crc32c -y -C 2 00:06:26.215 00:40:38 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -y -C 2 00:06:26.215 00:40:38 -- accel/accel.sh@12 -- # build_accel_config 00:06:26.215 00:40:38 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:26.215 00:40:38 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:26.215 00:40:38 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:26.215 00:40:38 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:26.215 00:40:38 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:26.215 00:40:38 -- accel/accel.sh@41 -- # local IFS=, 00:06:26.215 00:40:38 -- accel/accel.sh@42 -- # jq -r . 00:06:26.474 [2024-12-03 00:40:38.741450] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:26.474 [2024-12-03 00:40:38.742049] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70535 ] 00:06:26.474 [2024-12-03 00:40:38.880309] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:26.474 [2024-12-03 00:40:38.939028] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:27.852 00:40:40 -- accel/accel.sh@18 -- # out=' 00:06:27.852 SPDK Configuration: 00:06:27.852 Core mask: 0x1 00:06:27.852 00:06:27.852 Accel Perf Configuration: 00:06:27.852 Workload Type: crc32c 00:06:27.852 CRC-32C seed: 0 00:06:27.852 Transfer size: 4096 bytes 00:06:27.852 Vector count 2 00:06:27.852 Module: software 00:06:27.852 Queue depth: 32 00:06:27.852 Allocate depth: 32 00:06:27.852 # threads/core: 1 00:06:27.852 Run time: 1 seconds 00:06:27.852 Verify: Yes 00:06:27.852 00:06:27.852 Running for 1 seconds... 
00:06:27.852 00:06:27.852 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:27.852 ------------------------------------------------------------------------------------ 00:06:27.852 0,0 434624/s 3395 MiB/s 0 0 00:06:27.852 ==================================================================================== 00:06:27.852 Total 434624/s 1697 MiB/s 0 0' 00:06:27.852 00:40:40 -- accel/accel.sh@20 -- # IFS=: 00:06:27.852 00:40:40 -- accel/accel.sh@20 -- # read -r var val 00:06:27.852 00:40:40 -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -y -C 2 00:06:27.852 00:40:40 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -y -C 2 00:06:27.852 00:40:40 -- accel/accel.sh@12 -- # build_accel_config 00:06:27.852 00:40:40 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:27.852 00:40:40 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:27.852 00:40:40 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:27.852 00:40:40 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:27.852 00:40:40 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:27.852 00:40:40 -- accel/accel.sh@41 -- # local IFS=, 00:06:27.852 00:40:40 -- accel/accel.sh@42 -- # jq -r . 00:06:27.852 [2024-12-03 00:40:40.146470] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:27.852 [2024-12-03 00:40:40.146545] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70560 ] 00:06:27.852 [2024-12-03 00:40:40.274449] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:27.852 [2024-12-03 00:40:40.325683] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:28.112 00:40:40 -- accel/accel.sh@21 -- # val= 00:06:28.112 00:40:40 -- accel/accel.sh@22 -- # case "$var" in 00:06:28.112 00:40:40 -- accel/accel.sh@20 -- # IFS=: 00:06:28.112 00:40:40 -- accel/accel.sh@20 -- # read -r var val 00:06:28.112 00:40:40 -- accel/accel.sh@21 -- # val= 00:06:28.112 00:40:40 -- accel/accel.sh@22 -- # case "$var" in 00:06:28.112 00:40:40 -- accel/accel.sh@20 -- # IFS=: 00:06:28.112 00:40:40 -- accel/accel.sh@20 -- # read -r var val 00:06:28.112 00:40:40 -- accel/accel.sh@21 -- # val=0x1 00:06:28.112 00:40:40 -- accel/accel.sh@22 -- # case "$var" in 00:06:28.112 00:40:40 -- accel/accel.sh@20 -- # IFS=: 00:06:28.112 00:40:40 -- accel/accel.sh@20 -- # read -r var val 00:06:28.112 00:40:40 -- accel/accel.sh@21 -- # val= 00:06:28.112 00:40:40 -- accel/accel.sh@22 -- # case "$var" in 00:06:28.112 00:40:40 -- accel/accel.sh@20 -- # IFS=: 00:06:28.112 00:40:40 -- accel/accel.sh@20 -- # read -r var val 00:06:28.112 00:40:40 -- accel/accel.sh@21 -- # val= 00:06:28.112 00:40:40 -- accel/accel.sh@22 -- # case "$var" in 00:06:28.112 00:40:40 -- accel/accel.sh@20 -- # IFS=: 00:06:28.112 00:40:40 -- accel/accel.sh@20 -- # read -r var val 00:06:28.112 00:40:40 -- accel/accel.sh@21 -- # val=crc32c 00:06:28.112 00:40:40 -- accel/accel.sh@22 -- # case "$var" in 00:06:28.112 00:40:40 -- accel/accel.sh@24 -- # accel_opc=crc32c 00:06:28.112 00:40:40 -- accel/accel.sh@20 -- # IFS=: 00:06:28.112 00:40:40 -- accel/accel.sh@20 -- # read -r var val 00:06:28.112 00:40:40 -- accel/accel.sh@21 -- # val=0 00:06:28.112 00:40:40 -- accel/accel.sh@22 -- # case "$var" in 00:06:28.112 00:40:40 -- accel/accel.sh@20 -- # IFS=: 00:06:28.112 00:40:40 -- accel/accel.sh@20 -- # read -r var val 00:06:28.112 00:40:40 -- 
accel/accel.sh@21 -- # val='4096 bytes' 00:06:28.112 00:40:40 -- accel/accel.sh@22 -- # case "$var" in 00:06:28.112 00:40:40 -- accel/accel.sh@20 -- # IFS=: 00:06:28.112 00:40:40 -- accel/accel.sh@20 -- # read -r var val 00:06:28.112 00:40:40 -- accel/accel.sh@21 -- # val= 00:06:28.112 00:40:40 -- accel/accel.sh@22 -- # case "$var" in 00:06:28.112 00:40:40 -- accel/accel.sh@20 -- # IFS=: 00:06:28.112 00:40:40 -- accel/accel.sh@20 -- # read -r var val 00:06:28.112 00:40:40 -- accel/accel.sh@21 -- # val=software 00:06:28.112 00:40:40 -- accel/accel.sh@22 -- # case "$var" in 00:06:28.112 00:40:40 -- accel/accel.sh@23 -- # accel_module=software 00:06:28.112 00:40:40 -- accel/accel.sh@20 -- # IFS=: 00:06:28.112 00:40:40 -- accel/accel.sh@20 -- # read -r var val 00:06:28.112 00:40:40 -- accel/accel.sh@21 -- # val=32 00:06:28.112 00:40:40 -- accel/accel.sh@22 -- # case "$var" in 00:06:28.112 00:40:40 -- accel/accel.sh@20 -- # IFS=: 00:06:28.112 00:40:40 -- accel/accel.sh@20 -- # read -r var val 00:06:28.112 00:40:40 -- accel/accel.sh@21 -- # val=32 00:06:28.112 00:40:40 -- accel/accel.sh@22 -- # case "$var" in 00:06:28.112 00:40:40 -- accel/accel.sh@20 -- # IFS=: 00:06:28.112 00:40:40 -- accel/accel.sh@20 -- # read -r var val 00:06:28.112 00:40:40 -- accel/accel.sh@21 -- # val=1 00:06:28.112 00:40:40 -- accel/accel.sh@22 -- # case "$var" in 00:06:28.112 00:40:40 -- accel/accel.sh@20 -- # IFS=: 00:06:28.112 00:40:40 -- accel/accel.sh@20 -- # read -r var val 00:06:28.112 00:40:40 -- accel/accel.sh@21 -- # val='1 seconds' 00:06:28.112 00:40:40 -- accel/accel.sh@22 -- # case "$var" in 00:06:28.112 00:40:40 -- accel/accel.sh@20 -- # IFS=: 00:06:28.112 00:40:40 -- accel/accel.sh@20 -- # read -r var val 00:06:28.112 00:40:40 -- accel/accel.sh@21 -- # val=Yes 00:06:28.112 00:40:40 -- accel/accel.sh@22 -- # case "$var" in 00:06:28.112 00:40:40 -- accel/accel.sh@20 -- # IFS=: 00:06:28.112 00:40:40 -- accel/accel.sh@20 -- # read -r var val 00:06:28.112 00:40:40 -- accel/accel.sh@21 -- # val= 00:06:28.112 00:40:40 -- accel/accel.sh@22 -- # case "$var" in 00:06:28.112 00:40:40 -- accel/accel.sh@20 -- # IFS=: 00:06:28.112 00:40:40 -- accel/accel.sh@20 -- # read -r var val 00:06:28.112 00:40:40 -- accel/accel.sh@21 -- # val= 00:06:28.112 00:40:40 -- accel/accel.sh@22 -- # case "$var" in 00:06:28.112 00:40:40 -- accel/accel.sh@20 -- # IFS=: 00:06:28.112 00:40:40 -- accel/accel.sh@20 -- # read -r var val 00:06:29.049 00:40:41 -- accel/accel.sh@21 -- # val= 00:06:29.049 00:40:41 -- accel/accel.sh@22 -- # case "$var" in 00:06:29.049 00:40:41 -- accel/accel.sh@20 -- # IFS=: 00:06:29.049 00:40:41 -- accel/accel.sh@20 -- # read -r var val 00:06:29.049 00:40:41 -- accel/accel.sh@21 -- # val= 00:06:29.049 00:40:41 -- accel/accel.sh@22 -- # case "$var" in 00:06:29.049 00:40:41 -- accel/accel.sh@20 -- # IFS=: 00:06:29.049 00:40:41 -- accel/accel.sh@20 -- # read -r var val 00:06:29.049 00:40:41 -- accel/accel.sh@21 -- # val= 00:06:29.049 00:40:41 -- accel/accel.sh@22 -- # case "$var" in 00:06:29.049 00:40:41 -- accel/accel.sh@20 -- # IFS=: 00:06:29.049 00:40:41 -- accel/accel.sh@20 -- # read -r var val 00:06:29.049 00:40:41 -- accel/accel.sh@21 -- # val= 00:06:29.049 00:40:41 -- accel/accel.sh@22 -- # case "$var" in 00:06:29.049 00:40:41 -- accel/accel.sh@20 -- # IFS=: 00:06:29.049 00:40:41 -- accel/accel.sh@20 -- # read -r var val 00:06:29.049 00:40:41 -- accel/accel.sh@21 -- # val= 00:06:29.049 00:40:41 -- accel/accel.sh@22 -- # case "$var" in 00:06:29.049 00:40:41 -- accel/accel.sh@20 -- # IFS=: 00:06:29.049 00:40:41 -- 
accel/accel.sh@20 -- # read -r var val 00:06:29.049 00:40:41 -- accel/accel.sh@21 -- # val= 00:06:29.049 00:40:41 -- accel/accel.sh@22 -- # case "$var" in 00:06:29.049 00:40:41 -- accel/accel.sh@20 -- # IFS=: 00:06:29.049 00:40:41 -- accel/accel.sh@20 -- # read -r var val 00:06:29.049 00:40:41 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:29.049 00:40:41 -- accel/accel.sh@28 -- # [[ -n crc32c ]] 00:06:29.049 00:40:41 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:29.049 00:06:29.049 real 0m2.809s 00:06:29.049 user 0m2.382s 00:06:29.049 sys 0m0.225s 00:06:29.049 00:40:41 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:29.049 00:40:41 -- common/autotest_common.sh@10 -- # set +x 00:06:29.049 ************************************ 00:06:29.049 END TEST accel_crc32c_C2 00:06:29.049 ************************************ 00:06:29.308 00:40:41 -- accel/accel.sh@95 -- # run_test accel_copy accel_test -t 1 -w copy -y 00:06:29.308 00:40:41 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:06:29.308 00:40:41 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:29.308 00:40:41 -- common/autotest_common.sh@10 -- # set +x 00:06:29.308 ************************************ 00:06:29.308 START TEST accel_copy 00:06:29.308 ************************************ 00:06:29.308 00:40:41 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w copy -y 00:06:29.308 00:40:41 -- accel/accel.sh@16 -- # local accel_opc 00:06:29.308 00:40:41 -- accel/accel.sh@17 -- # local accel_module 00:06:29.308 00:40:41 -- accel/accel.sh@18 -- # accel_perf -t 1 -w copy -y 00:06:29.308 00:40:41 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy -y 00:06:29.308 00:40:41 -- accel/accel.sh@12 -- # build_accel_config 00:06:29.308 00:40:41 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:29.308 00:40:41 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:29.308 00:40:41 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:29.308 00:40:41 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:29.308 00:40:41 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:29.308 00:40:41 -- accel/accel.sh@41 -- # local IFS=, 00:06:29.308 00:40:41 -- accel/accel.sh@42 -- # jq -r . 00:06:29.308 [2024-12-03 00:40:41.610972] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:29.308 [2024-12-03 00:40:41.611075] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70589 ] 00:06:29.308 [2024-12-03 00:40:41.747952] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:29.308 [2024-12-03 00:40:41.805598] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:30.684 00:40:42 -- accel/accel.sh@18 -- # out=' 00:06:30.684 SPDK Configuration: 00:06:30.684 Core mask: 0x1 00:06:30.684 00:06:30.684 Accel Perf Configuration: 00:06:30.684 Workload Type: copy 00:06:30.684 Transfer size: 4096 bytes 00:06:30.684 Vector count 1 00:06:30.684 Module: software 00:06:30.684 Queue depth: 32 00:06:30.684 Allocate depth: 32 00:06:30.684 # threads/core: 1 00:06:30.684 Run time: 1 seconds 00:06:30.684 Verify: Yes 00:06:30.684 00:06:30.684 Running for 1 seconds... 
00:06:30.684 00:06:30.684 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:30.684 ------------------------------------------------------------------------------------ 00:06:30.684 0,0 384640/s 1502 MiB/s 0 0 00:06:30.684 ==================================================================================== 00:06:30.684 Total 384640/s 1502 MiB/s 0 0' 00:06:30.684 00:40:42 -- accel/accel.sh@20 -- # IFS=: 00:06:30.684 00:40:42 -- accel/accel.sh@20 -- # read -r var val 00:06:30.684 00:40:42 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy -y 00:06:30.684 00:40:42 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy -y 00:06:30.684 00:40:42 -- accel/accel.sh@12 -- # build_accel_config 00:06:30.684 00:40:42 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:30.684 00:40:42 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:30.684 00:40:42 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:30.684 00:40:42 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:30.684 00:40:42 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:30.684 00:40:42 -- accel/accel.sh@41 -- # local IFS=, 00:06:30.684 00:40:42 -- accel/accel.sh@42 -- # jq -r . 00:06:30.684 [2024-12-03 00:40:43.008403] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:30.684 [2024-12-03 00:40:43.008506] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70609 ] 00:06:30.684 [2024-12-03 00:40:43.132354] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:30.684 [2024-12-03 00:40:43.194309] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:30.943 00:40:43 -- accel/accel.sh@21 -- # val= 00:06:30.943 00:40:43 -- accel/accel.sh@22 -- # case "$var" in 00:06:30.943 00:40:43 -- accel/accel.sh@20 -- # IFS=: 00:06:30.943 00:40:43 -- accel/accel.sh@20 -- # read -r var val 00:06:30.943 00:40:43 -- accel/accel.sh@21 -- # val= 00:06:30.943 00:40:43 -- accel/accel.sh@22 -- # case "$var" in 00:06:30.943 00:40:43 -- accel/accel.sh@20 -- # IFS=: 00:06:30.943 00:40:43 -- accel/accel.sh@20 -- # read -r var val 00:06:30.943 00:40:43 -- accel/accel.sh@21 -- # val=0x1 00:06:30.943 00:40:43 -- accel/accel.sh@22 -- # case "$var" in 00:06:30.943 00:40:43 -- accel/accel.sh@20 -- # IFS=: 00:06:30.943 00:40:43 -- accel/accel.sh@20 -- # read -r var val 00:06:30.943 00:40:43 -- accel/accel.sh@21 -- # val= 00:06:30.943 00:40:43 -- accel/accel.sh@22 -- # case "$var" in 00:06:30.943 00:40:43 -- accel/accel.sh@20 -- # IFS=: 00:06:30.943 00:40:43 -- accel/accel.sh@20 -- # read -r var val 00:06:30.943 00:40:43 -- accel/accel.sh@21 -- # val= 00:06:30.943 00:40:43 -- accel/accel.sh@22 -- # case "$var" in 00:06:30.943 00:40:43 -- accel/accel.sh@20 -- # IFS=: 00:06:30.943 00:40:43 -- accel/accel.sh@20 -- # read -r var val 00:06:30.943 00:40:43 -- accel/accel.sh@21 -- # val=copy 00:06:30.943 00:40:43 -- accel/accel.sh@22 -- # case "$var" in 00:06:30.943 00:40:43 -- accel/accel.sh@24 -- # accel_opc=copy 00:06:30.943 00:40:43 -- accel/accel.sh@20 -- # IFS=: 00:06:30.943 00:40:43 -- accel/accel.sh@20 -- # read -r var val 00:06:30.943 00:40:43 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:30.943 00:40:43 -- accel/accel.sh@22 -- # case "$var" in 00:06:30.943 00:40:43 -- accel/accel.sh@20 -- # IFS=: 00:06:30.943 00:40:43 -- accel/accel.sh@20 -- # read -r var val 00:06:30.943 00:40:43 -- 
accel/accel.sh@21 -- # val= 00:06:30.943 00:40:43 -- accel/accel.sh@22 -- # case "$var" in 00:06:30.943 00:40:43 -- accel/accel.sh@20 -- # IFS=: 00:06:30.943 00:40:43 -- accel/accel.sh@20 -- # read -r var val 00:06:30.943 00:40:43 -- accel/accel.sh@21 -- # val=software 00:06:30.943 00:40:43 -- accel/accel.sh@22 -- # case "$var" in 00:06:30.943 00:40:43 -- accel/accel.sh@23 -- # accel_module=software 00:06:30.943 00:40:43 -- accel/accel.sh@20 -- # IFS=: 00:06:30.943 00:40:43 -- accel/accel.sh@20 -- # read -r var val 00:06:30.943 00:40:43 -- accel/accel.sh@21 -- # val=32 00:06:30.943 00:40:43 -- accel/accel.sh@22 -- # case "$var" in 00:06:30.943 00:40:43 -- accel/accel.sh@20 -- # IFS=: 00:06:30.943 00:40:43 -- accel/accel.sh@20 -- # read -r var val 00:06:30.943 00:40:43 -- accel/accel.sh@21 -- # val=32 00:06:30.943 00:40:43 -- accel/accel.sh@22 -- # case "$var" in 00:06:30.943 00:40:43 -- accel/accel.sh@20 -- # IFS=: 00:06:30.943 00:40:43 -- accel/accel.sh@20 -- # read -r var val 00:06:30.943 00:40:43 -- accel/accel.sh@21 -- # val=1 00:06:30.943 00:40:43 -- accel/accel.sh@22 -- # case "$var" in 00:06:30.943 00:40:43 -- accel/accel.sh@20 -- # IFS=: 00:06:30.943 00:40:43 -- accel/accel.sh@20 -- # read -r var val 00:06:30.943 00:40:43 -- accel/accel.sh@21 -- # val='1 seconds' 00:06:30.943 00:40:43 -- accel/accel.sh@22 -- # case "$var" in 00:06:30.943 00:40:43 -- accel/accel.sh@20 -- # IFS=: 00:06:30.943 00:40:43 -- accel/accel.sh@20 -- # read -r var val 00:06:30.943 00:40:43 -- accel/accel.sh@21 -- # val=Yes 00:06:30.943 00:40:43 -- accel/accel.sh@22 -- # case "$var" in 00:06:30.943 00:40:43 -- accel/accel.sh@20 -- # IFS=: 00:06:30.943 00:40:43 -- accel/accel.sh@20 -- # read -r var val 00:06:30.943 00:40:43 -- accel/accel.sh@21 -- # val= 00:06:30.943 00:40:43 -- accel/accel.sh@22 -- # case "$var" in 00:06:30.943 00:40:43 -- accel/accel.sh@20 -- # IFS=: 00:06:30.943 00:40:43 -- accel/accel.sh@20 -- # read -r var val 00:06:30.943 00:40:43 -- accel/accel.sh@21 -- # val= 00:06:30.943 00:40:43 -- accel/accel.sh@22 -- # case "$var" in 00:06:30.943 00:40:43 -- accel/accel.sh@20 -- # IFS=: 00:06:30.943 00:40:43 -- accel/accel.sh@20 -- # read -r var val 00:06:31.878 00:40:44 -- accel/accel.sh@21 -- # val= 00:06:31.878 00:40:44 -- accel/accel.sh@22 -- # case "$var" in 00:06:31.878 00:40:44 -- accel/accel.sh@20 -- # IFS=: 00:06:31.878 00:40:44 -- accel/accel.sh@20 -- # read -r var val 00:06:31.878 00:40:44 -- accel/accel.sh@21 -- # val= 00:06:31.878 00:40:44 -- accel/accel.sh@22 -- # case "$var" in 00:06:31.878 00:40:44 -- accel/accel.sh@20 -- # IFS=: 00:06:31.878 00:40:44 -- accel/accel.sh@20 -- # read -r var val 00:06:31.878 00:40:44 -- accel/accel.sh@21 -- # val= 00:06:31.878 00:40:44 -- accel/accel.sh@22 -- # case "$var" in 00:06:31.878 00:40:44 -- accel/accel.sh@20 -- # IFS=: 00:06:31.878 00:40:44 -- accel/accel.sh@20 -- # read -r var val 00:06:31.878 00:40:44 -- accel/accel.sh@21 -- # val= 00:06:31.878 00:40:44 -- accel/accel.sh@22 -- # case "$var" in 00:06:31.878 00:40:44 -- accel/accel.sh@20 -- # IFS=: 00:06:31.878 00:40:44 -- accel/accel.sh@20 -- # read -r var val 00:06:31.878 00:40:44 -- accel/accel.sh@21 -- # val= 00:06:31.878 00:40:44 -- accel/accel.sh@22 -- # case "$var" in 00:06:31.878 00:40:44 -- accel/accel.sh@20 -- # IFS=: 00:06:31.878 00:40:44 -- accel/accel.sh@20 -- # read -r var val 00:06:31.878 00:40:44 -- accel/accel.sh@21 -- # val= 00:06:31.878 00:40:44 -- accel/accel.sh@22 -- # case "$var" in 00:06:31.878 00:40:44 -- accel/accel.sh@20 -- # IFS=: 00:06:31.878 00:40:44 -- 
accel/accel.sh@20 -- # read -r var val 00:06:31.878 00:40:44 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:31.878 00:40:44 -- accel/accel.sh@28 -- # [[ -n copy ]] 00:06:31.878 00:40:44 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:31.878 00:06:31.878 real 0m2.790s 00:06:31.878 user 0m2.382s 00:06:31.878 sys 0m0.207s 00:06:31.878 00:40:44 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:31.878 ************************************ 00:06:31.878 END TEST accel_copy 00:06:31.878 ************************************ 00:06:31.878 00:40:44 -- common/autotest_common.sh@10 -- # set +x 00:06:32.138 00:40:44 -- accel/accel.sh@96 -- # run_test accel_fill accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:06:32.138 00:40:44 -- common/autotest_common.sh@1087 -- # '[' 13 -le 1 ']' 00:06:32.138 00:40:44 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:32.138 00:40:44 -- common/autotest_common.sh@10 -- # set +x 00:06:32.138 ************************************ 00:06:32.138 START TEST accel_fill 00:06:32.138 ************************************ 00:06:32.138 00:40:44 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:06:32.138 00:40:44 -- accel/accel.sh@16 -- # local accel_opc 00:06:32.138 00:40:44 -- accel/accel.sh@17 -- # local accel_module 00:06:32.138 00:40:44 -- accel/accel.sh@18 -- # accel_perf -t 1 -w fill -f 128 -q 64 -a 64 -y 00:06:32.138 00:40:44 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w fill -f 128 -q 64 -a 64 -y 00:06:32.138 00:40:44 -- accel/accel.sh@12 -- # build_accel_config 00:06:32.138 00:40:44 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:32.138 00:40:44 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:32.138 00:40:44 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:32.138 00:40:44 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:32.138 00:40:44 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:32.138 00:40:44 -- accel/accel.sh@41 -- # local IFS=, 00:06:32.138 00:40:44 -- accel/accel.sh@42 -- # jq -r . 00:06:32.138 [2024-12-03 00:40:44.457467] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:32.138 [2024-12-03 00:40:44.458080] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70643 ] 00:06:32.138 [2024-12-03 00:40:44.596366] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:32.398 [2024-12-03 00:40:44.655484] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:33.339 00:40:45 -- accel/accel.sh@18 -- # out=' 00:06:33.339 SPDK Configuration: 00:06:33.339 Core mask: 0x1 00:06:33.339 00:06:33.339 Accel Perf Configuration: 00:06:33.339 Workload Type: fill 00:06:33.339 Fill pattern: 0x80 00:06:33.339 Transfer size: 4096 bytes 00:06:33.339 Vector count 1 00:06:33.339 Module: software 00:06:33.339 Queue depth: 64 00:06:33.339 Allocate depth: 64 00:06:33.339 # threads/core: 1 00:06:33.339 Run time: 1 seconds 00:06:33.339 Verify: Yes 00:06:33.339 00:06:33.339 Running for 1 seconds... 
00:06:33.339 00:06:33.339 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:33.339 ------------------------------------------------------------------------------------ 00:06:33.339 0,0 566400/s 2212 MiB/s 0 0 00:06:33.339 ==================================================================================== 00:06:33.339 Total 566400/s 2212 MiB/s 0 0' 00:06:33.339 00:40:45 -- accel/accel.sh@20 -- # IFS=: 00:06:33.339 00:40:45 -- accel/accel.sh@15 -- # accel_perf -t 1 -w fill -f 128 -q 64 -a 64 -y 00:06:33.339 00:40:45 -- accel/accel.sh@20 -- # read -r var val 00:06:33.339 00:40:45 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w fill -f 128 -q 64 -a 64 -y 00:06:33.339 00:40:45 -- accel/accel.sh@12 -- # build_accel_config 00:06:33.339 00:40:45 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:33.339 00:40:45 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:33.339 00:40:45 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:33.339 00:40:45 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:33.339 00:40:45 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:33.339 00:40:45 -- accel/accel.sh@41 -- # local IFS=, 00:06:33.339 00:40:45 -- accel/accel.sh@42 -- # jq -r . 00:06:33.597 [2024-12-03 00:40:45.862353] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:33.597 [2024-12-03 00:40:45.862457] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70657 ] 00:06:33.597 [2024-12-03 00:40:45.995951] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:33.597 [2024-12-03 00:40:46.058488] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:33.597 00:40:46 -- accel/accel.sh@21 -- # val= 00:06:33.597 00:40:46 -- accel/accel.sh@22 -- # case "$var" in 00:06:33.597 00:40:46 -- accel/accel.sh@20 -- # IFS=: 00:06:33.597 00:40:46 -- accel/accel.sh@20 -- # read -r var val 00:06:33.597 00:40:46 -- accel/accel.sh@21 -- # val= 00:06:33.597 00:40:46 -- accel/accel.sh@22 -- # case "$var" in 00:06:33.597 00:40:46 -- accel/accel.sh@20 -- # IFS=: 00:06:33.597 00:40:46 -- accel/accel.sh@20 -- # read -r var val 00:06:33.597 00:40:46 -- accel/accel.sh@21 -- # val=0x1 00:06:33.856 00:40:46 -- accel/accel.sh@22 -- # case "$var" in 00:06:33.856 00:40:46 -- accel/accel.sh@20 -- # IFS=: 00:06:33.856 00:40:46 -- accel/accel.sh@20 -- # read -r var val 00:06:33.856 00:40:46 -- accel/accel.sh@21 -- # val= 00:06:33.856 00:40:46 -- accel/accel.sh@22 -- # case "$var" in 00:06:33.856 00:40:46 -- accel/accel.sh@20 -- # IFS=: 00:06:33.856 00:40:46 -- accel/accel.sh@20 -- # read -r var val 00:06:33.856 00:40:46 -- accel/accel.sh@21 -- # val= 00:06:33.856 00:40:46 -- accel/accel.sh@22 -- # case "$var" in 00:06:33.856 00:40:46 -- accel/accel.sh@20 -- # IFS=: 00:06:33.856 00:40:46 -- accel/accel.sh@20 -- # read -r var val 00:06:33.856 00:40:46 -- accel/accel.sh@21 -- # val=fill 00:06:33.856 00:40:46 -- accel/accel.sh@22 -- # case "$var" in 00:06:33.856 00:40:46 -- accel/accel.sh@24 -- # accel_opc=fill 00:06:33.856 00:40:46 -- accel/accel.sh@20 -- # IFS=: 00:06:33.856 00:40:46 -- accel/accel.sh@20 -- # read -r var val 00:06:33.856 00:40:46 -- accel/accel.sh@21 -- # val=0x80 00:06:33.856 00:40:46 -- accel/accel.sh@22 -- # case "$var" in 00:06:33.856 00:40:46 -- accel/accel.sh@20 -- # IFS=: 00:06:33.856 00:40:46 -- accel/accel.sh@20 -- # read -r var val 
00:06:33.856 00:40:46 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:33.856 00:40:46 -- accel/accel.sh@22 -- # case "$var" in 00:06:33.856 00:40:46 -- accel/accel.sh@20 -- # IFS=: 00:06:33.856 00:40:46 -- accel/accel.sh@20 -- # read -r var val 00:06:33.856 00:40:46 -- accel/accel.sh@21 -- # val= 00:06:33.856 00:40:46 -- accel/accel.sh@22 -- # case "$var" in 00:06:33.856 00:40:46 -- accel/accel.sh@20 -- # IFS=: 00:06:33.856 00:40:46 -- accel/accel.sh@20 -- # read -r var val 00:06:33.856 00:40:46 -- accel/accel.sh@21 -- # val=software 00:06:33.856 00:40:46 -- accel/accel.sh@22 -- # case "$var" in 00:06:33.856 00:40:46 -- accel/accel.sh@23 -- # accel_module=software 00:06:33.856 00:40:46 -- accel/accel.sh@20 -- # IFS=: 00:06:33.856 00:40:46 -- accel/accel.sh@20 -- # read -r var val 00:06:33.856 00:40:46 -- accel/accel.sh@21 -- # val=64 00:06:33.856 00:40:46 -- accel/accel.sh@22 -- # case "$var" in 00:06:33.856 00:40:46 -- accel/accel.sh@20 -- # IFS=: 00:06:33.856 00:40:46 -- accel/accel.sh@20 -- # read -r var val 00:06:33.856 00:40:46 -- accel/accel.sh@21 -- # val=64 00:06:33.856 00:40:46 -- accel/accel.sh@22 -- # case "$var" in 00:06:33.856 00:40:46 -- accel/accel.sh@20 -- # IFS=: 00:06:33.856 00:40:46 -- accel/accel.sh@20 -- # read -r var val 00:06:33.856 00:40:46 -- accel/accel.sh@21 -- # val=1 00:06:33.856 00:40:46 -- accel/accel.sh@22 -- # case "$var" in 00:06:33.856 00:40:46 -- accel/accel.sh@20 -- # IFS=: 00:06:33.856 00:40:46 -- accel/accel.sh@20 -- # read -r var val 00:06:33.856 00:40:46 -- accel/accel.sh@21 -- # val='1 seconds' 00:06:33.856 00:40:46 -- accel/accel.sh@22 -- # case "$var" in 00:06:33.856 00:40:46 -- accel/accel.sh@20 -- # IFS=: 00:06:33.856 00:40:46 -- accel/accel.sh@20 -- # read -r var val 00:06:33.856 00:40:46 -- accel/accel.sh@21 -- # val=Yes 00:06:33.856 00:40:46 -- accel/accel.sh@22 -- # case "$var" in 00:06:33.856 00:40:46 -- accel/accel.sh@20 -- # IFS=: 00:06:33.856 00:40:46 -- accel/accel.sh@20 -- # read -r var val 00:06:33.856 00:40:46 -- accel/accel.sh@21 -- # val= 00:06:33.856 00:40:46 -- accel/accel.sh@22 -- # case "$var" in 00:06:33.856 00:40:46 -- accel/accel.sh@20 -- # IFS=: 00:06:33.856 00:40:46 -- accel/accel.sh@20 -- # read -r var val 00:06:33.856 00:40:46 -- accel/accel.sh@21 -- # val= 00:06:33.856 00:40:46 -- accel/accel.sh@22 -- # case "$var" in 00:06:33.856 00:40:46 -- accel/accel.sh@20 -- # IFS=: 00:06:33.856 00:40:46 -- accel/accel.sh@20 -- # read -r var val 00:06:34.790 00:40:47 -- accel/accel.sh@21 -- # val= 00:06:34.791 00:40:47 -- accel/accel.sh@22 -- # case "$var" in 00:06:34.791 00:40:47 -- accel/accel.sh@20 -- # IFS=: 00:06:34.791 00:40:47 -- accel/accel.sh@20 -- # read -r var val 00:06:34.791 00:40:47 -- accel/accel.sh@21 -- # val= 00:06:34.791 00:40:47 -- accel/accel.sh@22 -- # case "$var" in 00:06:34.791 00:40:47 -- accel/accel.sh@20 -- # IFS=: 00:06:34.791 00:40:47 -- accel/accel.sh@20 -- # read -r var val 00:06:34.791 00:40:47 -- accel/accel.sh@21 -- # val= 00:06:34.791 00:40:47 -- accel/accel.sh@22 -- # case "$var" in 00:06:34.791 00:40:47 -- accel/accel.sh@20 -- # IFS=: 00:06:34.791 00:40:47 -- accel/accel.sh@20 -- # read -r var val 00:06:34.791 00:40:47 -- accel/accel.sh@21 -- # val= 00:06:34.791 00:40:47 -- accel/accel.sh@22 -- # case "$var" in 00:06:34.791 00:40:47 -- accel/accel.sh@20 -- # IFS=: 00:06:34.791 00:40:47 -- accel/accel.sh@20 -- # read -r var val 00:06:34.791 00:40:47 -- accel/accel.sh@21 -- # val= 00:06:34.791 00:40:47 -- accel/accel.sh@22 -- # case "$var" in 00:06:34.791 00:40:47 -- accel/accel.sh@20 -- # IFS=: 
00:06:34.791 00:40:47 -- accel/accel.sh@20 -- # read -r var val 00:06:34.791 00:40:47 -- accel/accel.sh@21 -- # val= 00:06:34.791 00:40:47 -- accel/accel.sh@22 -- # case "$var" in 00:06:34.791 00:40:47 -- accel/accel.sh@20 -- # IFS=: 00:06:34.791 00:40:47 -- accel/accel.sh@20 -- # read -r var val 00:06:34.791 00:40:47 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:34.791 00:40:47 -- accel/accel.sh@28 -- # [[ -n fill ]] 00:06:34.791 00:40:47 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:34.791 00:06:34.791 real 0m2.808s 00:06:34.791 user 0m2.385s 00:06:34.791 sys 0m0.224s 00:06:34.791 00:40:47 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:34.791 ************************************ 00:06:34.791 END TEST accel_fill 00:06:34.791 ************************************ 00:06:34.791 00:40:47 -- common/autotest_common.sh@10 -- # set +x 00:06:34.791 00:40:47 -- accel/accel.sh@97 -- # run_test accel_copy_crc32c accel_test -t 1 -w copy_crc32c -y 00:06:34.791 00:40:47 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:06:34.791 00:40:47 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:34.791 00:40:47 -- common/autotest_common.sh@10 -- # set +x 00:06:34.791 ************************************ 00:06:34.791 START TEST accel_copy_crc32c 00:06:34.791 ************************************ 00:06:34.791 00:40:47 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w copy_crc32c -y 00:06:34.791 00:40:47 -- accel/accel.sh@16 -- # local accel_opc 00:06:34.791 00:40:47 -- accel/accel.sh@17 -- # local accel_module 00:06:34.791 00:40:47 -- accel/accel.sh@18 -- # accel_perf -t 1 -w copy_crc32c -y 00:06:34.791 00:40:47 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y 00:06:34.791 00:40:47 -- accel/accel.sh@12 -- # build_accel_config 00:06:34.791 00:40:47 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:34.791 00:40:47 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:34.791 00:40:47 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:34.791 00:40:47 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:34.791 00:40:47 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:34.791 00:40:47 -- accel/accel.sh@41 -- # local IFS=, 00:06:34.791 00:40:47 -- accel/accel.sh@42 -- # jq -r . 00:06:35.049 [2024-12-03 00:40:47.320725] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:35.049 [2024-12-03 00:40:47.320842] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70694 ] 00:06:35.049 [2024-12-03 00:40:47.457135] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:35.049 [2024-12-03 00:40:47.509579] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:36.425 00:40:48 -- accel/accel.sh@18 -- # out=' 00:06:36.425 SPDK Configuration: 00:06:36.425 Core mask: 0x1 00:06:36.425 00:06:36.425 Accel Perf Configuration: 00:06:36.425 Workload Type: copy_crc32c 00:06:36.425 CRC-32C seed: 0 00:06:36.425 Vector size: 4096 bytes 00:06:36.425 Transfer size: 4096 bytes 00:06:36.425 Vector count 1 00:06:36.425 Module: software 00:06:36.425 Queue depth: 32 00:06:36.425 Allocate depth: 32 00:06:36.425 # threads/core: 1 00:06:36.425 Run time: 1 seconds 00:06:36.425 Verify: Yes 00:06:36.425 00:06:36.425 Running for 1 seconds... 
00:06:36.425 00:06:36.425 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:36.425 ------------------------------------------------------------------------------------ 00:06:36.425 0,0 308320/s 1204 MiB/s 0 0 00:06:36.425 ==================================================================================== 00:06:36.425 Total 308320/s 1204 MiB/s 0 0' 00:06:36.425 00:40:48 -- accel/accel.sh@20 -- # IFS=: 00:06:36.425 00:40:48 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y 00:06:36.425 00:40:48 -- accel/accel.sh@20 -- # read -r var val 00:06:36.425 00:40:48 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y 00:06:36.425 00:40:48 -- accel/accel.sh@12 -- # build_accel_config 00:06:36.425 00:40:48 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:36.425 00:40:48 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:36.425 00:40:48 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:36.425 00:40:48 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:36.425 00:40:48 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:36.425 00:40:48 -- accel/accel.sh@41 -- # local IFS=, 00:06:36.425 00:40:48 -- accel/accel.sh@42 -- # jq -r . 00:06:36.425 [2024-12-03 00:40:48.713975] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:36.425 [2024-12-03 00:40:48.714068] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70713 ] 00:06:36.425 [2024-12-03 00:40:48.850018] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:36.426 [2024-12-03 00:40:48.909237] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:36.686 00:40:48 -- accel/accel.sh@21 -- # val= 00:06:36.686 00:40:48 -- accel/accel.sh@22 -- # case "$var" in 00:06:36.686 00:40:48 -- accel/accel.sh@20 -- # IFS=: 00:06:36.686 00:40:48 -- accel/accel.sh@20 -- # read -r var val 00:06:36.686 00:40:48 -- accel/accel.sh@21 -- # val= 00:06:36.686 00:40:48 -- accel/accel.sh@22 -- # case "$var" in 00:06:36.686 00:40:48 -- accel/accel.sh@20 -- # IFS=: 00:06:36.686 00:40:48 -- accel/accel.sh@20 -- # read -r var val 00:06:36.686 00:40:48 -- accel/accel.sh@21 -- # val=0x1 00:06:36.686 00:40:48 -- accel/accel.sh@22 -- # case "$var" in 00:06:36.686 00:40:48 -- accel/accel.sh@20 -- # IFS=: 00:06:36.686 00:40:48 -- accel/accel.sh@20 -- # read -r var val 00:06:36.686 00:40:48 -- accel/accel.sh@21 -- # val= 00:06:36.686 00:40:48 -- accel/accel.sh@22 -- # case "$var" in 00:06:36.686 00:40:48 -- accel/accel.sh@20 -- # IFS=: 00:06:36.686 00:40:48 -- accel/accel.sh@20 -- # read -r var val 00:06:36.686 00:40:48 -- accel/accel.sh@21 -- # val= 00:06:36.686 00:40:48 -- accel/accel.sh@22 -- # case "$var" in 00:06:36.686 00:40:48 -- accel/accel.sh@20 -- # IFS=: 00:06:36.686 00:40:48 -- accel/accel.sh@20 -- # read -r var val 00:06:36.686 00:40:48 -- accel/accel.sh@21 -- # val=copy_crc32c 00:06:36.686 00:40:48 -- accel/accel.sh@22 -- # case "$var" in 00:06:36.686 00:40:48 -- accel/accel.sh@24 -- # accel_opc=copy_crc32c 00:06:36.686 00:40:48 -- accel/accel.sh@20 -- # IFS=: 00:06:36.686 00:40:48 -- accel/accel.sh@20 -- # read -r var val 00:06:36.686 00:40:48 -- accel/accel.sh@21 -- # val=0 00:06:36.686 00:40:48 -- accel/accel.sh@22 -- # case "$var" in 00:06:36.686 00:40:48 -- accel/accel.sh@20 -- # IFS=: 00:06:36.686 00:40:48 -- accel/accel.sh@20 -- # read -r var val 00:06:36.686 
00:40:48 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:36.686 00:40:48 -- accel/accel.sh@22 -- # case "$var" in 00:06:36.686 00:40:48 -- accel/accel.sh@20 -- # IFS=: 00:06:36.686 00:40:48 -- accel/accel.sh@20 -- # read -r var val 00:06:36.686 00:40:48 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:36.686 00:40:48 -- accel/accel.sh@22 -- # case "$var" in 00:06:36.686 00:40:48 -- accel/accel.sh@20 -- # IFS=: 00:06:36.686 00:40:48 -- accel/accel.sh@20 -- # read -r var val 00:06:36.686 00:40:48 -- accel/accel.sh@21 -- # val= 00:06:36.686 00:40:48 -- accel/accel.sh@22 -- # case "$var" in 00:06:36.686 00:40:48 -- accel/accel.sh@20 -- # IFS=: 00:06:36.686 00:40:48 -- accel/accel.sh@20 -- # read -r var val 00:06:36.686 00:40:48 -- accel/accel.sh@21 -- # val=software 00:06:36.686 00:40:48 -- accel/accel.sh@22 -- # case "$var" in 00:06:36.686 00:40:48 -- accel/accel.sh@23 -- # accel_module=software 00:06:36.686 00:40:48 -- accel/accel.sh@20 -- # IFS=: 00:06:36.686 00:40:48 -- accel/accel.sh@20 -- # read -r var val 00:06:36.686 00:40:48 -- accel/accel.sh@21 -- # val=32 00:06:36.686 00:40:48 -- accel/accel.sh@22 -- # case "$var" in 00:06:36.686 00:40:48 -- accel/accel.sh@20 -- # IFS=: 00:06:36.686 00:40:48 -- accel/accel.sh@20 -- # read -r var val 00:06:36.686 00:40:48 -- accel/accel.sh@21 -- # val=32 00:06:36.686 00:40:48 -- accel/accel.sh@22 -- # case "$var" in 00:06:36.686 00:40:48 -- accel/accel.sh@20 -- # IFS=: 00:06:36.686 00:40:48 -- accel/accel.sh@20 -- # read -r var val 00:06:36.687 00:40:48 -- accel/accel.sh@21 -- # val=1 00:06:36.687 00:40:48 -- accel/accel.sh@22 -- # case "$var" in 00:06:36.687 00:40:48 -- accel/accel.sh@20 -- # IFS=: 00:06:36.687 00:40:48 -- accel/accel.sh@20 -- # read -r var val 00:06:36.687 00:40:48 -- accel/accel.sh@21 -- # val='1 seconds' 00:06:36.687 00:40:48 -- accel/accel.sh@22 -- # case "$var" in 00:06:36.687 00:40:48 -- accel/accel.sh@20 -- # IFS=: 00:06:36.687 00:40:48 -- accel/accel.sh@20 -- # read -r var val 00:06:36.687 00:40:48 -- accel/accel.sh@21 -- # val=Yes 00:06:36.687 00:40:48 -- accel/accel.sh@22 -- # case "$var" in 00:06:36.687 00:40:48 -- accel/accel.sh@20 -- # IFS=: 00:06:36.687 00:40:48 -- accel/accel.sh@20 -- # read -r var val 00:06:36.687 00:40:48 -- accel/accel.sh@21 -- # val= 00:06:36.687 00:40:48 -- accel/accel.sh@22 -- # case "$var" in 00:06:36.687 00:40:48 -- accel/accel.sh@20 -- # IFS=: 00:06:36.687 00:40:48 -- accel/accel.sh@20 -- # read -r var val 00:06:36.687 00:40:48 -- accel/accel.sh@21 -- # val= 00:06:36.687 00:40:48 -- accel/accel.sh@22 -- # case "$var" in 00:06:36.687 00:40:48 -- accel/accel.sh@20 -- # IFS=: 00:06:36.687 00:40:48 -- accel/accel.sh@20 -- # read -r var val 00:06:37.625 00:40:50 -- accel/accel.sh@21 -- # val= 00:06:37.625 00:40:50 -- accel/accel.sh@22 -- # case "$var" in 00:06:37.625 00:40:50 -- accel/accel.sh@20 -- # IFS=: 00:06:37.625 00:40:50 -- accel/accel.sh@20 -- # read -r var val 00:06:37.625 00:40:50 -- accel/accel.sh@21 -- # val= 00:06:37.625 00:40:50 -- accel/accel.sh@22 -- # case "$var" in 00:06:37.625 00:40:50 -- accel/accel.sh@20 -- # IFS=: 00:06:37.625 00:40:50 -- accel/accel.sh@20 -- # read -r var val 00:06:37.625 00:40:50 -- accel/accel.sh@21 -- # val= 00:06:37.625 00:40:50 -- accel/accel.sh@22 -- # case "$var" in 00:06:37.625 00:40:50 -- accel/accel.sh@20 -- # IFS=: 00:06:37.625 00:40:50 -- accel/accel.sh@20 -- # read -r var val 00:06:37.625 00:40:50 -- accel/accel.sh@21 -- # val= 00:06:37.625 00:40:50 -- accel/accel.sh@22 -- # case "$var" in 00:06:37.625 00:40:50 -- accel/accel.sh@20 -- # IFS=: 
00:06:37.625 00:40:50 -- accel/accel.sh@20 -- # read -r var val 00:06:37.625 00:40:50 -- accel/accel.sh@21 -- # val= 00:06:37.625 00:40:50 -- accel/accel.sh@22 -- # case "$var" in 00:06:37.625 00:40:50 -- accel/accel.sh@20 -- # IFS=: 00:06:37.625 00:40:50 -- accel/accel.sh@20 -- # read -r var val 00:06:37.625 00:40:50 -- accel/accel.sh@21 -- # val= 00:06:37.625 00:40:50 -- accel/accel.sh@22 -- # case "$var" in 00:06:37.625 00:40:50 -- accel/accel.sh@20 -- # IFS=: 00:06:37.625 00:40:50 -- accel/accel.sh@20 -- # read -r var val 00:06:37.625 00:40:50 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:37.625 00:40:50 -- accel/accel.sh@28 -- # [[ -n copy_crc32c ]] 00:06:37.625 00:40:50 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:37.625 00:06:37.625 real 0m2.805s 00:06:37.625 user 0m2.385s 00:06:37.625 sys 0m0.223s 00:06:37.625 00:40:50 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:37.625 ************************************ 00:06:37.625 END TEST accel_copy_crc32c 00:06:37.625 ************************************ 00:06:37.625 00:40:50 -- common/autotest_common.sh@10 -- # set +x 00:06:37.625 00:40:50 -- accel/accel.sh@98 -- # run_test accel_copy_crc32c_C2 accel_test -t 1 -w copy_crc32c -y -C 2 00:06:37.625 00:40:50 -- common/autotest_common.sh@1087 -- # '[' 9 -le 1 ']' 00:06:37.625 00:40:50 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:37.625 00:40:50 -- common/autotest_common.sh@10 -- # set +x 00:06:37.884 ************************************ 00:06:37.884 START TEST accel_copy_crc32c_C2 00:06:37.884 ************************************ 00:06:37.884 00:40:50 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w copy_crc32c -y -C 2 00:06:37.884 00:40:50 -- accel/accel.sh@16 -- # local accel_opc 00:06:37.884 00:40:50 -- accel/accel.sh@17 -- # local accel_module 00:06:37.884 00:40:50 -- accel/accel.sh@18 -- # accel_perf -t 1 -w copy_crc32c -y -C 2 00:06:37.884 00:40:50 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y -C 2 00:06:37.884 00:40:50 -- accel/accel.sh@12 -- # build_accel_config 00:06:37.884 00:40:50 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:37.885 00:40:50 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:37.885 00:40:50 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:37.885 00:40:50 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:37.885 00:40:50 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:37.885 00:40:50 -- accel/accel.sh@41 -- # local IFS=, 00:06:37.885 00:40:50 -- accel/accel.sh@42 -- # jq -r . 00:06:37.885 [2024-12-03 00:40:50.175513] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:06:37.885 [2024-12-03 00:40:50.175612] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70748 ] 00:06:37.885 [2024-12-03 00:40:50.311890] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:37.885 [2024-12-03 00:40:50.370168] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:39.263 00:40:51 -- accel/accel.sh@18 -- # out=' 00:06:39.263 SPDK Configuration: 00:06:39.263 Core mask: 0x1 00:06:39.263 00:06:39.263 Accel Perf Configuration: 00:06:39.263 Workload Type: copy_crc32c 00:06:39.263 CRC-32C seed: 0 00:06:39.263 Vector size: 4096 bytes 00:06:39.263 Transfer size: 8192 bytes 00:06:39.263 Vector count 2 00:06:39.263 Module: software 00:06:39.263 Queue depth: 32 00:06:39.263 Allocate depth: 32 00:06:39.263 # threads/core: 1 00:06:39.263 Run time: 1 seconds 00:06:39.263 Verify: Yes 00:06:39.263 00:06:39.263 Running for 1 seconds... 00:06:39.263 00:06:39.263 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:39.263 ------------------------------------------------------------------------------------ 00:06:39.263 0,0 219360/s 1713 MiB/s 0 0 00:06:39.263 ==================================================================================== 00:06:39.263 Total 219360/s 856 MiB/s 0 0' 00:06:39.263 00:40:51 -- accel/accel.sh@20 -- # IFS=: 00:06:39.263 00:40:51 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y -C 2 00:06:39.263 00:40:51 -- accel/accel.sh@20 -- # read -r var val 00:06:39.263 00:40:51 -- accel/accel.sh@12 -- # build_accel_config 00:06:39.263 00:40:51 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y -C 2 00:06:39.263 00:40:51 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:39.263 00:40:51 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:39.263 00:40:51 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:39.263 00:40:51 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:39.263 00:40:51 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:39.263 00:40:51 -- accel/accel.sh@41 -- # local IFS=, 00:06:39.263 00:40:51 -- accel/accel.sh@42 -- # jq -r . 00:06:39.263 [2024-12-03 00:40:51.582889] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:06:39.263 [2024-12-03 00:40:51.582995] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70767 ] 00:06:39.264 [2024-12-03 00:40:51.711778] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:39.264 [2024-12-03 00:40:51.763173] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:39.549 00:40:51 -- accel/accel.sh@21 -- # val= 00:06:39.549 00:40:51 -- accel/accel.sh@22 -- # case "$var" in 00:06:39.549 00:40:51 -- accel/accel.sh@20 -- # IFS=: 00:06:39.549 00:40:51 -- accel/accel.sh@20 -- # read -r var val 00:06:39.549 00:40:51 -- accel/accel.sh@21 -- # val= 00:06:39.549 00:40:51 -- accel/accel.sh@22 -- # case "$var" in 00:06:39.549 00:40:51 -- accel/accel.sh@20 -- # IFS=: 00:06:39.549 00:40:51 -- accel/accel.sh@20 -- # read -r var val 00:06:39.549 00:40:51 -- accel/accel.sh@21 -- # val=0x1 00:06:39.549 00:40:51 -- accel/accel.sh@22 -- # case "$var" in 00:06:39.549 00:40:51 -- accel/accel.sh@20 -- # IFS=: 00:06:39.549 00:40:51 -- accel/accel.sh@20 -- # read -r var val 00:06:39.549 00:40:51 -- accel/accel.sh@21 -- # val= 00:06:39.549 00:40:51 -- accel/accel.sh@22 -- # case "$var" in 00:06:39.549 00:40:51 -- accel/accel.sh@20 -- # IFS=: 00:06:39.549 00:40:51 -- accel/accel.sh@20 -- # read -r var val 00:06:39.549 00:40:51 -- accel/accel.sh@21 -- # val= 00:06:39.549 00:40:51 -- accel/accel.sh@22 -- # case "$var" in 00:06:39.549 00:40:51 -- accel/accel.sh@20 -- # IFS=: 00:06:39.549 00:40:51 -- accel/accel.sh@20 -- # read -r var val 00:06:39.549 00:40:51 -- accel/accel.sh@21 -- # val=copy_crc32c 00:06:39.549 00:40:51 -- accel/accel.sh@22 -- # case "$var" in 00:06:39.549 00:40:51 -- accel/accel.sh@24 -- # accel_opc=copy_crc32c 00:06:39.549 00:40:51 -- accel/accel.sh@20 -- # IFS=: 00:06:39.549 00:40:51 -- accel/accel.sh@20 -- # read -r var val 00:06:39.549 00:40:51 -- accel/accel.sh@21 -- # val=0 00:06:39.549 00:40:51 -- accel/accel.sh@22 -- # case "$var" in 00:06:39.549 00:40:51 -- accel/accel.sh@20 -- # IFS=: 00:06:39.549 00:40:51 -- accel/accel.sh@20 -- # read -r var val 00:06:39.549 00:40:51 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:39.549 00:40:51 -- accel/accel.sh@22 -- # case "$var" in 00:06:39.549 00:40:51 -- accel/accel.sh@20 -- # IFS=: 00:06:39.549 00:40:51 -- accel/accel.sh@20 -- # read -r var val 00:06:39.549 00:40:51 -- accel/accel.sh@21 -- # val='8192 bytes' 00:06:39.549 00:40:51 -- accel/accel.sh@22 -- # case "$var" in 00:06:39.549 00:40:51 -- accel/accel.sh@20 -- # IFS=: 00:06:39.549 00:40:51 -- accel/accel.sh@20 -- # read -r var val 00:06:39.549 00:40:51 -- accel/accel.sh@21 -- # val= 00:06:39.549 00:40:51 -- accel/accel.sh@22 -- # case "$var" in 00:06:39.549 00:40:51 -- accel/accel.sh@20 -- # IFS=: 00:06:39.549 00:40:51 -- accel/accel.sh@20 -- # read -r var val 00:06:39.549 00:40:51 -- accel/accel.sh@21 -- # val=software 00:06:39.549 00:40:51 -- accel/accel.sh@22 -- # case "$var" in 00:06:39.549 00:40:51 -- accel/accel.sh@23 -- # accel_module=software 00:06:39.549 00:40:51 -- accel/accel.sh@20 -- # IFS=: 00:06:39.549 00:40:51 -- accel/accel.sh@20 -- # read -r var val 00:06:39.549 00:40:51 -- accel/accel.sh@21 -- # val=32 00:06:39.549 00:40:51 -- accel/accel.sh@22 -- # case "$var" in 00:06:39.549 00:40:51 -- accel/accel.sh@20 -- # IFS=: 00:06:39.549 00:40:51 -- accel/accel.sh@20 -- # read -r var val 00:06:39.549 00:40:51 -- accel/accel.sh@21 -- # val=32 
00:06:39.549 00:40:51 -- accel/accel.sh@22 -- # case "$var" in 00:06:39.549 00:40:51 -- accel/accel.sh@20 -- # IFS=: 00:06:39.549 00:40:51 -- accel/accel.sh@20 -- # read -r var val 00:06:39.549 00:40:51 -- accel/accel.sh@21 -- # val=1 00:06:39.549 00:40:51 -- accel/accel.sh@22 -- # case "$var" in 00:06:39.549 00:40:51 -- accel/accel.sh@20 -- # IFS=: 00:06:39.549 00:40:51 -- accel/accel.sh@20 -- # read -r var val 00:06:39.549 00:40:51 -- accel/accel.sh@21 -- # val='1 seconds' 00:06:39.549 00:40:51 -- accel/accel.sh@22 -- # case "$var" in 00:06:39.549 00:40:51 -- accel/accel.sh@20 -- # IFS=: 00:06:39.549 00:40:51 -- accel/accel.sh@20 -- # read -r var val 00:06:39.549 00:40:51 -- accel/accel.sh@21 -- # val=Yes 00:06:39.549 00:40:51 -- accel/accel.sh@22 -- # case "$var" in 00:06:39.549 00:40:51 -- accel/accel.sh@20 -- # IFS=: 00:06:39.549 00:40:51 -- accel/accel.sh@20 -- # read -r var val 00:06:39.549 00:40:51 -- accel/accel.sh@21 -- # val= 00:06:39.549 00:40:51 -- accel/accel.sh@22 -- # case "$var" in 00:06:39.549 00:40:51 -- accel/accel.sh@20 -- # IFS=: 00:06:39.549 00:40:51 -- accel/accel.sh@20 -- # read -r var val 00:06:39.549 00:40:51 -- accel/accel.sh@21 -- # val= 00:06:39.549 00:40:51 -- accel/accel.sh@22 -- # case "$var" in 00:06:39.549 00:40:51 -- accel/accel.sh@20 -- # IFS=: 00:06:39.549 00:40:51 -- accel/accel.sh@20 -- # read -r var val 00:06:40.486 00:40:52 -- accel/accel.sh@21 -- # val= 00:06:40.486 00:40:52 -- accel/accel.sh@22 -- # case "$var" in 00:06:40.486 00:40:52 -- accel/accel.sh@20 -- # IFS=: 00:06:40.486 00:40:52 -- accel/accel.sh@20 -- # read -r var val 00:06:40.486 00:40:52 -- accel/accel.sh@21 -- # val= 00:06:40.486 00:40:52 -- accel/accel.sh@22 -- # case "$var" in 00:06:40.486 00:40:52 -- accel/accel.sh@20 -- # IFS=: 00:06:40.486 00:40:52 -- accel/accel.sh@20 -- # read -r var val 00:06:40.486 00:40:52 -- accel/accel.sh@21 -- # val= 00:06:40.486 00:40:52 -- accel/accel.sh@22 -- # case "$var" in 00:06:40.486 00:40:52 -- accel/accel.sh@20 -- # IFS=: 00:06:40.486 00:40:52 -- accel/accel.sh@20 -- # read -r var val 00:06:40.486 00:40:52 -- accel/accel.sh@21 -- # val= 00:06:40.486 00:40:52 -- accel/accel.sh@22 -- # case "$var" in 00:06:40.486 00:40:52 -- accel/accel.sh@20 -- # IFS=: 00:06:40.486 00:40:52 -- accel/accel.sh@20 -- # read -r var val 00:06:40.486 00:40:52 -- accel/accel.sh@21 -- # val= 00:06:40.486 00:40:52 -- accel/accel.sh@22 -- # case "$var" in 00:06:40.486 00:40:52 -- accel/accel.sh@20 -- # IFS=: 00:06:40.486 00:40:52 -- accel/accel.sh@20 -- # read -r var val 00:06:40.486 00:40:52 -- accel/accel.sh@21 -- # val= 00:06:40.486 00:40:52 -- accel/accel.sh@22 -- # case "$var" in 00:06:40.486 00:40:52 -- accel/accel.sh@20 -- # IFS=: 00:06:40.486 00:40:52 -- accel/accel.sh@20 -- # read -r var val 00:06:40.486 00:40:52 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:40.486 00:40:52 -- accel/accel.sh@28 -- # [[ -n copy_crc32c ]] 00:06:40.486 00:40:52 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:40.486 00:06:40.486 real 0m2.796s 00:06:40.486 user 0m2.375s 00:06:40.486 sys 0m0.223s 00:06:40.486 00:40:52 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:40.486 00:40:52 -- common/autotest_common.sh@10 -- # set +x 00:06:40.487 ************************************ 00:06:40.487 END TEST accel_copy_crc32c_C2 00:06:40.487 ************************************ 00:06:40.487 00:40:52 -- accel/accel.sh@99 -- # run_test accel_dualcast accel_test -t 1 -w dualcast -y 00:06:40.487 00:40:52 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 
00:06:40.487 00:40:52 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:40.487 00:40:52 -- common/autotest_common.sh@10 -- # set +x 00:06:40.487 ************************************ 00:06:40.487 START TEST accel_dualcast 00:06:40.487 ************************************ 00:06:40.487 00:40:52 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w dualcast -y 00:06:40.487 00:40:52 -- accel/accel.sh@16 -- # local accel_opc 00:06:40.487 00:40:52 -- accel/accel.sh@17 -- # local accel_module 00:06:40.746 00:40:53 -- accel/accel.sh@18 -- # accel_perf -t 1 -w dualcast -y 00:06:40.746 00:40:53 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dualcast -y 00:06:40.746 00:40:53 -- accel/accel.sh@12 -- # build_accel_config 00:06:40.746 00:40:53 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:40.746 00:40:53 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:40.746 00:40:53 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:40.746 00:40:53 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:40.746 00:40:53 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:40.746 00:40:53 -- accel/accel.sh@41 -- # local IFS=, 00:06:40.746 00:40:53 -- accel/accel.sh@42 -- # jq -r . 00:06:40.746 [2024-12-03 00:40:53.017855] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:40.746 [2024-12-03 00:40:53.017923] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70796 ] 00:06:40.746 [2024-12-03 00:40:53.148900] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:40.746 [2024-12-03 00:40:53.207404] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:42.125 00:40:54 -- accel/accel.sh@18 -- # out=' 00:06:42.125 SPDK Configuration: 00:06:42.125 Core mask: 0x1 00:06:42.125 00:06:42.125 Accel Perf Configuration: 00:06:42.125 Workload Type: dualcast 00:06:42.125 Transfer size: 4096 bytes 00:06:42.125 Vector count 1 00:06:42.125 Module: software 00:06:42.125 Queue depth: 32 00:06:42.125 Allocate depth: 32 00:06:42.125 # threads/core: 1 00:06:42.125 Run time: 1 seconds 00:06:42.125 Verify: Yes 00:06:42.125 00:06:42.125 Running for 1 seconds... 00:06:42.125 00:06:42.125 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:42.125 ------------------------------------------------------------------------------------ 00:06:42.125 0,0 424192/s 1657 MiB/s 0 0 00:06:42.125 ==================================================================================== 00:06:42.125 Total 424192/s 1657 MiB/s 0 0' 00:06:42.125 00:40:54 -- accel/accel.sh@20 -- # IFS=: 00:06:42.125 00:40:54 -- accel/accel.sh@15 -- # accel_perf -t 1 -w dualcast -y 00:06:42.125 00:40:54 -- accel/accel.sh@20 -- # read -r var val 00:06:42.125 00:40:54 -- accel/accel.sh@12 -- # build_accel_config 00:06:42.125 00:40:54 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dualcast -y 00:06:42.125 00:40:54 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:42.125 00:40:54 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:42.125 00:40:54 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:42.125 00:40:54 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:42.125 00:40:54 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:42.125 00:40:54 -- accel/accel.sh@41 -- # local IFS=, 00:06:42.125 00:40:54 -- accel/accel.sh@42 -- # jq -r . 
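The dualcast case is driven by the accel_perf example binary shown in the trace above. A minimal sketch of repeating just this workload by hand, assuming the same /home/vagrant/spdk_repo layout as this log; the harness additionally passes -c /dev/fd/62 to feed a JSON accel config, which is empty in this trace (accel_json_cfg=()), so dropping it should still give a plain software-module run:

  ACCEL_PERF=/home/vagrant/spdk_repo/spdk/build/examples/accel_perf
  # 1-second software dualcast run with data verification, matching the -t 1 -w dualcast -y invocation above
  "$ACCEL_PERF" -t 1 -w dualcast -y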
00:06:42.125 [2024-12-03 00:40:54.410845] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:42.125 [2024-12-03 00:40:54.410949] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70816 ] 00:06:42.125 [2024-12-03 00:40:54.544314] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:42.125 [2024-12-03 00:40:54.594880] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:42.385 00:40:54 -- accel/accel.sh@21 -- # val= 00:06:42.385 00:40:54 -- accel/accel.sh@22 -- # case "$var" in 00:06:42.385 00:40:54 -- accel/accel.sh@20 -- # IFS=: 00:06:42.385 00:40:54 -- accel/accel.sh@20 -- # read -r var val 00:06:42.385 00:40:54 -- accel/accel.sh@21 -- # val= 00:06:42.385 00:40:54 -- accel/accel.sh@22 -- # case "$var" in 00:06:42.385 00:40:54 -- accel/accel.sh@20 -- # IFS=: 00:06:42.385 00:40:54 -- accel/accel.sh@20 -- # read -r var val 00:06:42.385 00:40:54 -- accel/accel.sh@21 -- # val=0x1 00:06:42.385 00:40:54 -- accel/accel.sh@22 -- # case "$var" in 00:06:42.385 00:40:54 -- accel/accel.sh@20 -- # IFS=: 00:06:42.385 00:40:54 -- accel/accel.sh@20 -- # read -r var val 00:06:42.385 00:40:54 -- accel/accel.sh@21 -- # val= 00:06:42.385 00:40:54 -- accel/accel.sh@22 -- # case "$var" in 00:06:42.385 00:40:54 -- accel/accel.sh@20 -- # IFS=: 00:06:42.385 00:40:54 -- accel/accel.sh@20 -- # read -r var val 00:06:42.385 00:40:54 -- accel/accel.sh@21 -- # val= 00:06:42.385 00:40:54 -- accel/accel.sh@22 -- # case "$var" in 00:06:42.385 00:40:54 -- accel/accel.sh@20 -- # IFS=: 00:06:42.385 00:40:54 -- accel/accel.sh@20 -- # read -r var val 00:06:42.385 00:40:54 -- accel/accel.sh@21 -- # val=dualcast 00:06:42.385 00:40:54 -- accel/accel.sh@22 -- # case "$var" in 00:06:42.385 00:40:54 -- accel/accel.sh@24 -- # accel_opc=dualcast 00:06:42.385 00:40:54 -- accel/accel.sh@20 -- # IFS=: 00:06:42.385 00:40:54 -- accel/accel.sh@20 -- # read -r var val 00:06:42.385 00:40:54 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:42.385 00:40:54 -- accel/accel.sh@22 -- # case "$var" in 00:06:42.385 00:40:54 -- accel/accel.sh@20 -- # IFS=: 00:06:42.385 00:40:54 -- accel/accel.sh@20 -- # read -r var val 00:06:42.385 00:40:54 -- accel/accel.sh@21 -- # val= 00:06:42.385 00:40:54 -- accel/accel.sh@22 -- # case "$var" in 00:06:42.385 00:40:54 -- accel/accel.sh@20 -- # IFS=: 00:06:42.385 00:40:54 -- accel/accel.sh@20 -- # read -r var val 00:06:42.385 00:40:54 -- accel/accel.sh@21 -- # val=software 00:06:42.385 00:40:54 -- accel/accel.sh@22 -- # case "$var" in 00:06:42.385 00:40:54 -- accel/accel.sh@23 -- # accel_module=software 00:06:42.385 00:40:54 -- accel/accel.sh@20 -- # IFS=: 00:06:42.385 00:40:54 -- accel/accel.sh@20 -- # read -r var val 00:06:42.385 00:40:54 -- accel/accel.sh@21 -- # val=32 00:06:42.385 00:40:54 -- accel/accel.sh@22 -- # case "$var" in 00:06:42.385 00:40:54 -- accel/accel.sh@20 -- # IFS=: 00:06:42.385 00:40:54 -- accel/accel.sh@20 -- # read -r var val 00:06:42.385 00:40:54 -- accel/accel.sh@21 -- # val=32 00:06:42.385 00:40:54 -- accel/accel.sh@22 -- # case "$var" in 00:06:42.385 00:40:54 -- accel/accel.sh@20 -- # IFS=: 00:06:42.385 00:40:54 -- accel/accel.sh@20 -- # read -r var val 00:06:42.385 00:40:54 -- accel/accel.sh@21 -- # val=1 00:06:42.385 00:40:54 -- accel/accel.sh@22 -- # case "$var" in 00:06:42.385 00:40:54 -- accel/accel.sh@20 -- # IFS=: 00:06:42.385 
00:40:54 -- accel/accel.sh@20 -- # read -r var val 00:06:42.385 00:40:54 -- accel/accel.sh@21 -- # val='1 seconds' 00:06:42.385 00:40:54 -- accel/accel.sh@22 -- # case "$var" in 00:06:42.385 00:40:54 -- accel/accel.sh@20 -- # IFS=: 00:06:42.385 00:40:54 -- accel/accel.sh@20 -- # read -r var val 00:06:42.385 00:40:54 -- accel/accel.sh@21 -- # val=Yes 00:06:42.385 00:40:54 -- accel/accel.sh@22 -- # case "$var" in 00:06:42.385 00:40:54 -- accel/accel.sh@20 -- # IFS=: 00:06:42.385 00:40:54 -- accel/accel.sh@20 -- # read -r var val 00:06:42.385 00:40:54 -- accel/accel.sh@21 -- # val= 00:06:42.385 00:40:54 -- accel/accel.sh@22 -- # case "$var" in 00:06:42.385 00:40:54 -- accel/accel.sh@20 -- # IFS=: 00:06:42.385 00:40:54 -- accel/accel.sh@20 -- # read -r var val 00:06:42.385 00:40:54 -- accel/accel.sh@21 -- # val= 00:06:42.385 00:40:54 -- accel/accel.sh@22 -- # case "$var" in 00:06:42.385 00:40:54 -- accel/accel.sh@20 -- # IFS=: 00:06:42.385 00:40:54 -- accel/accel.sh@20 -- # read -r var val 00:06:43.321 00:40:55 -- accel/accel.sh@21 -- # val= 00:06:43.321 00:40:55 -- accel/accel.sh@22 -- # case "$var" in 00:06:43.321 00:40:55 -- accel/accel.sh@20 -- # IFS=: 00:06:43.321 00:40:55 -- accel/accel.sh@20 -- # read -r var val 00:06:43.321 00:40:55 -- accel/accel.sh@21 -- # val= 00:06:43.321 00:40:55 -- accel/accel.sh@22 -- # case "$var" in 00:06:43.321 00:40:55 -- accel/accel.sh@20 -- # IFS=: 00:06:43.321 00:40:55 -- accel/accel.sh@20 -- # read -r var val 00:06:43.321 00:40:55 -- accel/accel.sh@21 -- # val= 00:06:43.321 00:40:55 -- accel/accel.sh@22 -- # case "$var" in 00:06:43.321 00:40:55 -- accel/accel.sh@20 -- # IFS=: 00:06:43.321 00:40:55 -- accel/accel.sh@20 -- # read -r var val 00:06:43.321 00:40:55 -- accel/accel.sh@21 -- # val= 00:06:43.321 00:40:55 -- accel/accel.sh@22 -- # case "$var" in 00:06:43.321 00:40:55 -- accel/accel.sh@20 -- # IFS=: 00:06:43.321 00:40:55 -- accel/accel.sh@20 -- # read -r var val 00:06:43.321 00:40:55 -- accel/accel.sh@21 -- # val= 00:06:43.321 00:40:55 -- accel/accel.sh@22 -- # case "$var" in 00:06:43.321 00:40:55 -- accel/accel.sh@20 -- # IFS=: 00:06:43.321 00:40:55 -- accel/accel.sh@20 -- # read -r var val 00:06:43.321 00:40:55 -- accel/accel.sh@21 -- # val= 00:06:43.321 00:40:55 -- accel/accel.sh@22 -- # case "$var" in 00:06:43.321 00:40:55 -- accel/accel.sh@20 -- # IFS=: 00:06:43.321 00:40:55 -- accel/accel.sh@20 -- # read -r var val 00:06:43.321 00:40:55 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:43.321 00:40:55 -- accel/accel.sh@28 -- # [[ -n dualcast ]] 00:06:43.321 00:40:55 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:43.321 00:06:43.321 real 0m2.783s 00:06:43.321 user 0m2.373s 00:06:43.321 sys 0m0.210s 00:06:43.321 00:40:55 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:43.321 00:40:55 -- common/autotest_common.sh@10 -- # set +x 00:06:43.321 ************************************ 00:06:43.321 END TEST accel_dualcast 00:06:43.321 ************************************ 00:06:43.321 00:40:55 -- accel/accel.sh@100 -- # run_test accel_compare accel_test -t 1 -w compare -y 00:06:43.321 00:40:55 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:06:43.321 00:40:55 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:43.321 00:40:55 -- common/autotest_common.sh@10 -- # set +x 00:06:43.321 ************************************ 00:06:43.321 START TEST accel_compare 00:06:43.321 ************************************ 00:06:43.321 00:40:55 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w compare -y 00:06:43.321 
00:40:55 -- accel/accel.sh@16 -- # local accel_opc 00:06:43.321 00:40:55 -- accel/accel.sh@17 -- # local accel_module 00:06:43.321 00:40:55 -- accel/accel.sh@18 -- # accel_perf -t 1 -w compare -y 00:06:43.321 00:40:55 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compare -y 00:06:43.321 00:40:55 -- accel/accel.sh@12 -- # build_accel_config 00:06:43.321 00:40:55 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:43.321 00:40:55 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:43.321 00:40:55 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:43.321 00:40:55 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:43.321 00:40:55 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:43.321 00:40:55 -- accel/accel.sh@41 -- # local IFS=, 00:06:43.321 00:40:55 -- accel/accel.sh@42 -- # jq -r . 00:06:43.579 [2024-12-03 00:40:55.847390] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:43.579 [2024-12-03 00:40:55.847506] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70850 ] 00:06:43.579 [2024-12-03 00:40:55.976520] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:43.579 [2024-12-03 00:40:56.029947] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:44.956 00:40:57 -- accel/accel.sh@18 -- # out=' 00:06:44.956 SPDK Configuration: 00:06:44.956 Core mask: 0x1 00:06:44.956 00:06:44.956 Accel Perf Configuration: 00:06:44.956 Workload Type: compare 00:06:44.956 Transfer size: 4096 bytes 00:06:44.956 Vector count 1 00:06:44.956 Module: software 00:06:44.956 Queue depth: 32 00:06:44.956 Allocate depth: 32 00:06:44.956 # threads/core: 1 00:06:44.956 Run time: 1 seconds 00:06:44.956 Verify: Yes 00:06:44.956 00:06:44.956 Running for 1 seconds... 00:06:44.956 00:06:44.957 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:44.957 ------------------------------------------------------------------------------------ 00:06:44.957 0,0 560480/s 2189 MiB/s 0 0 00:06:44.957 ==================================================================================== 00:06:44.957 Total 560480/s 2189 MiB/s 0 0' 00:06:44.957 00:40:57 -- accel/accel.sh@20 -- # IFS=: 00:06:44.957 00:40:57 -- accel/accel.sh@15 -- # accel_perf -t 1 -w compare -y 00:06:44.957 00:40:57 -- accel/accel.sh@20 -- # read -r var val 00:06:44.957 00:40:57 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compare -y 00:06:44.957 00:40:57 -- accel/accel.sh@12 -- # build_accel_config 00:06:44.957 00:40:57 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:44.957 00:40:57 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:44.957 00:40:57 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:44.957 00:40:57 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:44.957 00:40:57 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:44.957 00:40:57 -- accel/accel.sh@41 -- # local IFS=, 00:06:44.957 00:40:57 -- accel/accel.sh@42 -- # jq -r . 00:06:44.957 [2024-12-03 00:40:57.248987] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:06:44.957 [2024-12-03 00:40:57.249077] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70865 ] 00:06:44.957 [2024-12-03 00:40:57.382009] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:44.957 [2024-12-03 00:40:57.434939] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:45.215 00:40:57 -- accel/accel.sh@21 -- # val= 00:06:45.215 00:40:57 -- accel/accel.sh@22 -- # case "$var" in 00:06:45.215 00:40:57 -- accel/accel.sh@20 -- # IFS=: 00:06:45.215 00:40:57 -- accel/accel.sh@20 -- # read -r var val 00:06:45.215 00:40:57 -- accel/accel.sh@21 -- # val= 00:06:45.215 00:40:57 -- accel/accel.sh@22 -- # case "$var" in 00:06:45.215 00:40:57 -- accel/accel.sh@20 -- # IFS=: 00:06:45.215 00:40:57 -- accel/accel.sh@20 -- # read -r var val 00:06:45.215 00:40:57 -- accel/accel.sh@21 -- # val=0x1 00:06:45.215 00:40:57 -- accel/accel.sh@22 -- # case "$var" in 00:06:45.215 00:40:57 -- accel/accel.sh@20 -- # IFS=: 00:06:45.215 00:40:57 -- accel/accel.sh@20 -- # read -r var val 00:06:45.215 00:40:57 -- accel/accel.sh@21 -- # val= 00:06:45.215 00:40:57 -- accel/accel.sh@22 -- # case "$var" in 00:06:45.215 00:40:57 -- accel/accel.sh@20 -- # IFS=: 00:06:45.215 00:40:57 -- accel/accel.sh@20 -- # read -r var val 00:06:45.215 00:40:57 -- accel/accel.sh@21 -- # val= 00:06:45.215 00:40:57 -- accel/accel.sh@22 -- # case "$var" in 00:06:45.215 00:40:57 -- accel/accel.sh@20 -- # IFS=: 00:06:45.215 00:40:57 -- accel/accel.sh@20 -- # read -r var val 00:06:45.215 00:40:57 -- accel/accel.sh@21 -- # val=compare 00:06:45.215 00:40:57 -- accel/accel.sh@22 -- # case "$var" in 00:06:45.215 00:40:57 -- accel/accel.sh@24 -- # accel_opc=compare 00:06:45.215 00:40:57 -- accel/accel.sh@20 -- # IFS=: 00:06:45.215 00:40:57 -- accel/accel.sh@20 -- # read -r var val 00:06:45.215 00:40:57 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:45.215 00:40:57 -- accel/accel.sh@22 -- # case "$var" in 00:06:45.215 00:40:57 -- accel/accel.sh@20 -- # IFS=: 00:06:45.216 00:40:57 -- accel/accel.sh@20 -- # read -r var val 00:06:45.216 00:40:57 -- accel/accel.sh@21 -- # val= 00:06:45.216 00:40:57 -- accel/accel.sh@22 -- # case "$var" in 00:06:45.216 00:40:57 -- accel/accel.sh@20 -- # IFS=: 00:06:45.216 00:40:57 -- accel/accel.sh@20 -- # read -r var val 00:06:45.216 00:40:57 -- accel/accel.sh@21 -- # val=software 00:06:45.216 00:40:57 -- accel/accel.sh@22 -- # case "$var" in 00:06:45.216 00:40:57 -- accel/accel.sh@23 -- # accel_module=software 00:06:45.216 00:40:57 -- accel/accel.sh@20 -- # IFS=: 00:06:45.216 00:40:57 -- accel/accel.sh@20 -- # read -r var val 00:06:45.216 00:40:57 -- accel/accel.sh@21 -- # val=32 00:06:45.216 00:40:57 -- accel/accel.sh@22 -- # case "$var" in 00:06:45.216 00:40:57 -- accel/accel.sh@20 -- # IFS=: 00:06:45.216 00:40:57 -- accel/accel.sh@20 -- # read -r var val 00:06:45.216 00:40:57 -- accel/accel.sh@21 -- # val=32 00:06:45.216 00:40:57 -- accel/accel.sh@22 -- # case "$var" in 00:06:45.216 00:40:57 -- accel/accel.sh@20 -- # IFS=: 00:06:45.216 00:40:57 -- accel/accel.sh@20 -- # read -r var val 00:06:45.216 00:40:57 -- accel/accel.sh@21 -- # val=1 00:06:45.216 00:40:57 -- accel/accel.sh@22 -- # case "$var" in 00:06:45.216 00:40:57 -- accel/accel.sh@20 -- # IFS=: 00:06:45.216 00:40:57 -- accel/accel.sh@20 -- # read -r var val 00:06:45.216 00:40:57 -- accel/accel.sh@21 -- # val='1 seconds' 
00:06:45.216 00:40:57 -- accel/accel.sh@22 -- # case "$var" in 00:06:45.216 00:40:57 -- accel/accel.sh@20 -- # IFS=: 00:06:45.216 00:40:57 -- accel/accel.sh@20 -- # read -r var val 00:06:45.216 00:40:57 -- accel/accel.sh@21 -- # val=Yes 00:06:45.216 00:40:57 -- accel/accel.sh@22 -- # case "$var" in 00:06:45.216 00:40:57 -- accel/accel.sh@20 -- # IFS=: 00:06:45.216 00:40:57 -- accel/accel.sh@20 -- # read -r var val 00:06:45.216 00:40:57 -- accel/accel.sh@21 -- # val= 00:06:45.216 00:40:57 -- accel/accel.sh@22 -- # case "$var" in 00:06:45.216 00:40:57 -- accel/accel.sh@20 -- # IFS=: 00:06:45.216 00:40:57 -- accel/accel.sh@20 -- # read -r var val 00:06:45.216 00:40:57 -- accel/accel.sh@21 -- # val= 00:06:45.216 00:40:57 -- accel/accel.sh@22 -- # case "$var" in 00:06:45.216 00:40:57 -- accel/accel.sh@20 -- # IFS=: 00:06:45.216 00:40:57 -- accel/accel.sh@20 -- # read -r var val 00:06:46.153 00:40:58 -- accel/accel.sh@21 -- # val= 00:06:46.153 00:40:58 -- accel/accel.sh@22 -- # case "$var" in 00:06:46.153 00:40:58 -- accel/accel.sh@20 -- # IFS=: 00:06:46.153 00:40:58 -- accel/accel.sh@20 -- # read -r var val 00:06:46.153 00:40:58 -- accel/accel.sh@21 -- # val= 00:06:46.153 00:40:58 -- accel/accel.sh@22 -- # case "$var" in 00:06:46.153 00:40:58 -- accel/accel.sh@20 -- # IFS=: 00:06:46.153 00:40:58 -- accel/accel.sh@20 -- # read -r var val 00:06:46.153 00:40:58 -- accel/accel.sh@21 -- # val= 00:06:46.153 00:40:58 -- accel/accel.sh@22 -- # case "$var" in 00:06:46.153 00:40:58 -- accel/accel.sh@20 -- # IFS=: 00:06:46.153 00:40:58 -- accel/accel.sh@20 -- # read -r var val 00:06:46.153 00:40:58 -- accel/accel.sh@21 -- # val= 00:06:46.153 00:40:58 -- accel/accel.sh@22 -- # case "$var" in 00:06:46.153 00:40:58 -- accel/accel.sh@20 -- # IFS=: 00:06:46.153 00:40:58 -- accel/accel.sh@20 -- # read -r var val 00:06:46.153 00:40:58 -- accel/accel.sh@21 -- # val= 00:06:46.153 00:40:58 -- accel/accel.sh@22 -- # case "$var" in 00:06:46.153 00:40:58 -- accel/accel.sh@20 -- # IFS=: 00:06:46.153 00:40:58 -- accel/accel.sh@20 -- # read -r var val 00:06:46.153 00:40:58 -- accel/accel.sh@21 -- # val= 00:06:46.153 00:40:58 -- accel/accel.sh@22 -- # case "$var" in 00:06:46.153 00:40:58 -- accel/accel.sh@20 -- # IFS=: 00:06:46.153 00:40:58 -- accel/accel.sh@20 -- # read -r var val 00:06:46.153 00:40:58 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:46.153 00:40:58 -- accel/accel.sh@28 -- # [[ -n compare ]] 00:06:46.153 00:40:58 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:46.153 00:06:46.153 real 0m2.790s 00:06:46.153 user 0m2.389s 00:06:46.153 sys 0m0.200s 00:06:46.153 00:40:58 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:46.153 00:40:58 -- common/autotest_common.sh@10 -- # set +x 00:06:46.153 ************************************ 00:06:46.153 END TEST accel_compare 00:06:46.153 ************************************ 00:06:46.153 00:40:58 -- accel/accel.sh@101 -- # run_test accel_xor accel_test -t 1 -w xor -y 00:06:46.153 00:40:58 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:06:46.153 00:40:58 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:46.153 00:40:58 -- common/autotest_common.sh@10 -- # set +x 00:06:46.412 ************************************ 00:06:46.412 START TEST accel_xor 00:06:46.412 ************************************ 00:06:46.412 00:40:58 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w xor -y 00:06:46.412 00:40:58 -- accel/accel.sh@16 -- # local accel_opc 00:06:46.412 00:40:58 -- accel/accel.sh@17 -- # local accel_module 00:06:46.412 
00:40:58 -- accel/accel.sh@18 -- # accel_perf -t 1 -w xor -y 00:06:46.412 00:40:58 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y 00:06:46.412 00:40:58 -- accel/accel.sh@12 -- # build_accel_config 00:06:46.412 00:40:58 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:46.412 00:40:58 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:46.412 00:40:58 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:46.412 00:40:58 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:46.412 00:40:58 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:46.412 00:40:58 -- accel/accel.sh@41 -- # local IFS=, 00:06:46.412 00:40:58 -- accel/accel.sh@42 -- # jq -r . 00:06:46.412 [2024-12-03 00:40:58.695174] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:46.412 [2024-12-03 00:40:58.695265] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70900 ] 00:06:46.412 [2024-12-03 00:40:58.825547] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:46.412 [2024-12-03 00:40:58.896720] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:47.789 00:41:00 -- accel/accel.sh@18 -- # out=' 00:06:47.789 SPDK Configuration: 00:06:47.789 Core mask: 0x1 00:06:47.789 00:06:47.789 Accel Perf Configuration: 00:06:47.789 Workload Type: xor 00:06:47.789 Source buffers: 2 00:06:47.789 Transfer size: 4096 bytes 00:06:47.789 Vector count 1 00:06:47.789 Module: software 00:06:47.789 Queue depth: 32 00:06:47.789 Allocate depth: 32 00:06:47.789 # threads/core: 1 00:06:47.789 Run time: 1 seconds 00:06:47.789 Verify: Yes 00:06:47.789 00:06:47.789 Running for 1 seconds... 00:06:47.789 00:06:47.789 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:47.789 ------------------------------------------------------------------------------------ 00:06:47.789 0,0 293984/s 1148 MiB/s 0 0 00:06:47.789 ==================================================================================== 00:06:47.789 Total 293984/s 1148 MiB/s 0 0' 00:06:47.789 00:41:00 -- accel/accel.sh@20 -- # IFS=: 00:06:47.789 00:41:00 -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y 00:06:47.789 00:41:00 -- accel/accel.sh@20 -- # read -r var val 00:06:47.789 00:41:00 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y 00:06:47.789 00:41:00 -- accel/accel.sh@12 -- # build_accel_config 00:06:47.789 00:41:00 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:47.789 00:41:00 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:47.789 00:41:00 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:47.789 00:41:00 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:47.789 00:41:00 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:47.789 00:41:00 -- accel/accel.sh@41 -- # local IFS=, 00:06:47.789 00:41:00 -- accel/accel.sh@42 -- # jq -r . 00:06:47.789 [2024-12-03 00:41:00.105054] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:06:47.789 [2024-12-03 00:41:00.105151] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70919 ] 00:06:47.789 [2024-12-03 00:41:00.237246] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:47.789 [2024-12-03 00:41:00.289866] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:48.048 00:41:00 -- accel/accel.sh@21 -- # val= 00:06:48.048 00:41:00 -- accel/accel.sh@22 -- # case "$var" in 00:06:48.048 00:41:00 -- accel/accel.sh@20 -- # IFS=: 00:06:48.048 00:41:00 -- accel/accel.sh@20 -- # read -r var val 00:06:48.048 00:41:00 -- accel/accel.sh@21 -- # val= 00:06:48.048 00:41:00 -- accel/accel.sh@22 -- # case "$var" in 00:06:48.048 00:41:00 -- accel/accel.sh@20 -- # IFS=: 00:06:48.048 00:41:00 -- accel/accel.sh@20 -- # read -r var val 00:06:48.048 00:41:00 -- accel/accel.sh@21 -- # val=0x1 00:06:48.048 00:41:00 -- accel/accel.sh@22 -- # case "$var" in 00:06:48.048 00:41:00 -- accel/accel.sh@20 -- # IFS=: 00:06:48.048 00:41:00 -- accel/accel.sh@20 -- # read -r var val 00:06:48.048 00:41:00 -- accel/accel.sh@21 -- # val= 00:06:48.048 00:41:00 -- accel/accel.sh@22 -- # case "$var" in 00:06:48.048 00:41:00 -- accel/accel.sh@20 -- # IFS=: 00:06:48.048 00:41:00 -- accel/accel.sh@20 -- # read -r var val 00:06:48.048 00:41:00 -- accel/accel.sh@21 -- # val= 00:06:48.048 00:41:00 -- accel/accel.sh@22 -- # case "$var" in 00:06:48.048 00:41:00 -- accel/accel.sh@20 -- # IFS=: 00:06:48.048 00:41:00 -- accel/accel.sh@20 -- # read -r var val 00:06:48.048 00:41:00 -- accel/accel.sh@21 -- # val=xor 00:06:48.048 00:41:00 -- accel/accel.sh@22 -- # case "$var" in 00:06:48.048 00:41:00 -- accel/accel.sh@24 -- # accel_opc=xor 00:06:48.048 00:41:00 -- accel/accel.sh@20 -- # IFS=: 00:06:48.048 00:41:00 -- accel/accel.sh@20 -- # read -r var val 00:06:48.048 00:41:00 -- accel/accel.sh@21 -- # val=2 00:06:48.048 00:41:00 -- accel/accel.sh@22 -- # case "$var" in 00:06:48.048 00:41:00 -- accel/accel.sh@20 -- # IFS=: 00:06:48.048 00:41:00 -- accel/accel.sh@20 -- # read -r var val 00:06:48.048 00:41:00 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:48.048 00:41:00 -- accel/accel.sh@22 -- # case "$var" in 00:06:48.048 00:41:00 -- accel/accel.sh@20 -- # IFS=: 00:06:48.048 00:41:00 -- accel/accel.sh@20 -- # read -r var val 00:06:48.048 00:41:00 -- accel/accel.sh@21 -- # val= 00:06:48.048 00:41:00 -- accel/accel.sh@22 -- # case "$var" in 00:06:48.048 00:41:00 -- accel/accel.sh@20 -- # IFS=: 00:06:48.048 00:41:00 -- accel/accel.sh@20 -- # read -r var val 00:06:48.048 00:41:00 -- accel/accel.sh@21 -- # val=software 00:06:48.048 00:41:00 -- accel/accel.sh@22 -- # case "$var" in 00:06:48.048 00:41:00 -- accel/accel.sh@23 -- # accel_module=software 00:06:48.048 00:41:00 -- accel/accel.sh@20 -- # IFS=: 00:06:48.048 00:41:00 -- accel/accel.sh@20 -- # read -r var val 00:06:48.048 00:41:00 -- accel/accel.sh@21 -- # val=32 00:06:48.048 00:41:00 -- accel/accel.sh@22 -- # case "$var" in 00:06:48.048 00:41:00 -- accel/accel.sh@20 -- # IFS=: 00:06:48.048 00:41:00 -- accel/accel.sh@20 -- # read -r var val 00:06:48.048 00:41:00 -- accel/accel.sh@21 -- # val=32 00:06:48.048 00:41:00 -- accel/accel.sh@22 -- # case "$var" in 00:06:48.048 00:41:00 -- accel/accel.sh@20 -- # IFS=: 00:06:48.048 00:41:00 -- accel/accel.sh@20 -- # read -r var val 00:06:48.048 00:41:00 -- accel/accel.sh@21 -- # val=1 00:06:48.048 00:41:00 -- 
accel/accel.sh@22 -- # case "$var" in 00:06:48.048 00:41:00 -- accel/accel.sh@20 -- # IFS=: 00:06:48.048 00:41:00 -- accel/accel.sh@20 -- # read -r var val 00:06:48.048 00:41:00 -- accel/accel.sh@21 -- # val='1 seconds' 00:06:48.048 00:41:00 -- accel/accel.sh@22 -- # case "$var" in 00:06:48.048 00:41:00 -- accel/accel.sh@20 -- # IFS=: 00:06:48.048 00:41:00 -- accel/accel.sh@20 -- # read -r var val 00:06:48.049 00:41:00 -- accel/accel.sh@21 -- # val=Yes 00:06:48.049 00:41:00 -- accel/accel.sh@22 -- # case "$var" in 00:06:48.049 00:41:00 -- accel/accel.sh@20 -- # IFS=: 00:06:48.049 00:41:00 -- accel/accel.sh@20 -- # read -r var val 00:06:48.049 00:41:00 -- accel/accel.sh@21 -- # val= 00:06:48.049 00:41:00 -- accel/accel.sh@22 -- # case "$var" in 00:06:48.049 00:41:00 -- accel/accel.sh@20 -- # IFS=: 00:06:48.049 00:41:00 -- accel/accel.sh@20 -- # read -r var val 00:06:48.049 00:41:00 -- accel/accel.sh@21 -- # val= 00:06:48.049 00:41:00 -- accel/accel.sh@22 -- # case "$var" in 00:06:48.049 00:41:00 -- accel/accel.sh@20 -- # IFS=: 00:06:48.049 00:41:00 -- accel/accel.sh@20 -- # read -r var val 00:06:48.985 00:41:01 -- accel/accel.sh@21 -- # val= 00:06:48.985 00:41:01 -- accel/accel.sh@22 -- # case "$var" in 00:06:48.985 00:41:01 -- accel/accel.sh@20 -- # IFS=: 00:06:48.985 00:41:01 -- accel/accel.sh@20 -- # read -r var val 00:06:48.985 00:41:01 -- accel/accel.sh@21 -- # val= 00:06:48.985 00:41:01 -- accel/accel.sh@22 -- # case "$var" in 00:06:48.985 00:41:01 -- accel/accel.sh@20 -- # IFS=: 00:06:48.985 00:41:01 -- accel/accel.sh@20 -- # read -r var val 00:06:48.985 00:41:01 -- accel/accel.sh@21 -- # val= 00:06:48.985 00:41:01 -- accel/accel.sh@22 -- # case "$var" in 00:06:48.985 00:41:01 -- accel/accel.sh@20 -- # IFS=: 00:06:48.985 00:41:01 -- accel/accel.sh@20 -- # read -r var val 00:06:48.985 00:41:01 -- accel/accel.sh@21 -- # val= 00:06:48.985 00:41:01 -- accel/accel.sh@22 -- # case "$var" in 00:06:48.985 00:41:01 -- accel/accel.sh@20 -- # IFS=: 00:06:48.985 00:41:01 -- accel/accel.sh@20 -- # read -r var val 00:06:48.985 00:41:01 -- accel/accel.sh@21 -- # val= 00:06:48.985 00:41:01 -- accel/accel.sh@22 -- # case "$var" in 00:06:48.985 00:41:01 -- accel/accel.sh@20 -- # IFS=: 00:06:48.985 00:41:01 -- accel/accel.sh@20 -- # read -r var val 00:06:48.985 ************************************ 00:06:48.985 END TEST accel_xor 00:06:48.985 ************************************ 00:06:48.985 00:41:01 -- accel/accel.sh@21 -- # val= 00:06:48.985 00:41:01 -- accel/accel.sh@22 -- # case "$var" in 00:06:48.985 00:41:01 -- accel/accel.sh@20 -- # IFS=: 00:06:48.985 00:41:01 -- accel/accel.sh@20 -- # read -r var val 00:06:48.986 00:41:01 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:48.986 00:41:01 -- accel/accel.sh@28 -- # [[ -n xor ]] 00:06:48.986 00:41:01 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:48.986 00:06:48.986 real 0m2.807s 00:06:48.986 user 0m2.393s 00:06:48.986 sys 0m0.216s 00:06:48.986 00:41:01 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:48.986 00:41:01 -- common/autotest_common.sh@10 -- # set +x 00:06:49.244 00:41:01 -- accel/accel.sh@102 -- # run_test accel_xor accel_test -t 1 -w xor -y -x 3 00:06:49.245 00:41:01 -- common/autotest_common.sh@1087 -- # '[' 9 -le 1 ']' 00:06:49.245 00:41:01 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:49.245 00:41:01 -- common/autotest_common.sh@10 -- # set +x 00:06:49.245 ************************************ 00:06:49.245 START TEST accel_xor 00:06:49.245 ************************************ 00:06:49.245 
00:41:01 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w xor -y -x 3 00:06:49.245 00:41:01 -- accel/accel.sh@16 -- # local accel_opc 00:06:49.245 00:41:01 -- accel/accel.sh@17 -- # local accel_module 00:06:49.245 00:41:01 -- accel/accel.sh@18 -- # accel_perf -t 1 -w xor -y -x 3 00:06:49.245 00:41:01 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x 3 00:06:49.245 00:41:01 -- accel/accel.sh@12 -- # build_accel_config 00:06:49.245 00:41:01 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:49.245 00:41:01 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:49.245 00:41:01 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:49.245 00:41:01 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:49.245 00:41:01 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:49.245 00:41:01 -- accel/accel.sh@41 -- # local IFS=, 00:06:49.245 00:41:01 -- accel/accel.sh@42 -- # jq -r . 00:06:49.245 [2024-12-03 00:41:01.560607] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:49.245 [2024-12-03 00:41:01.560707] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70954 ] 00:06:49.245 [2024-12-03 00:41:01.689226] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:49.245 [2024-12-03 00:41:01.741099] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:50.621 00:41:02 -- accel/accel.sh@18 -- # out=' 00:06:50.621 SPDK Configuration: 00:06:50.621 Core mask: 0x1 00:06:50.621 00:06:50.621 Accel Perf Configuration: 00:06:50.621 Workload Type: xor 00:06:50.621 Source buffers: 3 00:06:50.621 Transfer size: 4096 bytes 00:06:50.621 Vector count 1 00:06:50.621 Module: software 00:06:50.621 Queue depth: 32 00:06:50.621 Allocate depth: 32 00:06:50.621 # threads/core: 1 00:06:50.621 Run time: 1 seconds 00:06:50.621 Verify: Yes 00:06:50.621 00:06:50.621 Running for 1 seconds... 00:06:50.621 00:06:50.621 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:50.621 ------------------------------------------------------------------------------------ 00:06:50.621 0,0 280128/s 1094 MiB/s 0 0 00:06:50.621 ==================================================================================== 00:06:50.621 Total 280128/s 1094 MiB/s 0 0' 00:06:50.621 00:41:02 -- accel/accel.sh@20 -- # IFS=: 00:06:50.621 00:41:02 -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y -x 3 00:06:50.621 00:41:02 -- accel/accel.sh@20 -- # read -r var val 00:06:50.621 00:41:02 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x 3 00:06:50.621 00:41:02 -- accel/accel.sh@12 -- # build_accel_config 00:06:50.621 00:41:02 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:50.621 00:41:02 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:50.621 00:41:02 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:50.621 00:41:02 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:50.621 00:41:02 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:50.621 00:41:02 -- accel/accel.sh@41 -- # local IFS=, 00:06:50.621 00:41:02 -- accel/accel.sh@42 -- # jq -r . 00:06:50.621 [2024-12-03 00:41:02.946531] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
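Judging by the two configuration dumps above, the -x flag sets the number of xor source buffers (2 in the default run, 3 when -x 3 is passed), and the extra source buffer costs a little software throughput here (293984/s vs 280128/s). The two invocations side by side, under the same path assumption as earlier:

  ACCEL_PERF=/home/vagrant/spdk_repo/spdk/build/examples/accel_perf
  "$ACCEL_PERF" -t 1 -w xor -y          # 2 source buffers (default)  -> 293984 transfers/s above
  "$ACCEL_PERF" -t 1 -w xor -y -x 3     # 3 source buffers            -> 280128 transfers/s above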
00:06:50.621 [2024-12-03 00:41:02.946788] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70968 ] 00:06:50.621 [2024-12-03 00:41:03.079771] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:50.621 [2024-12-03 00:41:03.134879] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:50.879 00:41:03 -- accel/accel.sh@21 -- # val= 00:06:50.879 00:41:03 -- accel/accel.sh@22 -- # case "$var" in 00:06:50.879 00:41:03 -- accel/accel.sh@20 -- # IFS=: 00:06:50.879 00:41:03 -- accel/accel.sh@20 -- # read -r var val 00:06:50.879 00:41:03 -- accel/accel.sh@21 -- # val= 00:06:50.879 00:41:03 -- accel/accel.sh@22 -- # case "$var" in 00:06:50.879 00:41:03 -- accel/accel.sh@20 -- # IFS=: 00:06:50.879 00:41:03 -- accel/accel.sh@20 -- # read -r var val 00:06:50.879 00:41:03 -- accel/accel.sh@21 -- # val=0x1 00:06:50.879 00:41:03 -- accel/accel.sh@22 -- # case "$var" in 00:06:50.879 00:41:03 -- accel/accel.sh@20 -- # IFS=: 00:06:50.879 00:41:03 -- accel/accel.sh@20 -- # read -r var val 00:06:50.879 00:41:03 -- accel/accel.sh@21 -- # val= 00:06:50.879 00:41:03 -- accel/accel.sh@22 -- # case "$var" in 00:06:50.879 00:41:03 -- accel/accel.sh@20 -- # IFS=: 00:06:50.879 00:41:03 -- accel/accel.sh@20 -- # read -r var val 00:06:50.879 00:41:03 -- accel/accel.sh@21 -- # val= 00:06:50.879 00:41:03 -- accel/accel.sh@22 -- # case "$var" in 00:06:50.879 00:41:03 -- accel/accel.sh@20 -- # IFS=: 00:06:50.879 00:41:03 -- accel/accel.sh@20 -- # read -r var val 00:06:50.879 00:41:03 -- accel/accel.sh@21 -- # val=xor 00:06:50.879 00:41:03 -- accel/accel.sh@22 -- # case "$var" in 00:06:50.879 00:41:03 -- accel/accel.sh@24 -- # accel_opc=xor 00:06:50.879 00:41:03 -- accel/accel.sh@20 -- # IFS=: 00:06:50.879 00:41:03 -- accel/accel.sh@20 -- # read -r var val 00:06:50.879 00:41:03 -- accel/accel.sh@21 -- # val=3 00:06:50.879 00:41:03 -- accel/accel.sh@22 -- # case "$var" in 00:06:50.879 00:41:03 -- accel/accel.sh@20 -- # IFS=: 00:06:50.879 00:41:03 -- accel/accel.sh@20 -- # read -r var val 00:06:50.879 00:41:03 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:50.879 00:41:03 -- accel/accel.sh@22 -- # case "$var" in 00:06:50.879 00:41:03 -- accel/accel.sh@20 -- # IFS=: 00:06:50.879 00:41:03 -- accel/accel.sh@20 -- # read -r var val 00:06:50.879 00:41:03 -- accel/accel.sh@21 -- # val= 00:06:50.879 00:41:03 -- accel/accel.sh@22 -- # case "$var" in 00:06:50.879 00:41:03 -- accel/accel.sh@20 -- # IFS=: 00:06:50.879 00:41:03 -- accel/accel.sh@20 -- # read -r var val 00:06:50.879 00:41:03 -- accel/accel.sh@21 -- # val=software 00:06:50.879 00:41:03 -- accel/accel.sh@22 -- # case "$var" in 00:06:50.879 00:41:03 -- accel/accel.sh@23 -- # accel_module=software 00:06:50.879 00:41:03 -- accel/accel.sh@20 -- # IFS=: 00:06:50.879 00:41:03 -- accel/accel.sh@20 -- # read -r var val 00:06:50.879 00:41:03 -- accel/accel.sh@21 -- # val=32 00:06:50.879 00:41:03 -- accel/accel.sh@22 -- # case "$var" in 00:06:50.879 00:41:03 -- accel/accel.sh@20 -- # IFS=: 00:06:50.879 00:41:03 -- accel/accel.sh@20 -- # read -r var val 00:06:50.879 00:41:03 -- accel/accel.sh@21 -- # val=32 00:06:50.879 00:41:03 -- accel/accel.sh@22 -- # case "$var" in 00:06:50.879 00:41:03 -- accel/accel.sh@20 -- # IFS=: 00:06:50.880 00:41:03 -- accel/accel.sh@20 -- # read -r var val 00:06:50.880 00:41:03 -- accel/accel.sh@21 -- # val=1 00:06:50.880 00:41:03 -- 
accel/accel.sh@22 -- # case "$var" in 00:06:50.880 00:41:03 -- accel/accel.sh@20 -- # IFS=: 00:06:50.880 00:41:03 -- accel/accel.sh@20 -- # read -r var val 00:06:50.880 00:41:03 -- accel/accel.sh@21 -- # val='1 seconds' 00:06:50.880 00:41:03 -- accel/accel.sh@22 -- # case "$var" in 00:06:50.880 00:41:03 -- accel/accel.sh@20 -- # IFS=: 00:06:50.880 00:41:03 -- accel/accel.sh@20 -- # read -r var val 00:06:50.880 00:41:03 -- accel/accel.sh@21 -- # val=Yes 00:06:50.880 00:41:03 -- accel/accel.sh@22 -- # case "$var" in 00:06:50.880 00:41:03 -- accel/accel.sh@20 -- # IFS=: 00:06:50.880 00:41:03 -- accel/accel.sh@20 -- # read -r var val 00:06:50.880 00:41:03 -- accel/accel.sh@21 -- # val= 00:06:50.880 00:41:03 -- accel/accel.sh@22 -- # case "$var" in 00:06:50.880 00:41:03 -- accel/accel.sh@20 -- # IFS=: 00:06:50.880 00:41:03 -- accel/accel.sh@20 -- # read -r var val 00:06:50.880 00:41:03 -- accel/accel.sh@21 -- # val= 00:06:50.880 00:41:03 -- accel/accel.sh@22 -- # case "$var" in 00:06:50.880 00:41:03 -- accel/accel.sh@20 -- # IFS=: 00:06:50.880 00:41:03 -- accel/accel.sh@20 -- # read -r var val 00:06:51.814 00:41:04 -- accel/accel.sh@21 -- # val= 00:06:51.814 00:41:04 -- accel/accel.sh@22 -- # case "$var" in 00:06:51.814 00:41:04 -- accel/accel.sh@20 -- # IFS=: 00:06:51.814 00:41:04 -- accel/accel.sh@20 -- # read -r var val 00:06:51.814 00:41:04 -- accel/accel.sh@21 -- # val= 00:06:51.814 00:41:04 -- accel/accel.sh@22 -- # case "$var" in 00:06:51.814 00:41:04 -- accel/accel.sh@20 -- # IFS=: 00:06:51.814 00:41:04 -- accel/accel.sh@20 -- # read -r var val 00:06:51.814 00:41:04 -- accel/accel.sh@21 -- # val= 00:06:51.814 00:41:04 -- accel/accel.sh@22 -- # case "$var" in 00:06:51.814 00:41:04 -- accel/accel.sh@20 -- # IFS=: 00:06:51.814 00:41:04 -- accel/accel.sh@20 -- # read -r var val 00:06:51.814 00:41:04 -- accel/accel.sh@21 -- # val= 00:06:51.814 00:41:04 -- accel/accel.sh@22 -- # case "$var" in 00:06:51.814 00:41:04 -- accel/accel.sh@20 -- # IFS=: 00:06:51.814 00:41:04 -- accel/accel.sh@20 -- # read -r var val 00:06:51.814 00:41:04 -- accel/accel.sh@21 -- # val= 00:06:51.814 00:41:04 -- accel/accel.sh@22 -- # case "$var" in 00:06:51.814 00:41:04 -- accel/accel.sh@20 -- # IFS=: 00:06:51.814 00:41:04 -- accel/accel.sh@20 -- # read -r var val 00:06:51.814 00:41:04 -- accel/accel.sh@21 -- # val= 00:06:51.814 00:41:04 -- accel/accel.sh@22 -- # case "$var" in 00:06:51.814 00:41:04 -- accel/accel.sh@20 -- # IFS=: 00:06:51.814 00:41:04 -- accel/accel.sh@20 -- # read -r var val 00:06:51.814 00:41:04 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:51.814 00:41:04 -- accel/accel.sh@28 -- # [[ -n xor ]] 00:06:51.814 00:41:04 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:51.814 00:06:51.814 real 0m2.788s 00:06:51.814 user 0m2.386s 00:06:51.814 sys 0m0.200s 00:06:51.814 00:41:04 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:51.814 ************************************ 00:06:51.814 END TEST accel_xor 00:06:51.814 ************************************ 00:06:51.814 00:41:04 -- common/autotest_common.sh@10 -- # set +x 00:06:52.073 00:41:04 -- accel/accel.sh@103 -- # run_test accel_dif_verify accel_test -t 1 -w dif_verify 00:06:52.073 00:41:04 -- common/autotest_common.sh@1087 -- # '[' 6 -le 1 ']' 00:06:52.073 00:41:04 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:52.073 00:41:04 -- common/autotest_common.sh@10 -- # set +x 00:06:52.073 ************************************ 00:06:52.073 START TEST accel_dif_verify 00:06:52.073 ************************************ 
00:06:52.073 00:41:04 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w dif_verify 00:06:52.073 00:41:04 -- accel/accel.sh@16 -- # local accel_opc 00:06:52.073 00:41:04 -- accel/accel.sh@17 -- # local accel_module 00:06:52.073 00:41:04 -- accel/accel.sh@18 -- # accel_perf -t 1 -w dif_verify 00:06:52.073 00:41:04 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_verify 00:06:52.073 00:41:04 -- accel/accel.sh@12 -- # build_accel_config 00:06:52.073 00:41:04 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:52.073 00:41:04 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:52.073 00:41:04 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:52.073 00:41:04 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:52.073 00:41:04 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:52.073 00:41:04 -- accel/accel.sh@41 -- # local IFS=, 00:06:52.073 00:41:04 -- accel/accel.sh@42 -- # jq -r . 00:06:52.073 [2024-12-03 00:41:04.400041] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:52.073 [2024-12-03 00:41:04.400144] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71002 ] 00:06:52.073 [2024-12-03 00:41:04.536041] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:52.332 [2024-12-03 00:41:04.596286] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:53.706 00:41:05 -- accel/accel.sh@18 -- # out=' 00:06:53.706 SPDK Configuration: 00:06:53.706 Core mask: 0x1 00:06:53.706 00:06:53.706 Accel Perf Configuration: 00:06:53.706 Workload Type: dif_verify 00:06:53.706 Vector size: 4096 bytes 00:06:53.706 Transfer size: 4096 bytes 00:06:53.706 Block size: 512 bytes 00:06:53.706 Metadata size: 8 bytes 00:06:53.706 Vector count 1 00:06:53.706 Module: software 00:06:53.706 Queue depth: 32 00:06:53.706 Allocate depth: 32 00:06:53.706 # threads/core: 1 00:06:53.706 Run time: 1 seconds 00:06:53.706 Verify: No 00:06:53.706 00:06:53.706 Running for 1 seconds... 00:06:53.706 00:06:53.706 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:53.706 ------------------------------------------------------------------------------------ 00:06:53.706 0,0 125632/s 498 MiB/s 0 0 00:06:53.706 ==================================================================================== 00:06:53.706 Total 125632/s 490 MiB/s 0 0' 00:06:53.706 00:41:05 -- accel/accel.sh@20 -- # IFS=: 00:06:53.706 00:41:05 -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_verify 00:06:53.706 00:41:05 -- accel/accel.sh@20 -- # read -r var val 00:06:53.706 00:41:05 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_verify 00:06:53.706 00:41:05 -- accel/accel.sh@12 -- # build_accel_config 00:06:53.706 00:41:05 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:53.706 00:41:05 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:53.706 00:41:05 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:53.706 00:41:05 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:53.706 00:41:05 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:53.706 00:41:05 -- accel/accel.sh@41 -- # local IFS=, 00:06:53.706 00:41:05 -- accel/accel.sh@42 -- # jq -r . 00:06:53.706 [2024-12-03 00:41:05.821385] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
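For dif_verify the per-core row (498 MiB/s) and the Total row (490 MiB/s) differ even though both come from the same 125632 transfers/s. One plausible reading, given the 512-byte block size and 8-byte metadata size in the configuration above, is that the per-core figure counts the DIF metadata (4096 + 8*8 = 4160 bytes per transfer) while the Total row counts payload only; a quick check of both readings in plain shell:

  awk 'BEGIN { printf "payload only : %.1f MiB/s\n", 125632 * 4096 / (1024 * 1024) }'          # ~490.8, matches the Total row
  awk 'BEGIN { printf "with metadata: %.1f MiB/s\n", 125632 * (4096 + 64) / (1024 * 1024) }'   # ~498.4, matches the per-core row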
00:06:53.706 [2024-12-03 00:41:05.821506] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71022 ] 00:06:53.706 [2024-12-03 00:41:05.960066] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:53.706 [2024-12-03 00:41:06.017138] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:53.706 00:41:06 -- accel/accel.sh@21 -- # val= 00:06:53.706 00:41:06 -- accel/accel.sh@22 -- # case "$var" in 00:06:53.706 00:41:06 -- accel/accel.sh@20 -- # IFS=: 00:06:53.706 00:41:06 -- accel/accel.sh@20 -- # read -r var val 00:06:53.706 00:41:06 -- accel/accel.sh@21 -- # val= 00:06:53.706 00:41:06 -- accel/accel.sh@22 -- # case "$var" in 00:06:53.706 00:41:06 -- accel/accel.sh@20 -- # IFS=: 00:06:53.706 00:41:06 -- accel/accel.sh@20 -- # read -r var val 00:06:53.706 00:41:06 -- accel/accel.sh@21 -- # val=0x1 00:06:53.706 00:41:06 -- accel/accel.sh@22 -- # case "$var" in 00:06:53.706 00:41:06 -- accel/accel.sh@20 -- # IFS=: 00:06:53.706 00:41:06 -- accel/accel.sh@20 -- # read -r var val 00:06:53.706 00:41:06 -- accel/accel.sh@21 -- # val= 00:06:53.706 00:41:06 -- accel/accel.sh@22 -- # case "$var" in 00:06:53.706 00:41:06 -- accel/accel.sh@20 -- # IFS=: 00:06:53.706 00:41:06 -- accel/accel.sh@20 -- # read -r var val 00:06:53.706 00:41:06 -- accel/accel.sh@21 -- # val= 00:06:53.706 00:41:06 -- accel/accel.sh@22 -- # case "$var" in 00:06:53.706 00:41:06 -- accel/accel.sh@20 -- # IFS=: 00:06:53.706 00:41:06 -- accel/accel.sh@20 -- # read -r var val 00:06:53.706 00:41:06 -- accel/accel.sh@21 -- # val=dif_verify 00:06:53.706 00:41:06 -- accel/accel.sh@22 -- # case "$var" in 00:06:53.706 00:41:06 -- accel/accel.sh@24 -- # accel_opc=dif_verify 00:06:53.706 00:41:06 -- accel/accel.sh@20 -- # IFS=: 00:06:53.706 00:41:06 -- accel/accel.sh@20 -- # read -r var val 00:06:53.706 00:41:06 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:53.706 00:41:06 -- accel/accel.sh@22 -- # case "$var" in 00:06:53.706 00:41:06 -- accel/accel.sh@20 -- # IFS=: 00:06:53.706 00:41:06 -- accel/accel.sh@20 -- # read -r var val 00:06:53.706 00:41:06 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:53.706 00:41:06 -- accel/accel.sh@22 -- # case "$var" in 00:06:53.706 00:41:06 -- accel/accel.sh@20 -- # IFS=: 00:06:53.706 00:41:06 -- accel/accel.sh@20 -- # read -r var val 00:06:53.706 00:41:06 -- accel/accel.sh@21 -- # val='512 bytes' 00:06:53.706 00:41:06 -- accel/accel.sh@22 -- # case "$var" in 00:06:53.706 00:41:06 -- accel/accel.sh@20 -- # IFS=: 00:06:53.706 00:41:06 -- accel/accel.sh@20 -- # read -r var val 00:06:53.706 00:41:06 -- accel/accel.sh@21 -- # val='8 bytes' 00:06:53.706 00:41:06 -- accel/accel.sh@22 -- # case "$var" in 00:06:53.706 00:41:06 -- accel/accel.sh@20 -- # IFS=: 00:06:53.706 00:41:06 -- accel/accel.sh@20 -- # read -r var val 00:06:53.706 00:41:06 -- accel/accel.sh@21 -- # val= 00:06:53.706 00:41:06 -- accel/accel.sh@22 -- # case "$var" in 00:06:53.706 00:41:06 -- accel/accel.sh@20 -- # IFS=: 00:06:53.706 00:41:06 -- accel/accel.sh@20 -- # read -r var val 00:06:53.706 00:41:06 -- accel/accel.sh@21 -- # val=software 00:06:53.706 00:41:06 -- accel/accel.sh@22 -- # case "$var" in 00:06:53.706 00:41:06 -- accel/accel.sh@23 -- # accel_module=software 00:06:53.706 00:41:06 -- accel/accel.sh@20 -- # IFS=: 00:06:53.706 00:41:06 -- accel/accel.sh@20 -- # read -r var val 00:06:53.706 00:41:06 -- accel/accel.sh@21 
-- # val=32 00:06:53.706 00:41:06 -- accel/accel.sh@22 -- # case "$var" in 00:06:53.706 00:41:06 -- accel/accel.sh@20 -- # IFS=: 00:06:53.706 00:41:06 -- accel/accel.sh@20 -- # read -r var val 00:06:53.706 00:41:06 -- accel/accel.sh@21 -- # val=32 00:06:53.706 00:41:06 -- accel/accel.sh@22 -- # case "$var" in 00:06:53.706 00:41:06 -- accel/accel.sh@20 -- # IFS=: 00:06:53.706 00:41:06 -- accel/accel.sh@20 -- # read -r var val 00:06:53.706 00:41:06 -- accel/accel.sh@21 -- # val=1 00:06:53.706 00:41:06 -- accel/accel.sh@22 -- # case "$var" in 00:06:53.706 00:41:06 -- accel/accel.sh@20 -- # IFS=: 00:06:53.706 00:41:06 -- accel/accel.sh@20 -- # read -r var val 00:06:53.706 00:41:06 -- accel/accel.sh@21 -- # val='1 seconds' 00:06:53.706 00:41:06 -- accel/accel.sh@22 -- # case "$var" in 00:06:53.706 00:41:06 -- accel/accel.sh@20 -- # IFS=: 00:06:53.706 00:41:06 -- accel/accel.sh@20 -- # read -r var val 00:06:53.706 00:41:06 -- accel/accel.sh@21 -- # val=No 00:06:53.706 00:41:06 -- accel/accel.sh@22 -- # case "$var" in 00:06:53.706 00:41:06 -- accel/accel.sh@20 -- # IFS=: 00:06:53.706 00:41:06 -- accel/accel.sh@20 -- # read -r var val 00:06:53.706 00:41:06 -- accel/accel.sh@21 -- # val= 00:06:53.706 00:41:06 -- accel/accel.sh@22 -- # case "$var" in 00:06:53.706 00:41:06 -- accel/accel.sh@20 -- # IFS=: 00:06:53.706 00:41:06 -- accel/accel.sh@20 -- # read -r var val 00:06:53.706 00:41:06 -- accel/accel.sh@21 -- # val= 00:06:53.706 00:41:06 -- accel/accel.sh@22 -- # case "$var" in 00:06:53.706 00:41:06 -- accel/accel.sh@20 -- # IFS=: 00:06:53.706 00:41:06 -- accel/accel.sh@20 -- # read -r var val 00:06:55.157 00:41:07 -- accel/accel.sh@21 -- # val= 00:06:55.157 00:41:07 -- accel/accel.sh@22 -- # case "$var" in 00:06:55.157 00:41:07 -- accel/accel.sh@20 -- # IFS=: 00:06:55.157 00:41:07 -- accel/accel.sh@20 -- # read -r var val 00:06:55.157 00:41:07 -- accel/accel.sh@21 -- # val= 00:06:55.157 00:41:07 -- accel/accel.sh@22 -- # case "$var" in 00:06:55.157 00:41:07 -- accel/accel.sh@20 -- # IFS=: 00:06:55.157 00:41:07 -- accel/accel.sh@20 -- # read -r var val 00:06:55.157 00:41:07 -- accel/accel.sh@21 -- # val= 00:06:55.157 00:41:07 -- accel/accel.sh@22 -- # case "$var" in 00:06:55.157 00:41:07 -- accel/accel.sh@20 -- # IFS=: 00:06:55.157 00:41:07 -- accel/accel.sh@20 -- # read -r var val 00:06:55.157 00:41:07 -- accel/accel.sh@21 -- # val= 00:06:55.157 00:41:07 -- accel/accel.sh@22 -- # case "$var" in 00:06:55.157 00:41:07 -- accel/accel.sh@20 -- # IFS=: 00:06:55.157 00:41:07 -- accel/accel.sh@20 -- # read -r var val 00:06:55.157 00:41:07 -- accel/accel.sh@21 -- # val= 00:06:55.157 00:41:07 -- accel/accel.sh@22 -- # case "$var" in 00:06:55.157 00:41:07 -- accel/accel.sh@20 -- # IFS=: 00:06:55.157 00:41:07 -- accel/accel.sh@20 -- # read -r var val 00:06:55.157 00:41:07 -- accel/accel.sh@21 -- # val= 00:06:55.157 00:41:07 -- accel/accel.sh@22 -- # case "$var" in 00:06:55.157 00:41:07 -- accel/accel.sh@20 -- # IFS=: 00:06:55.157 00:41:07 -- accel/accel.sh@20 -- # read -r var val 00:06:55.157 00:41:07 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:55.157 00:41:07 -- accel/accel.sh@28 -- # [[ -n dif_verify ]] 00:06:55.157 00:41:07 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:55.157 00:06:55.157 real 0m2.831s 00:06:55.157 user 0m2.408s 00:06:55.157 sys 0m0.224s 00:06:55.157 00:41:07 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:55.157 00:41:07 -- common/autotest_common.sh@10 -- # set +x 00:06:55.157 ************************************ 00:06:55.157 END TEST 
accel_dif_verify 00:06:55.157 ************************************ 00:06:55.157 00:41:07 -- accel/accel.sh@104 -- # run_test accel_dif_generate accel_test -t 1 -w dif_generate 00:06:55.157 00:41:07 -- common/autotest_common.sh@1087 -- # '[' 6 -le 1 ']' 00:06:55.157 00:41:07 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:55.157 00:41:07 -- common/autotest_common.sh@10 -- # set +x 00:06:55.157 ************************************ 00:06:55.157 START TEST accel_dif_generate 00:06:55.157 ************************************ 00:06:55.157 00:41:07 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w dif_generate 00:06:55.157 00:41:07 -- accel/accel.sh@16 -- # local accel_opc 00:06:55.157 00:41:07 -- accel/accel.sh@17 -- # local accel_module 00:06:55.157 00:41:07 -- accel/accel.sh@18 -- # accel_perf -t 1 -w dif_generate 00:06:55.157 00:41:07 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate 00:06:55.157 00:41:07 -- accel/accel.sh@12 -- # build_accel_config 00:06:55.157 00:41:07 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:55.157 00:41:07 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:55.157 00:41:07 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:55.157 00:41:07 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:55.157 00:41:07 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:55.157 00:41:07 -- accel/accel.sh@41 -- # local IFS=, 00:06:55.157 00:41:07 -- accel/accel.sh@42 -- # jq -r . 00:06:55.157 [2024-12-03 00:41:07.277153] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:55.157 [2024-12-03 00:41:07.277246] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71055 ] 00:06:55.157 [2024-12-03 00:41:07.415595] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:55.157 [2024-12-03 00:41:07.474577] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:56.535 00:41:08 -- accel/accel.sh@18 -- # out=' 00:06:56.535 SPDK Configuration: 00:06:56.535 Core mask: 0x1 00:06:56.535 00:06:56.535 Accel Perf Configuration: 00:06:56.535 Workload Type: dif_generate 00:06:56.535 Vector size: 4096 bytes 00:06:56.535 Transfer size: 4096 bytes 00:06:56.535 Block size: 512 bytes 00:06:56.535 Metadata size: 8 bytes 00:06:56.535 Vector count 1 00:06:56.535 Module: software 00:06:56.535 Queue depth: 32 00:06:56.535 Allocate depth: 32 00:06:56.535 # threads/core: 1 00:06:56.535 Run time: 1 seconds 00:06:56.535 Verify: No 00:06:56.535 00:06:56.535 Running for 1 seconds... 
00:06:56.535 00:06:56.536 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:56.536 ------------------------------------------------------------------------------------ 00:06:56.536 0,0 150592/s 597 MiB/s 0 0 00:06:56.536 ==================================================================================== 00:06:56.536 Total 150592/s 588 MiB/s 0 0' 00:06:56.536 00:41:08 -- accel/accel.sh@20 -- # IFS=: 00:06:56.536 00:41:08 -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate 00:06:56.536 00:41:08 -- accel/accel.sh@20 -- # read -r var val 00:06:56.536 00:41:08 -- accel/accel.sh@12 -- # build_accel_config 00:06:56.536 00:41:08 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate 00:06:56.536 00:41:08 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:56.536 00:41:08 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:56.536 00:41:08 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:56.536 00:41:08 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:56.536 00:41:08 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:56.536 00:41:08 -- accel/accel.sh@41 -- # local IFS=, 00:06:56.536 00:41:08 -- accel/accel.sh@42 -- # jq -r . 00:06:56.536 [2024-12-03 00:41:08.685327] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:56.536 [2024-12-03 00:41:08.685442] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71078 ] 00:06:56.536 [2024-12-03 00:41:08.823233] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:56.536 [2024-12-03 00:41:08.879583] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:56.536 00:41:08 -- accel/accel.sh@21 -- # val= 00:06:56.536 00:41:08 -- accel/accel.sh@22 -- # case "$var" in 00:06:56.536 00:41:08 -- accel/accel.sh@20 -- # IFS=: 00:06:56.536 00:41:08 -- accel/accel.sh@20 -- # read -r var val 00:06:56.536 00:41:08 -- accel/accel.sh@21 -- # val= 00:06:56.536 00:41:08 -- accel/accel.sh@22 -- # case "$var" in 00:06:56.536 00:41:08 -- accel/accel.sh@20 -- # IFS=: 00:06:56.536 00:41:08 -- accel/accel.sh@20 -- # read -r var val 00:06:56.536 00:41:08 -- accel/accel.sh@21 -- # val=0x1 00:06:56.536 00:41:08 -- accel/accel.sh@22 -- # case "$var" in 00:06:56.536 00:41:08 -- accel/accel.sh@20 -- # IFS=: 00:06:56.536 00:41:08 -- accel/accel.sh@20 -- # read -r var val 00:06:56.536 00:41:08 -- accel/accel.sh@21 -- # val= 00:06:56.536 00:41:08 -- accel/accel.sh@22 -- # case "$var" in 00:06:56.536 00:41:08 -- accel/accel.sh@20 -- # IFS=: 00:06:56.536 00:41:08 -- accel/accel.sh@20 -- # read -r var val 00:06:56.536 00:41:08 -- accel/accel.sh@21 -- # val= 00:06:56.536 00:41:08 -- accel/accel.sh@22 -- # case "$var" in 00:06:56.536 00:41:08 -- accel/accel.sh@20 -- # IFS=: 00:06:56.536 00:41:08 -- accel/accel.sh@20 -- # read -r var val 00:06:56.536 00:41:08 -- accel/accel.sh@21 -- # val=dif_generate 00:06:56.536 00:41:08 -- accel/accel.sh@22 -- # case "$var" in 00:06:56.536 00:41:08 -- accel/accel.sh@24 -- # accel_opc=dif_generate 00:06:56.536 00:41:08 -- accel/accel.sh@20 -- # IFS=: 00:06:56.536 00:41:08 -- accel/accel.sh@20 -- # read -r var val 00:06:56.536 00:41:08 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:56.536 00:41:08 -- accel/accel.sh@22 -- # case "$var" in 00:06:56.536 00:41:08 -- accel/accel.sh@20 -- # IFS=: 00:06:56.536 00:41:08 -- accel/accel.sh@20 -- # read -r var val 
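The Total row of the dif_generate summary above is internally consistent: 150592 transfers per second at the 4096-byte transfer size reported in the configuration block works out to roughly 588 MiB/s (the per-core row's 597 MiB/s presumably reflects the actual elapsed time rather than a nominal 1 second). A one-line shell check, with the values copied from the table:

    # transfers/s x bytes per transfer, expressed in MiB/s
    echo $(( 150592 * 4096 / 1024 / 1024 ))   # -> 588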
00:06:56.536 00:41:08 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:56.536 00:41:08 -- accel/accel.sh@22 -- # case "$var" in 00:06:56.536 00:41:08 -- accel/accel.sh@20 -- # IFS=: 00:06:56.536 00:41:08 -- accel/accel.sh@20 -- # read -r var val 00:06:56.536 00:41:08 -- accel/accel.sh@21 -- # val='512 bytes' 00:06:56.536 00:41:08 -- accel/accel.sh@22 -- # case "$var" in 00:06:56.536 00:41:08 -- accel/accel.sh@20 -- # IFS=: 00:06:56.536 00:41:08 -- accel/accel.sh@20 -- # read -r var val 00:06:56.536 00:41:08 -- accel/accel.sh@21 -- # val='8 bytes' 00:06:56.536 00:41:08 -- accel/accel.sh@22 -- # case "$var" in 00:06:56.536 00:41:08 -- accel/accel.sh@20 -- # IFS=: 00:06:56.536 00:41:08 -- accel/accel.sh@20 -- # read -r var val 00:06:56.536 00:41:08 -- accel/accel.sh@21 -- # val= 00:06:56.536 00:41:08 -- accel/accel.sh@22 -- # case "$var" in 00:06:56.536 00:41:08 -- accel/accel.sh@20 -- # IFS=: 00:06:56.536 00:41:08 -- accel/accel.sh@20 -- # read -r var val 00:06:56.536 00:41:08 -- accel/accel.sh@21 -- # val=software 00:06:56.536 00:41:08 -- accel/accel.sh@22 -- # case "$var" in 00:06:56.536 00:41:08 -- accel/accel.sh@23 -- # accel_module=software 00:06:56.536 00:41:08 -- accel/accel.sh@20 -- # IFS=: 00:06:56.536 00:41:08 -- accel/accel.sh@20 -- # read -r var val 00:06:56.536 00:41:08 -- accel/accel.sh@21 -- # val=32 00:06:56.536 00:41:08 -- accel/accel.sh@22 -- # case "$var" in 00:06:56.536 00:41:08 -- accel/accel.sh@20 -- # IFS=: 00:06:56.536 00:41:08 -- accel/accel.sh@20 -- # read -r var val 00:06:56.536 00:41:08 -- accel/accel.sh@21 -- # val=32 00:06:56.536 00:41:08 -- accel/accel.sh@22 -- # case "$var" in 00:06:56.536 00:41:08 -- accel/accel.sh@20 -- # IFS=: 00:06:56.536 00:41:08 -- accel/accel.sh@20 -- # read -r var val 00:06:56.536 00:41:08 -- accel/accel.sh@21 -- # val=1 00:06:56.536 00:41:08 -- accel/accel.sh@22 -- # case "$var" in 00:06:56.536 00:41:08 -- accel/accel.sh@20 -- # IFS=: 00:06:56.536 00:41:08 -- accel/accel.sh@20 -- # read -r var val 00:06:56.536 00:41:08 -- accel/accel.sh@21 -- # val='1 seconds' 00:06:56.536 00:41:08 -- accel/accel.sh@22 -- # case "$var" in 00:06:56.536 00:41:08 -- accel/accel.sh@20 -- # IFS=: 00:06:56.536 00:41:08 -- accel/accel.sh@20 -- # read -r var val 00:06:56.536 00:41:08 -- accel/accel.sh@21 -- # val=No 00:06:56.536 00:41:08 -- accel/accel.sh@22 -- # case "$var" in 00:06:56.536 00:41:08 -- accel/accel.sh@20 -- # IFS=: 00:06:56.536 00:41:08 -- accel/accel.sh@20 -- # read -r var val 00:06:56.536 00:41:08 -- accel/accel.sh@21 -- # val= 00:06:56.536 00:41:08 -- accel/accel.sh@22 -- # case "$var" in 00:06:56.536 00:41:08 -- accel/accel.sh@20 -- # IFS=: 00:06:56.536 00:41:08 -- accel/accel.sh@20 -- # read -r var val 00:06:56.536 00:41:08 -- accel/accel.sh@21 -- # val= 00:06:56.536 00:41:08 -- accel/accel.sh@22 -- # case "$var" in 00:06:56.536 00:41:08 -- accel/accel.sh@20 -- # IFS=: 00:06:56.536 00:41:08 -- accel/accel.sh@20 -- # read -r var val 00:06:57.915 00:41:10 -- accel/accel.sh@21 -- # val= 00:06:57.915 00:41:10 -- accel/accel.sh@22 -- # case "$var" in 00:06:57.915 00:41:10 -- accel/accel.sh@20 -- # IFS=: 00:06:57.915 00:41:10 -- accel/accel.sh@20 -- # read -r var val 00:06:57.915 00:41:10 -- accel/accel.sh@21 -- # val= 00:06:57.915 00:41:10 -- accel/accel.sh@22 -- # case "$var" in 00:06:57.915 00:41:10 -- accel/accel.sh@20 -- # IFS=: 00:06:57.915 00:41:10 -- accel/accel.sh@20 -- # read -r var val 00:06:57.915 00:41:10 -- accel/accel.sh@21 -- # val= 00:06:57.915 00:41:10 -- accel/accel.sh@22 -- # case "$var" in 00:06:57.915 00:41:10 -- 
accel/accel.sh@20 -- # IFS=: 00:06:57.915 00:41:10 -- accel/accel.sh@20 -- # read -r var val 00:06:57.915 00:41:10 -- accel/accel.sh@21 -- # val= 00:06:57.915 ************************************ 00:06:57.915 END TEST accel_dif_generate 00:06:57.915 ************************************ 00:06:57.915 00:41:10 -- accel/accel.sh@22 -- # case "$var" in 00:06:57.915 00:41:10 -- accel/accel.sh@20 -- # IFS=: 00:06:57.915 00:41:10 -- accel/accel.sh@20 -- # read -r var val 00:06:57.915 00:41:10 -- accel/accel.sh@21 -- # val= 00:06:57.915 00:41:10 -- accel/accel.sh@22 -- # case "$var" in 00:06:57.915 00:41:10 -- accel/accel.sh@20 -- # IFS=: 00:06:57.915 00:41:10 -- accel/accel.sh@20 -- # read -r var val 00:06:57.915 00:41:10 -- accel/accel.sh@21 -- # val= 00:06:57.915 00:41:10 -- accel/accel.sh@22 -- # case "$var" in 00:06:57.915 00:41:10 -- accel/accel.sh@20 -- # IFS=: 00:06:57.915 00:41:10 -- accel/accel.sh@20 -- # read -r var val 00:06:57.915 00:41:10 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:57.915 00:41:10 -- accel/accel.sh@28 -- # [[ -n dif_generate ]] 00:06:57.915 00:41:10 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:57.915 00:06:57.915 real 0m2.817s 00:06:57.915 user 0m2.396s 00:06:57.915 sys 0m0.224s 00:06:57.915 00:41:10 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:57.915 00:41:10 -- common/autotest_common.sh@10 -- # set +x 00:06:57.915 00:41:10 -- accel/accel.sh@105 -- # run_test accel_dif_generate_copy accel_test -t 1 -w dif_generate_copy 00:06:57.915 00:41:10 -- common/autotest_common.sh@1087 -- # '[' 6 -le 1 ']' 00:06:57.915 00:41:10 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:57.915 00:41:10 -- common/autotest_common.sh@10 -- # set +x 00:06:57.915 ************************************ 00:06:57.915 START TEST accel_dif_generate_copy 00:06:57.915 ************************************ 00:06:57.915 00:41:10 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w dif_generate_copy 00:06:57.915 00:41:10 -- accel/accel.sh@16 -- # local accel_opc 00:06:57.915 00:41:10 -- accel/accel.sh@17 -- # local accel_module 00:06:57.915 00:41:10 -- accel/accel.sh@18 -- # accel_perf -t 1 -w dif_generate_copy 00:06:57.915 00:41:10 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate_copy 00:06:57.915 00:41:10 -- accel/accel.sh@12 -- # build_accel_config 00:06:57.915 00:41:10 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:57.915 00:41:10 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:57.915 00:41:10 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:57.915 00:41:10 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:57.915 00:41:10 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:57.915 00:41:10 -- accel/accel.sh@41 -- # local IFS=, 00:06:57.915 00:41:10 -- accel/accel.sh@42 -- # jq -r . 00:06:57.915 [2024-12-03 00:41:10.150256] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
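The dif_generate_copy case starting here is driven by the same accel_perf example binary, with the full command line visible in the trace. A minimal standalone rerun, assuming the binary can be launched outside the autotest wrapper and that the JSON config passed on /dev/fd/62 can be dropped when no accel module override is configured (the empty accel_json_cfg=() in the trace suggests that is the case for this run), would look like:

    # Hypothetical manual invocation; path taken from the trace above.
    /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w dif_generate_copy

The -t 1 (run time in seconds) and -w (workload type) arguments are exactly what run_test passes through accel_test above.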
00:06:57.915 [2024-12-03 00:41:10.150348] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71107 ] 00:06:57.915 [2024-12-03 00:41:10.287433] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:57.915 [2024-12-03 00:41:10.348011] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:59.290 00:41:11 -- accel/accel.sh@18 -- # out=' 00:06:59.290 SPDK Configuration: 00:06:59.290 Core mask: 0x1 00:06:59.290 00:06:59.290 Accel Perf Configuration: 00:06:59.290 Workload Type: dif_generate_copy 00:06:59.290 Vector size: 4096 bytes 00:06:59.290 Transfer size: 4096 bytes 00:06:59.290 Vector count 1 00:06:59.290 Module: software 00:06:59.290 Queue depth: 32 00:06:59.290 Allocate depth: 32 00:06:59.290 # threads/core: 1 00:06:59.290 Run time: 1 seconds 00:06:59.290 Verify: No 00:06:59.290 00:06:59.290 Running for 1 seconds... 00:06:59.290 00:06:59.290 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:59.290 ------------------------------------------------------------------------------------ 00:06:59.290 0,0 116896/s 463 MiB/s 0 0 00:06:59.290 ==================================================================================== 00:06:59.290 Total 116896/s 456 MiB/s 0 0' 00:06:59.290 00:41:11 -- accel/accel.sh@20 -- # IFS=: 00:06:59.290 00:41:11 -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate_copy 00:06:59.290 00:41:11 -- accel/accel.sh@20 -- # read -r var val 00:06:59.290 00:41:11 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate_copy 00:06:59.290 00:41:11 -- accel/accel.sh@12 -- # build_accel_config 00:06:59.290 00:41:11 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:59.290 00:41:11 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:59.290 00:41:11 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:59.290 00:41:11 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:59.290 00:41:11 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:59.290 00:41:11 -- accel/accel.sh@41 -- # local IFS=, 00:06:59.290 00:41:11 -- accel/accel.sh@42 -- # jq -r . 00:06:59.290 [2024-12-03 00:41:11.555332] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:06:59.290 [2024-12-03 00:41:11.555597] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71131 ] 00:06:59.290 [2024-12-03 00:41:11.686270] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:59.290 [2024-12-03 00:41:11.737595] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:59.290 00:41:11 -- accel/accel.sh@21 -- # val= 00:06:59.290 00:41:11 -- accel/accel.sh@22 -- # case "$var" in 00:06:59.290 00:41:11 -- accel/accel.sh@20 -- # IFS=: 00:06:59.290 00:41:11 -- accel/accel.sh@20 -- # read -r var val 00:06:59.290 00:41:11 -- accel/accel.sh@21 -- # val= 00:06:59.290 00:41:11 -- accel/accel.sh@22 -- # case "$var" in 00:06:59.290 00:41:11 -- accel/accel.sh@20 -- # IFS=: 00:06:59.290 00:41:11 -- accel/accel.sh@20 -- # read -r var val 00:06:59.290 00:41:11 -- accel/accel.sh@21 -- # val=0x1 00:06:59.290 00:41:11 -- accel/accel.sh@22 -- # case "$var" in 00:06:59.290 00:41:11 -- accel/accel.sh@20 -- # IFS=: 00:06:59.290 00:41:11 -- accel/accel.sh@20 -- # read -r var val 00:06:59.290 00:41:11 -- accel/accel.sh@21 -- # val= 00:06:59.290 00:41:11 -- accel/accel.sh@22 -- # case "$var" in 00:06:59.290 00:41:11 -- accel/accel.sh@20 -- # IFS=: 00:06:59.290 00:41:11 -- accel/accel.sh@20 -- # read -r var val 00:06:59.290 00:41:11 -- accel/accel.sh@21 -- # val= 00:06:59.290 00:41:11 -- accel/accel.sh@22 -- # case "$var" in 00:06:59.290 00:41:11 -- accel/accel.sh@20 -- # IFS=: 00:06:59.290 00:41:11 -- accel/accel.sh@20 -- # read -r var val 00:06:59.290 00:41:11 -- accel/accel.sh@21 -- # val=dif_generate_copy 00:06:59.290 00:41:11 -- accel/accel.sh@22 -- # case "$var" in 00:06:59.290 00:41:11 -- accel/accel.sh@24 -- # accel_opc=dif_generate_copy 00:06:59.290 00:41:11 -- accel/accel.sh@20 -- # IFS=: 00:06:59.290 00:41:11 -- accel/accel.sh@20 -- # read -r var val 00:06:59.290 00:41:11 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:59.290 00:41:11 -- accel/accel.sh@22 -- # case "$var" in 00:06:59.290 00:41:11 -- accel/accel.sh@20 -- # IFS=: 00:06:59.290 00:41:11 -- accel/accel.sh@20 -- # read -r var val 00:06:59.290 00:41:11 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:59.290 00:41:11 -- accel/accel.sh@22 -- # case "$var" in 00:06:59.290 00:41:11 -- accel/accel.sh@20 -- # IFS=: 00:06:59.290 00:41:11 -- accel/accel.sh@20 -- # read -r var val 00:06:59.290 00:41:11 -- accel/accel.sh@21 -- # val= 00:06:59.290 00:41:11 -- accel/accel.sh@22 -- # case "$var" in 00:06:59.290 00:41:11 -- accel/accel.sh@20 -- # IFS=: 00:06:59.290 00:41:11 -- accel/accel.sh@20 -- # read -r var val 00:06:59.290 00:41:11 -- accel/accel.sh@21 -- # val=software 00:06:59.290 00:41:11 -- accel/accel.sh@22 -- # case "$var" in 00:06:59.290 00:41:11 -- accel/accel.sh@23 -- # accel_module=software 00:06:59.290 00:41:11 -- accel/accel.sh@20 -- # IFS=: 00:06:59.290 00:41:11 -- accel/accel.sh@20 -- # read -r var val 00:06:59.290 00:41:11 -- accel/accel.sh@21 -- # val=32 00:06:59.290 00:41:11 -- accel/accel.sh@22 -- # case "$var" in 00:06:59.290 00:41:11 -- accel/accel.sh@20 -- # IFS=: 00:06:59.290 00:41:11 -- accel/accel.sh@20 -- # read -r var val 00:06:59.290 00:41:11 -- accel/accel.sh@21 -- # val=32 00:06:59.290 00:41:11 -- accel/accel.sh@22 -- # case "$var" in 00:06:59.290 00:41:11 -- accel/accel.sh@20 -- # IFS=: 00:06:59.290 00:41:11 -- accel/accel.sh@20 -- # read -r var val 00:06:59.549 00:41:11 -- accel/accel.sh@21 
-- # val=1 00:06:59.549 00:41:11 -- accel/accel.sh@22 -- # case "$var" in 00:06:59.549 00:41:11 -- accel/accel.sh@20 -- # IFS=: 00:06:59.549 00:41:11 -- accel/accel.sh@20 -- # read -r var val 00:06:59.549 00:41:11 -- accel/accel.sh@21 -- # val='1 seconds' 00:06:59.549 00:41:11 -- accel/accel.sh@22 -- # case "$var" in 00:06:59.549 00:41:11 -- accel/accel.sh@20 -- # IFS=: 00:06:59.549 00:41:11 -- accel/accel.sh@20 -- # read -r var val 00:06:59.549 00:41:11 -- accel/accel.sh@21 -- # val=No 00:06:59.549 00:41:11 -- accel/accel.sh@22 -- # case "$var" in 00:06:59.549 00:41:11 -- accel/accel.sh@20 -- # IFS=: 00:06:59.549 00:41:11 -- accel/accel.sh@20 -- # read -r var val 00:06:59.549 00:41:11 -- accel/accel.sh@21 -- # val= 00:06:59.549 00:41:11 -- accel/accel.sh@22 -- # case "$var" in 00:06:59.549 00:41:11 -- accel/accel.sh@20 -- # IFS=: 00:06:59.549 00:41:11 -- accel/accel.sh@20 -- # read -r var val 00:06:59.549 00:41:11 -- accel/accel.sh@21 -- # val= 00:06:59.549 00:41:11 -- accel/accel.sh@22 -- # case "$var" in 00:06:59.549 00:41:11 -- accel/accel.sh@20 -- # IFS=: 00:06:59.549 00:41:11 -- accel/accel.sh@20 -- # read -r var val 00:07:00.492 00:41:12 -- accel/accel.sh@21 -- # val= 00:07:00.492 00:41:12 -- accel/accel.sh@22 -- # case "$var" in 00:07:00.492 00:41:12 -- accel/accel.sh@20 -- # IFS=: 00:07:00.492 00:41:12 -- accel/accel.sh@20 -- # read -r var val 00:07:00.492 00:41:12 -- accel/accel.sh@21 -- # val= 00:07:00.492 00:41:12 -- accel/accel.sh@22 -- # case "$var" in 00:07:00.492 00:41:12 -- accel/accel.sh@20 -- # IFS=: 00:07:00.492 00:41:12 -- accel/accel.sh@20 -- # read -r var val 00:07:00.492 00:41:12 -- accel/accel.sh@21 -- # val= 00:07:00.492 00:41:12 -- accel/accel.sh@22 -- # case "$var" in 00:07:00.492 00:41:12 -- accel/accel.sh@20 -- # IFS=: 00:07:00.492 00:41:12 -- accel/accel.sh@20 -- # read -r var val 00:07:00.492 00:41:12 -- accel/accel.sh@21 -- # val= 00:07:00.492 00:41:12 -- accel/accel.sh@22 -- # case "$var" in 00:07:00.492 00:41:12 -- accel/accel.sh@20 -- # IFS=: 00:07:00.492 00:41:12 -- accel/accel.sh@20 -- # read -r var val 00:07:00.492 00:41:12 -- accel/accel.sh@21 -- # val= 00:07:00.492 00:41:12 -- accel/accel.sh@22 -- # case "$var" in 00:07:00.492 00:41:12 -- accel/accel.sh@20 -- # IFS=: 00:07:00.492 00:41:12 -- accel/accel.sh@20 -- # read -r var val 00:07:00.492 00:41:12 -- accel/accel.sh@21 -- # val= 00:07:00.492 00:41:12 -- accel/accel.sh@22 -- # case "$var" in 00:07:00.492 00:41:12 -- accel/accel.sh@20 -- # IFS=: 00:07:00.492 00:41:12 -- accel/accel.sh@20 -- # read -r var val 00:07:00.492 00:41:12 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:00.492 00:41:12 -- accel/accel.sh@28 -- # [[ -n dif_generate_copy ]] 00:07:00.492 00:41:12 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:00.492 00:07:00.492 real 0m2.794s 00:07:00.492 user 0m2.385s 00:07:00.492 sys 0m0.213s 00:07:00.492 00:41:12 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:00.492 ************************************ 00:07:00.492 END TEST accel_dif_generate_copy 00:07:00.492 ************************************ 00:07:00.492 00:41:12 -- common/autotest_common.sh@10 -- # set +x 00:07:00.492 00:41:12 -- accel/accel.sh@107 -- # [[ y == y ]] 00:07:00.492 00:41:12 -- accel/accel.sh@108 -- # run_test accel_comp accel_test -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:00.492 00:41:12 -- common/autotest_common.sh@1087 -- # '[' 8 -le 1 ']' 00:07:00.492 00:41:12 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:00.492 00:41:12 -- 
common/autotest_common.sh@10 -- # set +x 00:07:00.492 ************************************ 00:07:00.492 START TEST accel_comp 00:07:00.492 ************************************ 00:07:00.492 00:41:12 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:00.493 00:41:12 -- accel/accel.sh@16 -- # local accel_opc 00:07:00.493 00:41:12 -- accel/accel.sh@17 -- # local accel_module 00:07:00.493 00:41:12 -- accel/accel.sh@18 -- # accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:00.493 00:41:12 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:00.493 00:41:12 -- accel/accel.sh@12 -- # build_accel_config 00:07:00.493 00:41:12 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:00.493 00:41:12 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:00.493 00:41:12 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:00.493 00:41:12 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:00.493 00:41:12 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:00.493 00:41:12 -- accel/accel.sh@41 -- # local IFS=, 00:07:00.493 00:41:12 -- accel/accel.sh@42 -- # jq -r . 00:07:00.493 [2024-12-03 00:41:12.997732] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:07:00.493 [2024-12-03 00:41:12.997819] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71161 ] 00:07:00.750 [2024-12-03 00:41:13.129041] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:00.750 [2024-12-03 00:41:13.183007] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:02.125 00:41:14 -- accel/accel.sh@18 -- # out='Preparing input file... 00:07:02.125 00:07:02.125 SPDK Configuration: 00:07:02.125 Core mask: 0x1 00:07:02.125 00:07:02.125 Accel Perf Configuration: 00:07:02.125 Workload Type: compress 00:07:02.125 Transfer size: 4096 bytes 00:07:02.125 Vector count 1 00:07:02.125 Module: software 00:07:02.125 File Name: /home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:02.125 Queue depth: 32 00:07:02.125 Allocate depth: 32 00:07:02.125 # threads/core: 1 00:07:02.125 Run time: 1 seconds 00:07:02.125 Verify: No 00:07:02.125 00:07:02.125 Running for 1 seconds... 
00:07:02.125 00:07:02.125 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:02.125 ------------------------------------------------------------------------------------ 00:07:02.125 0,0 59616/s 248 MiB/s 0 0 00:07:02.125 ==================================================================================== 00:07:02.125 Total 59616/s 232 MiB/s 0 0' 00:07:02.125 00:41:14 -- accel/accel.sh@20 -- # IFS=: 00:07:02.125 00:41:14 -- accel/accel.sh@15 -- # accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:02.125 00:41:14 -- accel/accel.sh@20 -- # read -r var val 00:07:02.125 00:41:14 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:02.125 00:41:14 -- accel/accel.sh@12 -- # build_accel_config 00:07:02.125 00:41:14 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:02.125 00:41:14 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:02.125 00:41:14 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:02.125 00:41:14 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:02.125 00:41:14 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:02.125 00:41:14 -- accel/accel.sh@41 -- # local IFS=, 00:07:02.125 00:41:14 -- accel/accel.sh@42 -- # jq -r . 00:07:02.126 [2024-12-03 00:41:14.390310] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:07:02.126 [2024-12-03 00:41:14.390406] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71175 ] 00:07:02.126 [2024-12-03 00:41:14.526864] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:02.126 [2024-12-03 00:41:14.579919] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:02.385 00:41:14 -- accel/accel.sh@21 -- # val= 00:07:02.385 00:41:14 -- accel/accel.sh@22 -- # case "$var" in 00:07:02.385 00:41:14 -- accel/accel.sh@20 -- # IFS=: 00:07:02.385 00:41:14 -- accel/accel.sh@20 -- # read -r var val 00:07:02.385 00:41:14 -- accel/accel.sh@21 -- # val= 00:07:02.385 00:41:14 -- accel/accel.sh@22 -- # case "$var" in 00:07:02.385 00:41:14 -- accel/accel.sh@20 -- # IFS=: 00:07:02.385 00:41:14 -- accel/accel.sh@20 -- # read -r var val 00:07:02.385 00:41:14 -- accel/accel.sh@21 -- # val= 00:07:02.385 00:41:14 -- accel/accel.sh@22 -- # case "$var" in 00:07:02.385 00:41:14 -- accel/accel.sh@20 -- # IFS=: 00:07:02.385 00:41:14 -- accel/accel.sh@20 -- # read -r var val 00:07:02.385 00:41:14 -- accel/accel.sh@21 -- # val=0x1 00:07:02.385 00:41:14 -- accel/accel.sh@22 -- # case "$var" in 00:07:02.385 00:41:14 -- accel/accel.sh@20 -- # IFS=: 00:07:02.385 00:41:14 -- accel/accel.sh@20 -- # read -r var val 00:07:02.385 00:41:14 -- accel/accel.sh@21 -- # val= 00:07:02.385 00:41:14 -- accel/accel.sh@22 -- # case "$var" in 00:07:02.385 00:41:14 -- accel/accel.sh@20 -- # IFS=: 00:07:02.385 00:41:14 -- accel/accel.sh@20 -- # read -r var val 00:07:02.385 00:41:14 -- accel/accel.sh@21 -- # val= 00:07:02.385 00:41:14 -- accel/accel.sh@22 -- # case "$var" in 00:07:02.385 00:41:14 -- accel/accel.sh@20 -- # IFS=: 00:07:02.385 00:41:14 -- accel/accel.sh@20 -- # read -r var val 00:07:02.385 00:41:14 -- accel/accel.sh@21 -- # val=compress 00:07:02.385 00:41:14 -- accel/accel.sh@22 -- # case "$var" in 00:07:02.385 00:41:14 -- accel/accel.sh@24 -- # accel_opc=compress 00:07:02.385 00:41:14 -- accel/accel.sh@20 -- # IFS=: 
00:07:02.385 00:41:14 -- accel/accel.sh@20 -- # read -r var val 00:07:02.385 00:41:14 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:02.385 00:41:14 -- accel/accel.sh@22 -- # case "$var" in 00:07:02.385 00:41:14 -- accel/accel.sh@20 -- # IFS=: 00:07:02.385 00:41:14 -- accel/accel.sh@20 -- # read -r var val 00:07:02.385 00:41:14 -- accel/accel.sh@21 -- # val= 00:07:02.385 00:41:14 -- accel/accel.sh@22 -- # case "$var" in 00:07:02.385 00:41:14 -- accel/accel.sh@20 -- # IFS=: 00:07:02.385 00:41:14 -- accel/accel.sh@20 -- # read -r var val 00:07:02.385 00:41:14 -- accel/accel.sh@21 -- # val=software 00:07:02.385 00:41:14 -- accel/accel.sh@22 -- # case "$var" in 00:07:02.385 00:41:14 -- accel/accel.sh@23 -- # accel_module=software 00:07:02.385 00:41:14 -- accel/accel.sh@20 -- # IFS=: 00:07:02.385 00:41:14 -- accel/accel.sh@20 -- # read -r var val 00:07:02.385 00:41:14 -- accel/accel.sh@21 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:02.385 00:41:14 -- accel/accel.sh@22 -- # case "$var" in 00:07:02.385 00:41:14 -- accel/accel.sh@20 -- # IFS=: 00:07:02.385 00:41:14 -- accel/accel.sh@20 -- # read -r var val 00:07:02.385 00:41:14 -- accel/accel.sh@21 -- # val=32 00:07:02.385 00:41:14 -- accel/accel.sh@22 -- # case "$var" in 00:07:02.385 00:41:14 -- accel/accel.sh@20 -- # IFS=: 00:07:02.385 00:41:14 -- accel/accel.sh@20 -- # read -r var val 00:07:02.385 00:41:14 -- accel/accel.sh@21 -- # val=32 00:07:02.385 00:41:14 -- accel/accel.sh@22 -- # case "$var" in 00:07:02.385 00:41:14 -- accel/accel.sh@20 -- # IFS=: 00:07:02.385 00:41:14 -- accel/accel.sh@20 -- # read -r var val 00:07:02.385 00:41:14 -- accel/accel.sh@21 -- # val=1 00:07:02.385 00:41:14 -- accel/accel.sh@22 -- # case "$var" in 00:07:02.385 00:41:14 -- accel/accel.sh@20 -- # IFS=: 00:07:02.385 00:41:14 -- accel/accel.sh@20 -- # read -r var val 00:07:02.385 00:41:14 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:02.385 00:41:14 -- accel/accel.sh@22 -- # case "$var" in 00:07:02.385 00:41:14 -- accel/accel.sh@20 -- # IFS=: 00:07:02.385 00:41:14 -- accel/accel.sh@20 -- # read -r var val 00:07:02.385 00:41:14 -- accel/accel.sh@21 -- # val=No 00:07:02.385 00:41:14 -- accel/accel.sh@22 -- # case "$var" in 00:07:02.385 00:41:14 -- accel/accel.sh@20 -- # IFS=: 00:07:02.385 00:41:14 -- accel/accel.sh@20 -- # read -r var val 00:07:02.385 00:41:14 -- accel/accel.sh@21 -- # val= 00:07:02.385 00:41:14 -- accel/accel.sh@22 -- # case "$var" in 00:07:02.385 00:41:14 -- accel/accel.sh@20 -- # IFS=: 00:07:02.385 00:41:14 -- accel/accel.sh@20 -- # read -r var val 00:07:02.385 00:41:14 -- accel/accel.sh@21 -- # val= 00:07:02.385 00:41:14 -- accel/accel.sh@22 -- # case "$var" in 00:07:02.385 00:41:14 -- accel/accel.sh@20 -- # IFS=: 00:07:02.385 00:41:14 -- accel/accel.sh@20 -- # read -r var val 00:07:03.319 00:41:15 -- accel/accel.sh@21 -- # val= 00:07:03.319 00:41:15 -- accel/accel.sh@22 -- # case "$var" in 00:07:03.319 00:41:15 -- accel/accel.sh@20 -- # IFS=: 00:07:03.319 00:41:15 -- accel/accel.sh@20 -- # read -r var val 00:07:03.319 00:41:15 -- accel/accel.sh@21 -- # val= 00:07:03.319 00:41:15 -- accel/accel.sh@22 -- # case "$var" in 00:07:03.319 00:41:15 -- accel/accel.sh@20 -- # IFS=: 00:07:03.319 00:41:15 -- accel/accel.sh@20 -- # read -r var val 00:07:03.319 00:41:15 -- accel/accel.sh@21 -- # val= 00:07:03.319 00:41:15 -- accel/accel.sh@22 -- # case "$var" in 00:07:03.319 00:41:15 -- accel/accel.sh@20 -- # IFS=: 00:07:03.319 00:41:15 -- accel/accel.sh@20 -- # read -r var val 00:07:03.319 00:41:15 -- accel/accel.sh@21 -- # val= 
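Unlike the dif_* cases, the compress workload needs real input data, so the wrapper points accel_perf at the bib test file with -l; the configuration dump above lists it under File Name. A standalone sketch of the same run, again assuming the /dev/fd/62 config can be omitted:

    # Hypothetical manual rerun of the compress case; paths as captured in the trace.
    /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w compress \
        -l /home/vagrant/spdk_repo/spdk/test/accel/bib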
00:07:03.319 00:41:15 -- accel/accel.sh@22 -- # case "$var" in 00:07:03.319 00:41:15 -- accel/accel.sh@20 -- # IFS=: 00:07:03.319 00:41:15 -- accel/accel.sh@20 -- # read -r var val 00:07:03.319 00:41:15 -- accel/accel.sh@21 -- # val= 00:07:03.319 00:41:15 -- accel/accel.sh@22 -- # case "$var" in 00:07:03.319 00:41:15 -- accel/accel.sh@20 -- # IFS=: 00:07:03.319 00:41:15 -- accel/accel.sh@20 -- # read -r var val 00:07:03.319 00:41:15 -- accel/accel.sh@21 -- # val= 00:07:03.319 00:41:15 -- accel/accel.sh@22 -- # case "$var" in 00:07:03.319 00:41:15 -- accel/accel.sh@20 -- # IFS=: 00:07:03.319 00:41:15 -- accel/accel.sh@20 -- # read -r var val 00:07:03.319 00:41:15 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:03.319 00:41:15 -- accel/accel.sh@28 -- # [[ -n compress ]] 00:07:03.319 00:41:15 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:03.319 00:07:03.319 real 0m2.817s 00:07:03.319 user 0m2.400s 00:07:03.319 sys 0m0.220s 00:07:03.319 00:41:15 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:03.319 00:41:15 -- common/autotest_common.sh@10 -- # set +x 00:07:03.319 ************************************ 00:07:03.319 END TEST accel_comp 00:07:03.319 ************************************ 00:07:03.577 00:41:15 -- accel/accel.sh@109 -- # run_test accel_decomp accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:07:03.577 00:41:15 -- common/autotest_common.sh@1087 -- # '[' 9 -le 1 ']' 00:07:03.577 00:41:15 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:03.577 00:41:15 -- common/autotest_common.sh@10 -- # set +x 00:07:03.577 ************************************ 00:07:03.577 START TEST accel_decomp 00:07:03.577 ************************************ 00:07:03.577 00:41:15 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:07:03.577 00:41:15 -- accel/accel.sh@16 -- # local accel_opc 00:07:03.577 00:41:15 -- accel/accel.sh@17 -- # local accel_module 00:07:03.577 00:41:15 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:07:03.577 00:41:15 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:07:03.577 00:41:15 -- accel/accel.sh@12 -- # build_accel_config 00:07:03.577 00:41:15 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:03.577 00:41:15 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:03.577 00:41:15 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:03.577 00:41:15 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:03.577 00:41:15 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:03.577 00:41:15 -- accel/accel.sh@41 -- # local IFS=, 00:07:03.577 00:41:15 -- accel/accel.sh@42 -- # jq -r . 00:07:03.577 [2024-12-03 00:41:15.875073] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:07:03.577 [2024-12-03 00:41:15.875360] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71215 ] 00:07:03.577 [2024-12-03 00:41:16.011573] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:03.577 [2024-12-03 00:41:16.071400] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:04.954 00:41:17 -- accel/accel.sh@18 -- # out='Preparing input file... 
00:07:04.954 00:07:04.954 SPDK Configuration: 00:07:04.954 Core mask: 0x1 00:07:04.954 00:07:04.954 Accel Perf Configuration: 00:07:04.954 Workload Type: decompress 00:07:04.954 Transfer size: 4096 bytes 00:07:04.954 Vector count 1 00:07:04.954 Module: software 00:07:04.954 File Name: /home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:04.954 Queue depth: 32 00:07:04.954 Allocate depth: 32 00:07:04.954 # threads/core: 1 00:07:04.954 Run time: 1 seconds 00:07:04.954 Verify: Yes 00:07:04.954 00:07:04.954 Running for 1 seconds... 00:07:04.954 00:07:04.954 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:04.954 ------------------------------------------------------------------------------------ 00:07:04.954 0,0 84064/s 154 MiB/s 0 0 00:07:04.954 ==================================================================================== 00:07:04.954 Total 84064/s 328 MiB/s 0 0' 00:07:04.954 00:41:17 -- accel/accel.sh@20 -- # IFS=: 00:07:04.954 00:41:17 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:07:04.954 00:41:17 -- accel/accel.sh@20 -- # read -r var val 00:07:04.954 00:41:17 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:07:04.954 00:41:17 -- accel/accel.sh@12 -- # build_accel_config 00:07:04.954 00:41:17 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:04.954 00:41:17 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:04.954 00:41:17 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:04.954 00:41:17 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:04.954 00:41:17 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:04.954 00:41:17 -- accel/accel.sh@41 -- # local IFS=, 00:07:04.954 00:41:17 -- accel/accel.sh@42 -- # jq -r . 00:07:04.954 [2024-12-03 00:41:17.300203] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
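The decompress counterpart adds -y, and the configuration block above accordingly reports Verify: Yes where the compress and dif_* cases reported Verify: No. A matching standalone sketch, mirroring the command captured in the trace (same assumption about dropping the /dev/fd/62 config):

    # Hypothetical manual rerun of the decompress case with output verification.
    /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w decompress \
        -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y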
00:07:04.954 [2024-12-03 00:41:17.300483] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71229 ] 00:07:04.954 [2024-12-03 00:41:17.438812] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:05.214 [2024-12-03 00:41:17.494401] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:05.214 00:41:17 -- accel/accel.sh@21 -- # val= 00:07:05.214 00:41:17 -- accel/accel.sh@22 -- # case "$var" in 00:07:05.214 00:41:17 -- accel/accel.sh@20 -- # IFS=: 00:07:05.214 00:41:17 -- accel/accel.sh@20 -- # read -r var val 00:07:05.214 00:41:17 -- accel/accel.sh@21 -- # val= 00:07:05.214 00:41:17 -- accel/accel.sh@22 -- # case "$var" in 00:07:05.214 00:41:17 -- accel/accel.sh@20 -- # IFS=: 00:07:05.214 00:41:17 -- accel/accel.sh@20 -- # read -r var val 00:07:05.214 00:41:17 -- accel/accel.sh@21 -- # val= 00:07:05.214 00:41:17 -- accel/accel.sh@22 -- # case "$var" in 00:07:05.214 00:41:17 -- accel/accel.sh@20 -- # IFS=: 00:07:05.214 00:41:17 -- accel/accel.sh@20 -- # read -r var val 00:07:05.214 00:41:17 -- accel/accel.sh@21 -- # val=0x1 00:07:05.214 00:41:17 -- accel/accel.sh@22 -- # case "$var" in 00:07:05.214 00:41:17 -- accel/accel.sh@20 -- # IFS=: 00:07:05.214 00:41:17 -- accel/accel.sh@20 -- # read -r var val 00:07:05.214 00:41:17 -- accel/accel.sh@21 -- # val= 00:07:05.214 00:41:17 -- accel/accel.sh@22 -- # case "$var" in 00:07:05.214 00:41:17 -- accel/accel.sh@20 -- # IFS=: 00:07:05.214 00:41:17 -- accel/accel.sh@20 -- # read -r var val 00:07:05.214 00:41:17 -- accel/accel.sh@21 -- # val= 00:07:05.214 00:41:17 -- accel/accel.sh@22 -- # case "$var" in 00:07:05.214 00:41:17 -- accel/accel.sh@20 -- # IFS=: 00:07:05.214 00:41:17 -- accel/accel.sh@20 -- # read -r var val 00:07:05.214 00:41:17 -- accel/accel.sh@21 -- # val=decompress 00:07:05.214 00:41:17 -- accel/accel.sh@22 -- # case "$var" in 00:07:05.214 00:41:17 -- accel/accel.sh@24 -- # accel_opc=decompress 00:07:05.214 00:41:17 -- accel/accel.sh@20 -- # IFS=: 00:07:05.214 00:41:17 -- accel/accel.sh@20 -- # read -r var val 00:07:05.214 00:41:17 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:05.214 00:41:17 -- accel/accel.sh@22 -- # case "$var" in 00:07:05.214 00:41:17 -- accel/accel.sh@20 -- # IFS=: 00:07:05.214 00:41:17 -- accel/accel.sh@20 -- # read -r var val 00:07:05.214 00:41:17 -- accel/accel.sh@21 -- # val= 00:07:05.214 00:41:17 -- accel/accel.sh@22 -- # case "$var" in 00:07:05.214 00:41:17 -- accel/accel.sh@20 -- # IFS=: 00:07:05.214 00:41:17 -- accel/accel.sh@20 -- # read -r var val 00:07:05.214 00:41:17 -- accel/accel.sh@21 -- # val=software 00:07:05.214 00:41:17 -- accel/accel.sh@22 -- # case "$var" in 00:07:05.214 00:41:17 -- accel/accel.sh@23 -- # accel_module=software 00:07:05.214 00:41:17 -- accel/accel.sh@20 -- # IFS=: 00:07:05.214 00:41:17 -- accel/accel.sh@20 -- # read -r var val 00:07:05.214 00:41:17 -- accel/accel.sh@21 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:05.214 00:41:17 -- accel/accel.sh@22 -- # case "$var" in 00:07:05.214 00:41:17 -- accel/accel.sh@20 -- # IFS=: 00:07:05.214 00:41:17 -- accel/accel.sh@20 -- # read -r var val 00:07:05.214 00:41:17 -- accel/accel.sh@21 -- # val=32 00:07:05.214 00:41:17 -- accel/accel.sh@22 -- # case "$var" in 00:07:05.214 00:41:17 -- accel/accel.sh@20 -- # IFS=: 00:07:05.214 00:41:17 -- accel/accel.sh@20 -- # read -r var val 00:07:05.214 00:41:17 -- 
accel/accel.sh@21 -- # val=32 00:07:05.214 00:41:17 -- accel/accel.sh@22 -- # case "$var" in 00:07:05.214 00:41:17 -- accel/accel.sh@20 -- # IFS=: 00:07:05.214 00:41:17 -- accel/accel.sh@20 -- # read -r var val 00:07:05.214 00:41:17 -- accel/accel.sh@21 -- # val=1 00:07:05.214 00:41:17 -- accel/accel.sh@22 -- # case "$var" in 00:07:05.214 00:41:17 -- accel/accel.sh@20 -- # IFS=: 00:07:05.214 00:41:17 -- accel/accel.sh@20 -- # read -r var val 00:07:05.214 00:41:17 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:05.214 00:41:17 -- accel/accel.sh@22 -- # case "$var" in 00:07:05.214 00:41:17 -- accel/accel.sh@20 -- # IFS=: 00:07:05.214 00:41:17 -- accel/accel.sh@20 -- # read -r var val 00:07:05.214 00:41:17 -- accel/accel.sh@21 -- # val=Yes 00:07:05.214 00:41:17 -- accel/accel.sh@22 -- # case "$var" in 00:07:05.214 00:41:17 -- accel/accel.sh@20 -- # IFS=: 00:07:05.214 00:41:17 -- accel/accel.sh@20 -- # read -r var val 00:07:05.214 00:41:17 -- accel/accel.sh@21 -- # val= 00:07:05.214 00:41:17 -- accel/accel.sh@22 -- # case "$var" in 00:07:05.214 00:41:17 -- accel/accel.sh@20 -- # IFS=: 00:07:05.214 00:41:17 -- accel/accel.sh@20 -- # read -r var val 00:07:05.214 00:41:17 -- accel/accel.sh@21 -- # val= 00:07:05.214 00:41:17 -- accel/accel.sh@22 -- # case "$var" in 00:07:05.214 00:41:17 -- accel/accel.sh@20 -- # IFS=: 00:07:05.214 00:41:17 -- accel/accel.sh@20 -- # read -r var val 00:07:06.593 00:41:18 -- accel/accel.sh@21 -- # val= 00:07:06.593 00:41:18 -- accel/accel.sh@22 -- # case "$var" in 00:07:06.593 00:41:18 -- accel/accel.sh@20 -- # IFS=: 00:07:06.593 00:41:18 -- accel/accel.sh@20 -- # read -r var val 00:07:06.593 00:41:18 -- accel/accel.sh@21 -- # val= 00:07:06.593 00:41:18 -- accel/accel.sh@22 -- # case "$var" in 00:07:06.593 00:41:18 -- accel/accel.sh@20 -- # IFS=: 00:07:06.593 00:41:18 -- accel/accel.sh@20 -- # read -r var val 00:07:06.593 00:41:18 -- accel/accel.sh@21 -- # val= 00:07:06.593 00:41:18 -- accel/accel.sh@22 -- # case "$var" in 00:07:06.593 00:41:18 -- accel/accel.sh@20 -- # IFS=: 00:07:06.593 00:41:18 -- accel/accel.sh@20 -- # read -r var val 00:07:06.593 00:41:18 -- accel/accel.sh@21 -- # val= 00:07:06.593 00:41:18 -- accel/accel.sh@22 -- # case "$var" in 00:07:06.593 00:41:18 -- accel/accel.sh@20 -- # IFS=: 00:07:06.593 00:41:18 -- accel/accel.sh@20 -- # read -r var val 00:07:06.593 00:41:18 -- accel/accel.sh@21 -- # val= 00:07:06.593 00:41:18 -- accel/accel.sh@22 -- # case "$var" in 00:07:06.593 00:41:18 -- accel/accel.sh@20 -- # IFS=: 00:07:06.593 00:41:18 -- accel/accel.sh@20 -- # read -r var val 00:07:06.593 00:41:18 -- accel/accel.sh@21 -- # val= 00:07:06.593 00:41:18 -- accel/accel.sh@22 -- # case "$var" in 00:07:06.593 00:41:18 -- accel/accel.sh@20 -- # IFS=: 00:07:06.593 00:41:18 -- accel/accel.sh@20 -- # read -r var val 00:07:06.593 00:41:18 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:06.593 00:41:18 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:07:06.593 00:41:18 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:06.593 00:07:06.593 real 0m2.830s 00:07:06.593 user 0m2.403s 00:07:06.593 sys 0m0.226s 00:07:06.593 ************************************ 00:07:06.593 END TEST accel_decomp 00:07:06.593 ************************************ 00:07:06.593 00:41:18 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:06.593 00:41:18 -- common/autotest_common.sh@10 -- # set +x 00:07:06.593 00:41:18 -- accel/accel.sh@110 -- # run_test accel_decmop_full accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 
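accel_decmop_full is launched through the same run_test/accel_test pattern, and as in the earlier cases the trace shows accel_perf being started twice per workload: once via accel.sh@18 to produce the summary block and once via accel.sh@15 for the variable-by-variable parse. A rough, hypothetical outline of that shape (not the actual accel.sh source; command lines taken from the trace):

    # Hypothetical sketch only: two accel_perf passes per workload, as seen in the log.
    accel_test() {
        # summary pass (accel.sh@18): prints the SPDK / Accel Perf Configuration block
        /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 "$@"
        # parsed pass (accel.sh@15): output consumed by the IFS=: / read -r var val loop
        /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 "$@"
    }
    run_test accel_decmop_full accel_test -t 1 -w decompress \
        -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0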
00:07:06.593 00:41:18 -- common/autotest_common.sh@1087 -- # '[' 11 -le 1 ']' 00:07:06.593 00:41:18 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:06.593 00:41:18 -- common/autotest_common.sh@10 -- # set +x 00:07:06.593 ************************************ 00:07:06.593 START TEST accel_decmop_full 00:07:06.593 ************************************ 00:07:06.593 00:41:18 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:07:06.593 00:41:18 -- accel/accel.sh@16 -- # local accel_opc 00:07:06.593 00:41:18 -- accel/accel.sh@17 -- # local accel_module 00:07:06.593 00:41:18 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:07:06.593 00:41:18 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:07:06.593 00:41:18 -- accel/accel.sh@12 -- # build_accel_config 00:07:06.593 00:41:18 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:06.593 00:41:18 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:06.593 00:41:18 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:06.593 00:41:18 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:06.593 00:41:18 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:06.593 00:41:18 -- accel/accel.sh@41 -- # local IFS=, 00:07:06.593 00:41:18 -- accel/accel.sh@42 -- # jq -r . 00:07:06.593 [2024-12-03 00:41:18.752878] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:07:06.593 [2024-12-03 00:41:18.752973] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71264 ] 00:07:06.593 [2024-12-03 00:41:18.886327] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:06.593 [2024-12-03 00:41:18.944565] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:07.971 00:41:20 -- accel/accel.sh@18 -- # out='Preparing input file... 00:07:07.971 00:07:07.971 SPDK Configuration: 00:07:07.971 Core mask: 0x1 00:07:07.971 00:07:07.971 Accel Perf Configuration: 00:07:07.971 Workload Type: decompress 00:07:07.971 Transfer size: 111250 bytes 00:07:07.971 Vector count 1 00:07:07.971 Module: software 00:07:07.971 File Name: /home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:07.971 Queue depth: 32 00:07:07.971 Allocate depth: 32 00:07:07.971 # threads/core: 1 00:07:07.971 Run time: 1 seconds 00:07:07.971 Verify: Yes 00:07:07.971 00:07:07.971 Running for 1 seconds... 
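The -o 0 argument is the visible difference from the plain accel_decomp case: with it, the configuration dump above reports a transfer size of 111250 bytes instead of the 4096 bytes used elsewhere, which appears to correspond to a full decompressed chunk of the bib input. The Total row of the summary that follows (5600 transfers/s) is consistent with that size:

    # transfers/s x bytes per transfer, in MiB/s (matches the 594 MiB/s Total that follows)
    echo $(( 5600 * 111250 / 1024 / 1024 ))   # -> 594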
00:07:07.971 00:07:07.971 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:07.971 ------------------------------------------------------------------------------------ 00:07:07.971 0,0 5600/s 231 MiB/s 0 0 00:07:07.971 ==================================================================================== 00:07:07.971 Total 5600/s 594 MiB/s 0 0' 00:07:07.971 00:41:20 -- accel/accel.sh@20 -- # IFS=: 00:07:07.971 00:41:20 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:07:07.971 00:41:20 -- accel/accel.sh@20 -- # read -r var val 00:07:07.971 00:41:20 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:07:07.971 00:41:20 -- accel/accel.sh@12 -- # build_accel_config 00:07:07.971 00:41:20 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:07.971 00:41:20 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:07.971 00:41:20 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:07.971 00:41:20 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:07.971 00:41:20 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:07.971 00:41:20 -- accel/accel.sh@41 -- # local IFS=, 00:07:07.971 00:41:20 -- accel/accel.sh@42 -- # jq -r . 00:07:07.971 [2024-12-03 00:41:20.165906] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:07:07.971 [2024-12-03 00:41:20.166183] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71283 ] 00:07:07.971 [2024-12-03 00:41:20.294580] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:07.971 [2024-12-03 00:41:20.344655] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:07.971 00:41:20 -- accel/accel.sh@21 -- # val= 00:07:07.971 00:41:20 -- accel/accel.sh@22 -- # case "$var" in 00:07:07.971 00:41:20 -- accel/accel.sh@20 -- # IFS=: 00:07:07.971 00:41:20 -- accel/accel.sh@20 -- # read -r var val 00:07:07.971 00:41:20 -- accel/accel.sh@21 -- # val= 00:07:07.971 00:41:20 -- accel/accel.sh@22 -- # case "$var" in 00:07:07.971 00:41:20 -- accel/accel.sh@20 -- # IFS=: 00:07:07.971 00:41:20 -- accel/accel.sh@20 -- # read -r var val 00:07:07.971 00:41:20 -- accel/accel.sh@21 -- # val= 00:07:07.971 00:41:20 -- accel/accel.sh@22 -- # case "$var" in 00:07:07.971 00:41:20 -- accel/accel.sh@20 -- # IFS=: 00:07:07.971 00:41:20 -- accel/accel.sh@20 -- # read -r var val 00:07:07.971 00:41:20 -- accel/accel.sh@21 -- # val=0x1 00:07:07.971 00:41:20 -- accel/accel.sh@22 -- # case "$var" in 00:07:07.971 00:41:20 -- accel/accel.sh@20 -- # IFS=: 00:07:07.971 00:41:20 -- accel/accel.sh@20 -- # read -r var val 00:07:07.971 00:41:20 -- accel/accel.sh@21 -- # val= 00:07:07.971 00:41:20 -- accel/accel.sh@22 -- # case "$var" in 00:07:07.971 00:41:20 -- accel/accel.sh@20 -- # IFS=: 00:07:07.971 00:41:20 -- accel/accel.sh@20 -- # read -r var val 00:07:07.971 00:41:20 -- accel/accel.sh@21 -- # val= 00:07:07.971 00:41:20 -- accel/accel.sh@22 -- # case "$var" in 00:07:07.971 00:41:20 -- accel/accel.sh@20 -- # IFS=: 00:07:07.971 00:41:20 -- accel/accel.sh@20 -- # read -r var val 00:07:07.971 00:41:20 -- accel/accel.sh@21 -- # val=decompress 00:07:07.971 00:41:20 -- accel/accel.sh@22 -- # case "$var" in 00:07:07.971 00:41:20 -- accel/accel.sh@24 -- # accel_opc=decompress 00:07:07.971 00:41:20 -- accel/accel.sh@20 
-- # IFS=: 00:07:07.971 00:41:20 -- accel/accel.sh@20 -- # read -r var val 00:07:07.971 00:41:20 -- accel/accel.sh@21 -- # val='111250 bytes' 00:07:07.971 00:41:20 -- accel/accel.sh@22 -- # case "$var" in 00:07:07.971 00:41:20 -- accel/accel.sh@20 -- # IFS=: 00:07:07.971 00:41:20 -- accel/accel.sh@20 -- # read -r var val 00:07:07.971 00:41:20 -- accel/accel.sh@21 -- # val= 00:07:07.971 00:41:20 -- accel/accel.sh@22 -- # case "$var" in 00:07:07.971 00:41:20 -- accel/accel.sh@20 -- # IFS=: 00:07:07.971 00:41:20 -- accel/accel.sh@20 -- # read -r var val 00:07:07.971 00:41:20 -- accel/accel.sh@21 -- # val=software 00:07:07.971 00:41:20 -- accel/accel.sh@22 -- # case "$var" in 00:07:07.971 00:41:20 -- accel/accel.sh@23 -- # accel_module=software 00:07:07.971 00:41:20 -- accel/accel.sh@20 -- # IFS=: 00:07:07.971 00:41:20 -- accel/accel.sh@20 -- # read -r var val 00:07:07.971 00:41:20 -- accel/accel.sh@21 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:07.971 00:41:20 -- accel/accel.sh@22 -- # case "$var" in 00:07:07.971 00:41:20 -- accel/accel.sh@20 -- # IFS=: 00:07:07.971 00:41:20 -- accel/accel.sh@20 -- # read -r var val 00:07:07.971 00:41:20 -- accel/accel.sh@21 -- # val=32 00:07:07.971 00:41:20 -- accel/accel.sh@22 -- # case "$var" in 00:07:07.971 00:41:20 -- accel/accel.sh@20 -- # IFS=: 00:07:07.971 00:41:20 -- accel/accel.sh@20 -- # read -r var val 00:07:07.971 00:41:20 -- accel/accel.sh@21 -- # val=32 00:07:07.971 00:41:20 -- accel/accel.sh@22 -- # case "$var" in 00:07:07.971 00:41:20 -- accel/accel.sh@20 -- # IFS=: 00:07:07.971 00:41:20 -- accel/accel.sh@20 -- # read -r var val 00:07:07.971 00:41:20 -- accel/accel.sh@21 -- # val=1 00:07:07.971 00:41:20 -- accel/accel.sh@22 -- # case "$var" in 00:07:07.971 00:41:20 -- accel/accel.sh@20 -- # IFS=: 00:07:07.971 00:41:20 -- accel/accel.sh@20 -- # read -r var val 00:07:07.971 00:41:20 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:07.971 00:41:20 -- accel/accel.sh@22 -- # case "$var" in 00:07:07.971 00:41:20 -- accel/accel.sh@20 -- # IFS=: 00:07:07.971 00:41:20 -- accel/accel.sh@20 -- # read -r var val 00:07:07.971 00:41:20 -- accel/accel.sh@21 -- # val=Yes 00:07:07.971 00:41:20 -- accel/accel.sh@22 -- # case "$var" in 00:07:07.971 00:41:20 -- accel/accel.sh@20 -- # IFS=: 00:07:07.971 00:41:20 -- accel/accel.sh@20 -- # read -r var val 00:07:07.971 00:41:20 -- accel/accel.sh@21 -- # val= 00:07:07.971 00:41:20 -- accel/accel.sh@22 -- # case "$var" in 00:07:07.971 00:41:20 -- accel/accel.sh@20 -- # IFS=: 00:07:07.971 00:41:20 -- accel/accel.sh@20 -- # read -r var val 00:07:07.971 00:41:20 -- accel/accel.sh@21 -- # val= 00:07:07.971 00:41:20 -- accel/accel.sh@22 -- # case "$var" in 00:07:07.971 00:41:20 -- accel/accel.sh@20 -- # IFS=: 00:07:07.971 00:41:20 -- accel/accel.sh@20 -- # read -r var val 00:07:09.382 00:41:21 -- accel/accel.sh@21 -- # val= 00:07:09.382 00:41:21 -- accel/accel.sh@22 -- # case "$var" in 00:07:09.383 00:41:21 -- accel/accel.sh@20 -- # IFS=: 00:07:09.383 00:41:21 -- accel/accel.sh@20 -- # read -r var val 00:07:09.383 00:41:21 -- accel/accel.sh@21 -- # val= 00:07:09.383 00:41:21 -- accel/accel.sh@22 -- # case "$var" in 00:07:09.383 00:41:21 -- accel/accel.sh@20 -- # IFS=: 00:07:09.383 00:41:21 -- accel/accel.sh@20 -- # read -r var val 00:07:09.383 00:41:21 -- accel/accel.sh@21 -- # val= 00:07:09.383 00:41:21 -- accel/accel.sh@22 -- # case "$var" in 00:07:09.383 00:41:21 -- accel/accel.sh@20 -- # IFS=: 00:07:09.383 00:41:21 -- accel/accel.sh@20 -- # read -r var val 00:07:09.383 00:41:21 -- accel/accel.sh@21 -- # 
val= 00:07:09.383 00:41:21 -- accel/accel.sh@22 -- # case "$var" in 00:07:09.383 00:41:21 -- accel/accel.sh@20 -- # IFS=: 00:07:09.383 00:41:21 -- accel/accel.sh@20 -- # read -r var val 00:07:09.383 00:41:21 -- accel/accel.sh@21 -- # val= 00:07:09.383 00:41:21 -- accel/accel.sh@22 -- # case "$var" in 00:07:09.383 00:41:21 -- accel/accel.sh@20 -- # IFS=: 00:07:09.383 ************************************ 00:07:09.383 END TEST accel_decmop_full 00:07:09.383 ************************************ 00:07:09.383 00:41:21 -- accel/accel.sh@20 -- # read -r var val 00:07:09.383 00:41:21 -- accel/accel.sh@21 -- # val= 00:07:09.383 00:41:21 -- accel/accel.sh@22 -- # case "$var" in 00:07:09.383 00:41:21 -- accel/accel.sh@20 -- # IFS=: 00:07:09.383 00:41:21 -- accel/accel.sh@20 -- # read -r var val 00:07:09.383 00:41:21 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:09.383 00:41:21 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:07:09.383 00:41:21 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:09.383 00:07:09.383 real 0m2.827s 00:07:09.383 user 0m2.412s 00:07:09.383 sys 0m0.214s 00:07:09.383 00:41:21 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:09.383 00:41:21 -- common/autotest_common.sh@10 -- # set +x 00:07:09.383 00:41:21 -- accel/accel.sh@111 -- # run_test accel_decomp_mcore accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:07:09.383 00:41:21 -- common/autotest_common.sh@1087 -- # '[' 11 -le 1 ']' 00:07:09.383 00:41:21 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:09.383 00:41:21 -- common/autotest_common.sh@10 -- # set +x 00:07:09.383 ************************************ 00:07:09.383 START TEST accel_decomp_mcore 00:07:09.383 ************************************ 00:07:09.383 00:41:21 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:07:09.383 00:41:21 -- accel/accel.sh@16 -- # local accel_opc 00:07:09.383 00:41:21 -- accel/accel.sh@17 -- # local accel_module 00:07:09.383 00:41:21 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:07:09.383 00:41:21 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:07:09.383 00:41:21 -- accel/accel.sh@12 -- # build_accel_config 00:07:09.383 00:41:21 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:09.383 00:41:21 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:09.383 00:41:21 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:09.383 00:41:21 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:09.383 00:41:21 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:09.383 00:41:21 -- accel/accel.sh@41 -- # local IFS=, 00:07:09.383 00:41:21 -- accel/accel.sh@42 -- # jq -r . 00:07:09.383 [2024-12-03 00:41:21.636972] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:07:09.383 [2024-12-03 00:41:21.637211] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71312 ] 00:07:09.383 [2024-12-03 00:41:21.767759] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:09.383 [2024-12-03 00:41:21.828719] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:09.383 [2024-12-03 00:41:21.828871] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:09.383 [2024-12-03 00:41:21.828996] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:07:09.383 [2024-12-03 00:41:21.828997] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:10.759 00:41:23 -- accel/accel.sh@18 -- # out='Preparing input file... 00:07:10.759 00:07:10.759 SPDK Configuration: 00:07:10.759 Core mask: 0xf 00:07:10.759 00:07:10.759 Accel Perf Configuration: 00:07:10.759 Workload Type: decompress 00:07:10.759 Transfer size: 4096 bytes 00:07:10.759 Vector count 1 00:07:10.759 Module: software 00:07:10.759 File Name: /home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:10.759 Queue depth: 32 00:07:10.759 Allocate depth: 32 00:07:10.759 # threads/core: 1 00:07:10.759 Run time: 1 seconds 00:07:10.759 Verify: Yes 00:07:10.759 00:07:10.759 Running for 1 seconds... 00:07:10.759 00:07:10.759 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:10.759 ------------------------------------------------------------------------------------ 00:07:10.759 0,0 66816/s 123 MiB/s 0 0 00:07:10.760 3,0 63040/s 116 MiB/s 0 0 00:07:10.760 2,0 61344/s 113 MiB/s 0 0 00:07:10.760 1,0 64000/s 117 MiB/s 0 0 00:07:10.760 ==================================================================================== 00:07:10.760 Total 255200/s 996 MiB/s 0 0' 00:07:10.760 00:41:23 -- accel/accel.sh@20 -- # IFS=: 00:07:10.760 00:41:23 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:07:10.760 00:41:23 -- accel/accel.sh@20 -- # read -r var val 00:07:10.760 00:41:23 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:07:10.760 00:41:23 -- accel/accel.sh@12 -- # build_accel_config 00:07:10.760 00:41:23 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:10.760 00:41:23 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:10.760 00:41:23 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:10.760 00:41:23 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:10.760 00:41:23 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:10.760 00:41:23 -- accel/accel.sh@41 -- # local IFS=, 00:07:10.760 00:41:23 -- accel/accel.sh@42 -- # jq -r . 00:07:10.760 [2024-12-03 00:41:23.042306] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
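With -m 0xf the decompress workload runs on four reactors, and the summary above gains one row per core. The per-core transfer rates add up exactly to the Total row, which can be checked directly from the printed numbers:

    # Sum of the four per-core rows above, and the matching aggregate bandwidth in MiB/s.
    echo $(( 66816 + 63040 + 61344 + 64000 ))                          # -> 255200
    echo $(( (66816 + 63040 + 61344 + 64000) * 4096 / 1024 / 1024 ))   # -> 996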
00:07:10.760 [2024-12-03 00:41:23.042393] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71335 ] 00:07:10.760 [2024-12-03 00:41:23.170666] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:10.760 [2024-12-03 00:41:23.222761] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:10.760 [2024-12-03 00:41:23.222905] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:10.760 [2024-12-03 00:41:23.223055] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:07:10.760 [2024-12-03 00:41:23.223352] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:11.018 00:41:23 -- accel/accel.sh@21 -- # val= 00:07:11.018 00:41:23 -- accel/accel.sh@22 -- # case "$var" in 00:07:11.018 00:41:23 -- accel/accel.sh@20 -- # IFS=: 00:07:11.019 00:41:23 -- accel/accel.sh@20 -- # read -r var val 00:07:11.019 00:41:23 -- accel/accel.sh@21 -- # val= 00:07:11.019 00:41:23 -- accel/accel.sh@22 -- # case "$var" in 00:07:11.019 00:41:23 -- accel/accel.sh@20 -- # IFS=: 00:07:11.019 00:41:23 -- accel/accel.sh@20 -- # read -r var val 00:07:11.019 00:41:23 -- accel/accel.sh@21 -- # val= 00:07:11.019 00:41:23 -- accel/accel.sh@22 -- # case "$var" in 00:07:11.019 00:41:23 -- accel/accel.sh@20 -- # IFS=: 00:07:11.019 00:41:23 -- accel/accel.sh@20 -- # read -r var val 00:07:11.019 00:41:23 -- accel/accel.sh@21 -- # val=0xf 00:07:11.019 00:41:23 -- accel/accel.sh@22 -- # case "$var" in 00:07:11.019 00:41:23 -- accel/accel.sh@20 -- # IFS=: 00:07:11.019 00:41:23 -- accel/accel.sh@20 -- # read -r var val 00:07:11.019 00:41:23 -- accel/accel.sh@21 -- # val= 00:07:11.019 00:41:23 -- accel/accel.sh@22 -- # case "$var" in 00:07:11.019 00:41:23 -- accel/accel.sh@20 -- # IFS=: 00:07:11.019 00:41:23 -- accel/accel.sh@20 -- # read -r var val 00:07:11.019 00:41:23 -- accel/accel.sh@21 -- # val= 00:07:11.019 00:41:23 -- accel/accel.sh@22 -- # case "$var" in 00:07:11.019 00:41:23 -- accel/accel.sh@20 -- # IFS=: 00:07:11.019 00:41:23 -- accel/accel.sh@20 -- # read -r var val 00:07:11.019 00:41:23 -- accel/accel.sh@21 -- # val=decompress 00:07:11.019 00:41:23 -- accel/accel.sh@22 -- # case "$var" in 00:07:11.019 00:41:23 -- accel/accel.sh@24 -- # accel_opc=decompress 00:07:11.019 00:41:23 -- accel/accel.sh@20 -- # IFS=: 00:07:11.019 00:41:23 -- accel/accel.sh@20 -- # read -r var val 00:07:11.019 00:41:23 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:11.019 00:41:23 -- accel/accel.sh@22 -- # case "$var" in 00:07:11.019 00:41:23 -- accel/accel.sh@20 -- # IFS=: 00:07:11.019 00:41:23 -- accel/accel.sh@20 -- # read -r var val 00:07:11.019 00:41:23 -- accel/accel.sh@21 -- # val= 00:07:11.019 00:41:23 -- accel/accel.sh@22 -- # case "$var" in 00:07:11.019 00:41:23 -- accel/accel.sh@20 -- # IFS=: 00:07:11.019 00:41:23 -- accel/accel.sh@20 -- # read -r var val 00:07:11.019 00:41:23 -- accel/accel.sh@21 -- # val=software 00:07:11.019 00:41:23 -- accel/accel.sh@22 -- # case "$var" in 00:07:11.019 00:41:23 -- accel/accel.sh@23 -- # accel_module=software 00:07:11.019 00:41:23 -- accel/accel.sh@20 -- # IFS=: 00:07:11.019 00:41:23 -- accel/accel.sh@20 -- # read -r var val 00:07:11.019 00:41:23 -- accel/accel.sh@21 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:11.019 00:41:23 -- accel/accel.sh@22 -- # case "$var" in 00:07:11.019 00:41:23 -- accel/accel.sh@20 -- # IFS=: 
00:07:11.019 00:41:23 -- accel/accel.sh@20 -- # read -r var val 00:07:11.019 00:41:23 -- accel/accel.sh@21 -- # val=32 00:07:11.019 00:41:23 -- accel/accel.sh@22 -- # case "$var" in 00:07:11.019 00:41:23 -- accel/accel.sh@20 -- # IFS=: 00:07:11.019 00:41:23 -- accel/accel.sh@20 -- # read -r var val 00:07:11.019 00:41:23 -- accel/accel.sh@21 -- # val=32 00:07:11.019 00:41:23 -- accel/accel.sh@22 -- # case "$var" in 00:07:11.019 00:41:23 -- accel/accel.sh@20 -- # IFS=: 00:07:11.019 00:41:23 -- accel/accel.sh@20 -- # read -r var val 00:07:11.019 00:41:23 -- accel/accel.sh@21 -- # val=1 00:07:11.019 00:41:23 -- accel/accel.sh@22 -- # case "$var" in 00:07:11.019 00:41:23 -- accel/accel.sh@20 -- # IFS=: 00:07:11.019 00:41:23 -- accel/accel.sh@20 -- # read -r var val 00:07:11.019 00:41:23 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:11.019 00:41:23 -- accel/accel.sh@22 -- # case "$var" in 00:07:11.019 00:41:23 -- accel/accel.sh@20 -- # IFS=: 00:07:11.019 00:41:23 -- accel/accel.sh@20 -- # read -r var val 00:07:11.019 00:41:23 -- accel/accel.sh@21 -- # val=Yes 00:07:11.019 00:41:23 -- accel/accel.sh@22 -- # case "$var" in 00:07:11.019 00:41:23 -- accel/accel.sh@20 -- # IFS=: 00:07:11.019 00:41:23 -- accel/accel.sh@20 -- # read -r var val 00:07:11.019 00:41:23 -- accel/accel.sh@21 -- # val= 00:07:11.019 00:41:23 -- accel/accel.sh@22 -- # case "$var" in 00:07:11.019 00:41:23 -- accel/accel.sh@20 -- # IFS=: 00:07:11.019 00:41:23 -- accel/accel.sh@20 -- # read -r var val 00:07:11.019 00:41:23 -- accel/accel.sh@21 -- # val= 00:07:11.019 00:41:23 -- accel/accel.sh@22 -- # case "$var" in 00:07:11.019 00:41:23 -- accel/accel.sh@20 -- # IFS=: 00:07:11.019 00:41:23 -- accel/accel.sh@20 -- # read -r var val 00:07:12.396 00:41:24 -- accel/accel.sh@21 -- # val= 00:07:12.396 00:41:24 -- accel/accel.sh@22 -- # case "$var" in 00:07:12.396 00:41:24 -- accel/accel.sh@20 -- # IFS=: 00:07:12.396 00:41:24 -- accel/accel.sh@20 -- # read -r var val 00:07:12.396 00:41:24 -- accel/accel.sh@21 -- # val= 00:07:12.396 00:41:24 -- accel/accel.sh@22 -- # case "$var" in 00:07:12.396 00:41:24 -- accel/accel.sh@20 -- # IFS=: 00:07:12.396 00:41:24 -- accel/accel.sh@20 -- # read -r var val 00:07:12.396 00:41:24 -- accel/accel.sh@21 -- # val= 00:07:12.396 00:41:24 -- accel/accel.sh@22 -- # case "$var" in 00:07:12.396 00:41:24 -- accel/accel.sh@20 -- # IFS=: 00:07:12.396 00:41:24 -- accel/accel.sh@20 -- # read -r var val 00:07:12.396 00:41:24 -- accel/accel.sh@21 -- # val= 00:07:12.396 00:41:24 -- accel/accel.sh@22 -- # case "$var" in 00:07:12.396 00:41:24 -- accel/accel.sh@20 -- # IFS=: 00:07:12.396 00:41:24 -- accel/accel.sh@20 -- # read -r var val 00:07:12.396 00:41:24 -- accel/accel.sh@21 -- # val= 00:07:12.396 00:41:24 -- accel/accel.sh@22 -- # case "$var" in 00:07:12.396 00:41:24 -- accel/accel.sh@20 -- # IFS=: 00:07:12.396 00:41:24 -- accel/accel.sh@20 -- # read -r var val 00:07:12.396 00:41:24 -- accel/accel.sh@21 -- # val= 00:07:12.396 00:41:24 -- accel/accel.sh@22 -- # case "$var" in 00:07:12.396 00:41:24 -- accel/accel.sh@20 -- # IFS=: 00:07:12.396 00:41:24 -- accel/accel.sh@20 -- # read -r var val 00:07:12.396 00:41:24 -- accel/accel.sh@21 -- # val= 00:07:12.396 00:41:24 -- accel/accel.sh@22 -- # case "$var" in 00:07:12.396 00:41:24 -- accel/accel.sh@20 -- # IFS=: 00:07:12.396 00:41:24 -- accel/accel.sh@20 -- # read -r var val 00:07:12.396 00:41:24 -- accel/accel.sh@21 -- # val= 00:07:12.396 00:41:24 -- accel/accel.sh@22 -- # case "$var" in 00:07:12.396 00:41:24 -- accel/accel.sh@20 -- # IFS=: 00:07:12.396 00:41:24 -- 
accel/accel.sh@20 -- # read -r var val 00:07:12.396 00:41:24 -- accel/accel.sh@21 -- # val= 00:07:12.396 00:41:24 -- accel/accel.sh@22 -- # case "$var" in 00:07:12.396 00:41:24 -- accel/accel.sh@20 -- # IFS=: 00:07:12.396 00:41:24 -- accel/accel.sh@20 -- # read -r var val 00:07:12.396 00:41:24 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:12.396 00:41:24 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:07:12.396 00:41:24 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:12.396 ************************************ 00:07:12.396 END TEST accel_decomp_mcore 00:07:12.396 ************************************ 00:07:12.396 00:07:12.396 real 0m2.896s 00:07:12.396 user 0m9.320s 00:07:12.396 sys 0m0.242s 00:07:12.396 00:41:24 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:12.396 00:41:24 -- common/autotest_common.sh@10 -- # set +x 00:07:12.396 00:41:24 -- accel/accel.sh@112 -- # run_test accel_decomp_full_mcore accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:07:12.397 00:41:24 -- common/autotest_common.sh@1087 -- # '[' 13 -le 1 ']' 00:07:12.397 00:41:24 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:12.397 00:41:24 -- common/autotest_common.sh@10 -- # set +x 00:07:12.397 ************************************ 00:07:12.397 START TEST accel_decomp_full_mcore 00:07:12.397 ************************************ 00:07:12.397 00:41:24 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:07:12.397 00:41:24 -- accel/accel.sh@16 -- # local accel_opc 00:07:12.397 00:41:24 -- accel/accel.sh@17 -- # local accel_module 00:07:12.397 00:41:24 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:07:12.397 00:41:24 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:07:12.397 00:41:24 -- accel/accel.sh@12 -- # build_accel_config 00:07:12.397 00:41:24 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:12.397 00:41:24 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:12.397 00:41:24 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:12.397 00:41:24 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:12.397 00:41:24 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:12.397 00:41:24 -- accel/accel.sh@41 -- # local IFS=, 00:07:12.397 00:41:24 -- accel/accel.sh@42 -- # jq -r . 00:07:12.397 [2024-12-03 00:41:24.580518] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:07:12.397 [2024-12-03 00:41:24.580605] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71372 ] 00:07:12.397 [2024-12-03 00:41:24.717372] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:12.397 [2024-12-03 00:41:24.778352] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:12.397 [2024-12-03 00:41:24.778496] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:12.397 [2024-12-03 00:41:24.778613] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:07:12.397 [2024-12-03 00:41:24.778625] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:13.772 00:41:26 -- accel/accel.sh@18 -- # out='Preparing input file... 
00:07:13.772 00:07:13.772 SPDK Configuration: 00:07:13.772 Core mask: 0xf 00:07:13.772 00:07:13.772 Accel Perf Configuration: 00:07:13.772 Workload Type: decompress 00:07:13.772 Transfer size: 111250 bytes 00:07:13.772 Vector count 1 00:07:13.772 Module: software 00:07:13.772 File Name: /home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:13.772 Queue depth: 32 00:07:13.772 Allocate depth: 32 00:07:13.772 # threads/core: 1 00:07:13.772 Run time: 1 seconds 00:07:13.772 Verify: Yes 00:07:13.772 00:07:13.772 Running for 1 seconds... 00:07:13.772 00:07:13.772 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:13.772 ------------------------------------------------------------------------------------ 00:07:13.772 0,0 5632/s 232 MiB/s 0 0 00:07:13.772 3,0 5248/s 216 MiB/s 0 0 00:07:13.772 2,0 5152/s 212 MiB/s 0 0 00:07:13.772 1,0 5216/s 215 MiB/s 0 0 00:07:13.772 ==================================================================================== 00:07:13.772 Total 21248/s 2254 MiB/s 0 0' 00:07:13.772 00:41:26 -- accel/accel.sh@20 -- # IFS=: 00:07:13.772 00:41:26 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:07:13.772 00:41:26 -- accel/accel.sh@20 -- # read -r var val 00:07:13.772 00:41:26 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:07:13.772 00:41:26 -- accel/accel.sh@12 -- # build_accel_config 00:07:13.772 00:41:26 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:13.772 00:41:26 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:13.772 00:41:26 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:13.772 00:41:26 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:13.772 00:41:26 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:13.772 00:41:26 -- accel/accel.sh@41 -- # local IFS=, 00:07:13.772 00:41:26 -- accel/accel.sh@42 -- # jq -r . 00:07:13.772 [2024-12-03 00:41:26.080872] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:07:13.772 [2024-12-03 00:41:26.080934] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71395 ] 00:07:13.772 [2024-12-03 00:41:26.212532] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:13.772 [2024-12-03 00:41:26.276542] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:13.772 [2024-12-03 00:41:26.276715] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:13.772 [2024-12-03 00:41:26.276836] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:07:13.772 [2024-12-03 00:41:26.277083] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:14.031 00:41:26 -- accel/accel.sh@21 -- # val= 00:07:14.031 00:41:26 -- accel/accel.sh@22 -- # case "$var" in 00:07:14.031 00:41:26 -- accel/accel.sh@20 -- # IFS=: 00:07:14.031 00:41:26 -- accel/accel.sh@20 -- # read -r var val 00:07:14.031 00:41:26 -- accel/accel.sh@21 -- # val= 00:07:14.031 00:41:26 -- accel/accel.sh@22 -- # case "$var" in 00:07:14.031 00:41:26 -- accel/accel.sh@20 -- # IFS=: 00:07:14.031 00:41:26 -- accel/accel.sh@20 -- # read -r var val 00:07:14.031 00:41:26 -- accel/accel.sh@21 -- # val= 00:07:14.031 00:41:26 -- accel/accel.sh@22 -- # case "$var" in 00:07:14.031 00:41:26 -- accel/accel.sh@20 -- # IFS=: 00:07:14.031 00:41:26 -- accel/accel.sh@20 -- # read -r var val 00:07:14.031 00:41:26 -- accel/accel.sh@21 -- # val=0xf 00:07:14.031 00:41:26 -- accel/accel.sh@22 -- # case "$var" in 00:07:14.031 00:41:26 -- accel/accel.sh@20 -- # IFS=: 00:07:14.031 00:41:26 -- accel/accel.sh@20 -- # read -r var val 00:07:14.031 00:41:26 -- accel/accel.sh@21 -- # val= 00:07:14.031 00:41:26 -- accel/accel.sh@22 -- # case "$var" in 00:07:14.031 00:41:26 -- accel/accel.sh@20 -- # IFS=: 00:07:14.031 00:41:26 -- accel/accel.sh@20 -- # read -r var val 00:07:14.031 00:41:26 -- accel/accel.sh@21 -- # val= 00:07:14.031 00:41:26 -- accel/accel.sh@22 -- # case "$var" in 00:07:14.031 00:41:26 -- accel/accel.sh@20 -- # IFS=: 00:07:14.031 00:41:26 -- accel/accel.sh@20 -- # read -r var val 00:07:14.031 00:41:26 -- accel/accel.sh@21 -- # val=decompress 00:07:14.031 00:41:26 -- accel/accel.sh@22 -- # case "$var" in 00:07:14.031 00:41:26 -- accel/accel.sh@24 -- # accel_opc=decompress 00:07:14.031 00:41:26 -- accel/accel.sh@20 -- # IFS=: 00:07:14.031 00:41:26 -- accel/accel.sh@20 -- # read -r var val 00:07:14.031 00:41:26 -- accel/accel.sh@21 -- # val='111250 bytes' 00:07:14.031 00:41:26 -- accel/accel.sh@22 -- # case "$var" in 00:07:14.031 00:41:26 -- accel/accel.sh@20 -- # IFS=: 00:07:14.031 00:41:26 -- accel/accel.sh@20 -- # read -r var val 00:07:14.031 00:41:26 -- accel/accel.sh@21 -- # val= 00:07:14.031 00:41:26 -- accel/accel.sh@22 -- # case "$var" in 00:07:14.031 00:41:26 -- accel/accel.sh@20 -- # IFS=: 00:07:14.031 00:41:26 -- accel/accel.sh@20 -- # read -r var val 00:07:14.031 00:41:26 -- accel/accel.sh@21 -- # val=software 00:07:14.031 00:41:26 -- accel/accel.sh@22 -- # case "$var" in 00:07:14.031 00:41:26 -- accel/accel.sh@23 -- # accel_module=software 00:07:14.031 00:41:26 -- accel/accel.sh@20 -- # IFS=: 00:07:14.031 00:41:26 -- accel/accel.sh@20 -- # read -r var val 00:07:14.031 00:41:26 -- accel/accel.sh@21 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:14.031 00:41:26 -- accel/accel.sh@22 -- # case "$var" in 00:07:14.031 00:41:26 -- accel/accel.sh@20 -- # IFS=: 
00:07:14.031 00:41:26 -- accel/accel.sh@20 -- # read -r var val 00:07:14.032 00:41:26 -- accel/accel.sh@21 -- # val=32 00:07:14.032 00:41:26 -- accel/accel.sh@22 -- # case "$var" in 00:07:14.032 00:41:26 -- accel/accel.sh@20 -- # IFS=: 00:07:14.032 00:41:26 -- accel/accel.sh@20 -- # read -r var val 00:07:14.032 00:41:26 -- accel/accel.sh@21 -- # val=32 00:07:14.032 00:41:26 -- accel/accel.sh@22 -- # case "$var" in 00:07:14.032 00:41:26 -- accel/accel.sh@20 -- # IFS=: 00:07:14.032 00:41:26 -- accel/accel.sh@20 -- # read -r var val 00:07:14.032 00:41:26 -- accel/accel.sh@21 -- # val=1 00:07:14.032 00:41:26 -- accel/accel.sh@22 -- # case "$var" in 00:07:14.032 00:41:26 -- accel/accel.sh@20 -- # IFS=: 00:07:14.032 00:41:26 -- accel/accel.sh@20 -- # read -r var val 00:07:14.032 00:41:26 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:14.032 00:41:26 -- accel/accel.sh@22 -- # case "$var" in 00:07:14.032 00:41:26 -- accel/accel.sh@20 -- # IFS=: 00:07:14.032 00:41:26 -- accel/accel.sh@20 -- # read -r var val 00:07:14.032 00:41:26 -- accel/accel.sh@21 -- # val=Yes 00:07:14.032 00:41:26 -- accel/accel.sh@22 -- # case "$var" in 00:07:14.032 00:41:26 -- accel/accel.sh@20 -- # IFS=: 00:07:14.032 00:41:26 -- accel/accel.sh@20 -- # read -r var val 00:07:14.032 00:41:26 -- accel/accel.sh@21 -- # val= 00:07:14.032 00:41:26 -- accel/accel.sh@22 -- # case "$var" in 00:07:14.032 00:41:26 -- accel/accel.sh@20 -- # IFS=: 00:07:14.032 00:41:26 -- accel/accel.sh@20 -- # read -r var val 00:07:14.032 00:41:26 -- accel/accel.sh@21 -- # val= 00:07:14.032 00:41:26 -- accel/accel.sh@22 -- # case "$var" in 00:07:14.032 00:41:26 -- accel/accel.sh@20 -- # IFS=: 00:07:14.032 00:41:26 -- accel/accel.sh@20 -- # read -r var val 00:07:15.408 00:41:27 -- accel/accel.sh@21 -- # val= 00:07:15.408 00:41:27 -- accel/accel.sh@22 -- # case "$var" in 00:07:15.408 00:41:27 -- accel/accel.sh@20 -- # IFS=: 00:07:15.408 00:41:27 -- accel/accel.sh@20 -- # read -r var val 00:07:15.408 00:41:27 -- accel/accel.sh@21 -- # val= 00:07:15.408 00:41:27 -- accel/accel.sh@22 -- # case "$var" in 00:07:15.408 00:41:27 -- accel/accel.sh@20 -- # IFS=: 00:07:15.408 00:41:27 -- accel/accel.sh@20 -- # read -r var val 00:07:15.408 00:41:27 -- accel/accel.sh@21 -- # val= 00:07:15.408 00:41:27 -- accel/accel.sh@22 -- # case "$var" in 00:07:15.408 00:41:27 -- accel/accel.sh@20 -- # IFS=: 00:07:15.408 00:41:27 -- accel/accel.sh@20 -- # read -r var val 00:07:15.408 00:41:27 -- accel/accel.sh@21 -- # val= 00:07:15.408 00:41:27 -- accel/accel.sh@22 -- # case "$var" in 00:07:15.408 00:41:27 -- accel/accel.sh@20 -- # IFS=: 00:07:15.408 00:41:27 -- accel/accel.sh@20 -- # read -r var val 00:07:15.408 00:41:27 -- accel/accel.sh@21 -- # val= 00:07:15.408 00:41:27 -- accel/accel.sh@22 -- # case "$var" in 00:07:15.408 00:41:27 -- accel/accel.sh@20 -- # IFS=: 00:07:15.409 00:41:27 -- accel/accel.sh@20 -- # read -r var val 00:07:15.409 00:41:27 -- accel/accel.sh@21 -- # val= 00:07:15.409 00:41:27 -- accel/accel.sh@22 -- # case "$var" in 00:07:15.409 00:41:27 -- accel/accel.sh@20 -- # IFS=: 00:07:15.409 00:41:27 -- accel/accel.sh@20 -- # read -r var val 00:07:15.409 00:41:27 -- accel/accel.sh@21 -- # val= 00:07:15.409 00:41:27 -- accel/accel.sh@22 -- # case "$var" in 00:07:15.409 00:41:27 -- accel/accel.sh@20 -- # IFS=: 00:07:15.409 00:41:27 -- accel/accel.sh@20 -- # read -r var val 00:07:15.409 00:41:27 -- accel/accel.sh@21 -- # val= 00:07:15.409 00:41:27 -- accel/accel.sh@22 -- # case "$var" in 00:07:15.409 00:41:27 -- accel/accel.sh@20 -- # IFS=: 00:07:15.409 00:41:27 -- 
accel/accel.sh@20 -- # read -r var val 00:07:15.409 00:41:27 -- accel/accel.sh@21 -- # val= 00:07:15.409 ************************************ 00:07:15.409 END TEST accel_decomp_full_mcore 00:07:15.409 ************************************ 00:07:15.409 00:41:27 -- accel/accel.sh@22 -- # case "$var" in 00:07:15.409 00:41:27 -- accel/accel.sh@20 -- # IFS=: 00:07:15.409 00:41:27 -- accel/accel.sh@20 -- # read -r var val 00:07:15.409 00:41:27 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:15.409 00:41:27 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:07:15.409 00:41:27 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:15.409 00:07:15.409 real 0m3.006s 00:07:15.409 user 0m9.652s 00:07:15.409 sys 0m0.295s 00:07:15.409 00:41:27 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:15.409 00:41:27 -- common/autotest_common.sh@10 -- # set +x 00:07:15.409 00:41:27 -- accel/accel.sh@113 -- # run_test accel_decomp_mthread accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:07:15.409 00:41:27 -- common/autotest_common.sh@1087 -- # '[' 11 -le 1 ']' 00:07:15.409 00:41:27 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:15.409 00:41:27 -- common/autotest_common.sh@10 -- # set +x 00:07:15.409 ************************************ 00:07:15.409 START TEST accel_decomp_mthread 00:07:15.409 ************************************ 00:07:15.409 00:41:27 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:07:15.409 00:41:27 -- accel/accel.sh@16 -- # local accel_opc 00:07:15.409 00:41:27 -- accel/accel.sh@17 -- # local accel_module 00:07:15.409 00:41:27 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:07:15.409 00:41:27 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:07:15.409 00:41:27 -- accel/accel.sh@12 -- # build_accel_config 00:07:15.409 00:41:27 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:15.409 00:41:27 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:15.409 00:41:27 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:15.409 00:41:27 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:15.409 00:41:27 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:15.409 00:41:27 -- accel/accel.sh@41 -- # local IFS=, 00:07:15.409 00:41:27 -- accel/accel.sh@42 -- # jq -r . 00:07:15.409 [2024-12-03 00:41:27.640289] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:07:15.409 [2024-12-03 00:41:27.640375] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71432 ] 00:07:15.409 [2024-12-03 00:41:27.776813] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:15.409 [2024-12-03 00:41:27.838234] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:16.786 00:41:29 -- accel/accel.sh@18 -- # out='Preparing input file... 
00:07:16.786 00:07:16.786 SPDK Configuration: 00:07:16.786 Core mask: 0x1 00:07:16.786 00:07:16.786 Accel Perf Configuration: 00:07:16.786 Workload Type: decompress 00:07:16.786 Transfer size: 4096 bytes 00:07:16.786 Vector count 1 00:07:16.786 Module: software 00:07:16.786 File Name: /home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:16.786 Queue depth: 32 00:07:16.786 Allocate depth: 32 00:07:16.786 # threads/core: 2 00:07:16.786 Run time: 1 seconds 00:07:16.786 Verify: Yes 00:07:16.786 00:07:16.786 Running for 1 seconds... 00:07:16.786 00:07:16.786 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:16.786 ------------------------------------------------------------------------------------ 00:07:16.786 0,1 43328/s 79 MiB/s 0 0 00:07:16.786 0,0 43200/s 79 MiB/s 0 0 00:07:16.786 ==================================================================================== 00:07:16.786 Total 86528/s 338 MiB/s 0 0' 00:07:16.786 00:41:29 -- accel/accel.sh@20 -- # IFS=: 00:07:16.786 00:41:29 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:07:16.786 00:41:29 -- accel/accel.sh@20 -- # read -r var val 00:07:16.786 00:41:29 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:07:16.786 00:41:29 -- accel/accel.sh@12 -- # build_accel_config 00:07:16.786 00:41:29 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:16.786 00:41:29 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:16.786 00:41:29 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:16.786 00:41:29 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:16.786 00:41:29 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:16.786 00:41:29 -- accel/accel.sh@41 -- # local IFS=, 00:07:16.786 00:41:29 -- accel/accel.sh@42 -- # jq -r . 00:07:16.786 [2024-12-03 00:41:29.152782] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:07:16.786 [2024-12-03 00:41:29.152876] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71452 ] 00:07:16.786 [2024-12-03 00:41:29.289307] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:17.045 [2024-12-03 00:41:29.354897] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:17.045 00:41:29 -- accel/accel.sh@21 -- # val= 00:07:17.045 00:41:29 -- accel/accel.sh@22 -- # case "$var" in 00:07:17.045 00:41:29 -- accel/accel.sh@20 -- # IFS=: 00:07:17.045 00:41:29 -- accel/accel.sh@20 -- # read -r var val 00:07:17.045 00:41:29 -- accel/accel.sh@21 -- # val= 00:07:17.045 00:41:29 -- accel/accel.sh@22 -- # case "$var" in 00:07:17.045 00:41:29 -- accel/accel.sh@20 -- # IFS=: 00:07:17.045 00:41:29 -- accel/accel.sh@20 -- # read -r var val 00:07:17.045 00:41:29 -- accel/accel.sh@21 -- # val= 00:07:17.045 00:41:29 -- accel/accel.sh@22 -- # case "$var" in 00:07:17.045 00:41:29 -- accel/accel.sh@20 -- # IFS=: 00:07:17.045 00:41:29 -- accel/accel.sh@20 -- # read -r var val 00:07:17.045 00:41:29 -- accel/accel.sh@21 -- # val=0x1 00:07:17.045 00:41:29 -- accel/accel.sh@22 -- # case "$var" in 00:07:17.045 00:41:29 -- accel/accel.sh@20 -- # IFS=: 00:07:17.045 00:41:29 -- accel/accel.sh@20 -- # read -r var val 00:07:17.045 00:41:29 -- accel/accel.sh@21 -- # val= 00:07:17.045 00:41:29 -- accel/accel.sh@22 -- # case "$var" in 00:07:17.045 00:41:29 -- accel/accel.sh@20 -- # IFS=: 00:07:17.045 00:41:29 -- accel/accel.sh@20 -- # read -r var val 00:07:17.045 00:41:29 -- accel/accel.sh@21 -- # val= 00:07:17.045 00:41:29 -- accel/accel.sh@22 -- # case "$var" in 00:07:17.045 00:41:29 -- accel/accel.sh@20 -- # IFS=: 00:07:17.045 00:41:29 -- accel/accel.sh@20 -- # read -r var val 00:07:17.045 00:41:29 -- accel/accel.sh@21 -- # val=decompress 00:07:17.045 00:41:29 -- accel/accel.sh@22 -- # case "$var" in 00:07:17.045 00:41:29 -- accel/accel.sh@24 -- # accel_opc=decompress 00:07:17.045 00:41:29 -- accel/accel.sh@20 -- # IFS=: 00:07:17.045 00:41:29 -- accel/accel.sh@20 -- # read -r var val 00:07:17.045 00:41:29 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:17.045 00:41:29 -- accel/accel.sh@22 -- # case "$var" in 00:07:17.045 00:41:29 -- accel/accel.sh@20 -- # IFS=: 00:07:17.045 00:41:29 -- accel/accel.sh@20 -- # read -r var val 00:07:17.045 00:41:29 -- accel/accel.sh@21 -- # val= 00:07:17.045 00:41:29 -- accel/accel.sh@22 -- # case "$var" in 00:07:17.045 00:41:29 -- accel/accel.sh@20 -- # IFS=: 00:07:17.045 00:41:29 -- accel/accel.sh@20 -- # read -r var val 00:07:17.046 00:41:29 -- accel/accel.sh@21 -- # val=software 00:07:17.046 00:41:29 -- accel/accel.sh@22 -- # case "$var" in 00:07:17.046 00:41:29 -- accel/accel.sh@23 -- # accel_module=software 00:07:17.046 00:41:29 -- accel/accel.sh@20 -- # IFS=: 00:07:17.046 00:41:29 -- accel/accel.sh@20 -- # read -r var val 00:07:17.046 00:41:29 -- accel/accel.sh@21 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:17.046 00:41:29 -- accel/accel.sh@22 -- # case "$var" in 00:07:17.046 00:41:29 -- accel/accel.sh@20 -- # IFS=: 00:07:17.046 00:41:29 -- accel/accel.sh@20 -- # read -r var val 00:07:17.046 00:41:29 -- accel/accel.sh@21 -- # val=32 00:07:17.046 00:41:29 -- accel/accel.sh@22 -- # case "$var" in 00:07:17.046 00:41:29 -- accel/accel.sh@20 -- # IFS=: 00:07:17.046 00:41:29 -- accel/accel.sh@20 -- # read -r var val 00:07:17.046 00:41:29 -- 
accel/accel.sh@21 -- # val=32 00:07:17.046 00:41:29 -- accel/accel.sh@22 -- # case "$var" in 00:07:17.046 00:41:29 -- accel/accel.sh@20 -- # IFS=: 00:07:17.046 00:41:29 -- accel/accel.sh@20 -- # read -r var val 00:07:17.046 00:41:29 -- accel/accel.sh@21 -- # val=2 00:07:17.046 00:41:29 -- accel/accel.sh@22 -- # case "$var" in 00:07:17.046 00:41:29 -- accel/accel.sh@20 -- # IFS=: 00:07:17.046 00:41:29 -- accel/accel.sh@20 -- # read -r var val 00:07:17.046 00:41:29 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:17.046 00:41:29 -- accel/accel.sh@22 -- # case "$var" in 00:07:17.046 00:41:29 -- accel/accel.sh@20 -- # IFS=: 00:07:17.046 00:41:29 -- accel/accel.sh@20 -- # read -r var val 00:07:17.046 00:41:29 -- accel/accel.sh@21 -- # val=Yes 00:07:17.046 00:41:29 -- accel/accel.sh@22 -- # case "$var" in 00:07:17.046 00:41:29 -- accel/accel.sh@20 -- # IFS=: 00:07:17.046 00:41:29 -- accel/accel.sh@20 -- # read -r var val 00:07:17.046 00:41:29 -- accel/accel.sh@21 -- # val= 00:07:17.046 00:41:29 -- accel/accel.sh@22 -- # case "$var" in 00:07:17.046 00:41:29 -- accel/accel.sh@20 -- # IFS=: 00:07:17.046 00:41:29 -- accel/accel.sh@20 -- # read -r var val 00:07:17.046 00:41:29 -- accel/accel.sh@21 -- # val= 00:07:17.046 00:41:29 -- accel/accel.sh@22 -- # case "$var" in 00:07:17.046 00:41:29 -- accel/accel.sh@20 -- # IFS=: 00:07:17.046 00:41:29 -- accel/accel.sh@20 -- # read -r var val 00:07:18.422 00:41:30 -- accel/accel.sh@21 -- # val= 00:07:18.422 00:41:30 -- accel/accel.sh@22 -- # case "$var" in 00:07:18.422 00:41:30 -- accel/accel.sh@20 -- # IFS=: 00:07:18.422 00:41:30 -- accel/accel.sh@20 -- # read -r var val 00:07:18.422 00:41:30 -- accel/accel.sh@21 -- # val= 00:07:18.423 00:41:30 -- accel/accel.sh@22 -- # case "$var" in 00:07:18.423 00:41:30 -- accel/accel.sh@20 -- # IFS=: 00:07:18.423 00:41:30 -- accel/accel.sh@20 -- # read -r var val 00:07:18.423 00:41:30 -- accel/accel.sh@21 -- # val= 00:07:18.423 00:41:30 -- accel/accel.sh@22 -- # case "$var" in 00:07:18.423 00:41:30 -- accel/accel.sh@20 -- # IFS=: 00:07:18.423 00:41:30 -- accel/accel.sh@20 -- # read -r var val 00:07:18.423 00:41:30 -- accel/accel.sh@21 -- # val= 00:07:18.423 00:41:30 -- accel/accel.sh@22 -- # case "$var" in 00:07:18.423 00:41:30 -- accel/accel.sh@20 -- # IFS=: 00:07:18.423 00:41:30 -- accel/accel.sh@20 -- # read -r var val 00:07:18.423 00:41:30 -- accel/accel.sh@21 -- # val= 00:07:18.423 00:41:30 -- accel/accel.sh@22 -- # case "$var" in 00:07:18.423 00:41:30 -- accel/accel.sh@20 -- # IFS=: 00:07:18.423 00:41:30 -- accel/accel.sh@20 -- # read -r var val 00:07:18.423 00:41:30 -- accel/accel.sh@21 -- # val= 00:07:18.423 00:41:30 -- accel/accel.sh@22 -- # case "$var" in 00:07:18.423 00:41:30 -- accel/accel.sh@20 -- # IFS=: 00:07:18.423 00:41:30 -- accel/accel.sh@20 -- # read -r var val 00:07:18.423 00:41:30 -- accel/accel.sh@21 -- # val= 00:07:18.423 00:41:30 -- accel/accel.sh@22 -- # case "$var" in 00:07:18.423 00:41:30 -- accel/accel.sh@20 -- # IFS=: 00:07:18.423 00:41:30 -- accel/accel.sh@20 -- # read -r var val 00:07:18.423 00:41:30 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:18.423 00:41:30 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:07:18.423 00:41:30 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:18.423 ************************************ 00:07:18.423 END TEST accel_decomp_mthread 00:07:18.423 ************************************ 00:07:18.423 00:07:18.423 real 0m2.985s 00:07:18.423 user 0m2.521s 00:07:18.423 sys 0m0.258s 00:07:18.423 00:41:30 -- common/autotest_common.sh@1115 -- # 
xtrace_disable 00:07:18.423 00:41:30 -- common/autotest_common.sh@10 -- # set +x 00:07:18.423 00:41:30 -- accel/accel.sh@114 -- # run_test accel_deomp_full_mthread accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:07:18.423 00:41:30 -- common/autotest_common.sh@1087 -- # '[' 13 -le 1 ']' 00:07:18.423 00:41:30 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:18.423 00:41:30 -- common/autotest_common.sh@10 -- # set +x 00:07:18.423 ************************************ 00:07:18.423 START TEST accel_deomp_full_mthread 00:07:18.423 ************************************ 00:07:18.423 00:41:30 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:07:18.423 00:41:30 -- accel/accel.sh@16 -- # local accel_opc 00:07:18.423 00:41:30 -- accel/accel.sh@17 -- # local accel_module 00:07:18.423 00:41:30 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:07:18.423 00:41:30 -- accel/accel.sh@12 -- # build_accel_config 00:07:18.423 00:41:30 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:18.423 00:41:30 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:07:18.423 00:41:30 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:18.423 00:41:30 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:18.423 00:41:30 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:18.423 00:41:30 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:18.423 00:41:30 -- accel/accel.sh@41 -- # local IFS=, 00:07:18.423 00:41:30 -- accel/accel.sh@42 -- # jq -r . 00:07:18.423 [2024-12-03 00:41:30.686756] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:07:18.423 [2024-12-03 00:41:30.686895] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71486 ] 00:07:18.423 [2024-12-03 00:41:30.828577] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:18.423 [2024-12-03 00:41:30.899303] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:19.799 00:41:32 -- accel/accel.sh@18 -- # out='Preparing input file... 00:07:19.799 00:07:19.799 SPDK Configuration: 00:07:19.799 Core mask: 0x1 00:07:19.799 00:07:19.799 Accel Perf Configuration: 00:07:19.799 Workload Type: decompress 00:07:19.799 Transfer size: 111250 bytes 00:07:19.799 Vector count 1 00:07:19.799 Module: software 00:07:19.799 File Name: /home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:19.799 Queue depth: 32 00:07:19.799 Allocate depth: 32 00:07:19.799 # threads/core: 2 00:07:19.799 Run time: 1 seconds 00:07:19.799 Verify: Yes 00:07:19.799 00:07:19.799 Running for 1 seconds... 
00:07:19.799 00:07:19.799 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:19.799 ------------------------------------------------------------------------------------ 00:07:19.799 0,1 2848/s 117 MiB/s 0 0 00:07:19.799 0,0 2816/s 116 MiB/s 0 0 00:07:19.799 ==================================================================================== 00:07:19.799 Total 5664/s 600 MiB/s 0 0' 00:07:19.799 00:41:32 -- accel/accel.sh@20 -- # IFS=: 00:07:19.799 00:41:32 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:07:19.799 00:41:32 -- accel/accel.sh@20 -- # read -r var val 00:07:19.799 00:41:32 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:07:19.799 00:41:32 -- accel/accel.sh@12 -- # build_accel_config 00:07:19.799 00:41:32 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:19.799 00:41:32 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:19.799 00:41:32 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:19.799 00:41:32 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:19.799 00:41:32 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:19.799 00:41:32 -- accel/accel.sh@41 -- # local IFS=, 00:07:19.799 00:41:32 -- accel/accel.sh@42 -- # jq -r . 00:07:19.799 [2024-12-03 00:41:32.147941] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:07:19.799 [2024-12-03 00:41:32.148018] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71506 ] 00:07:19.799 [2024-12-03 00:41:32.275682] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:20.059 [2024-12-03 00:41:32.328351] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:20.059 00:41:32 -- accel/accel.sh@21 -- # val= 00:07:20.059 00:41:32 -- accel/accel.sh@22 -- # case "$var" in 00:07:20.059 00:41:32 -- accel/accel.sh@20 -- # IFS=: 00:07:20.059 00:41:32 -- accel/accel.sh@20 -- # read -r var val 00:07:20.059 00:41:32 -- accel/accel.sh@21 -- # val= 00:07:20.059 00:41:32 -- accel/accel.sh@22 -- # case "$var" in 00:07:20.059 00:41:32 -- accel/accel.sh@20 -- # IFS=: 00:07:20.059 00:41:32 -- accel/accel.sh@20 -- # read -r var val 00:07:20.059 00:41:32 -- accel/accel.sh@21 -- # val= 00:07:20.059 00:41:32 -- accel/accel.sh@22 -- # case "$var" in 00:07:20.059 00:41:32 -- accel/accel.sh@20 -- # IFS=: 00:07:20.059 00:41:32 -- accel/accel.sh@20 -- # read -r var val 00:07:20.059 00:41:32 -- accel/accel.sh@21 -- # val=0x1 00:07:20.059 00:41:32 -- accel/accel.sh@22 -- # case "$var" in 00:07:20.059 00:41:32 -- accel/accel.sh@20 -- # IFS=: 00:07:20.059 00:41:32 -- accel/accel.sh@20 -- # read -r var val 00:07:20.059 00:41:32 -- accel/accel.sh@21 -- # val= 00:07:20.059 00:41:32 -- accel/accel.sh@22 -- # case "$var" in 00:07:20.059 00:41:32 -- accel/accel.sh@20 -- # IFS=: 00:07:20.059 00:41:32 -- accel/accel.sh@20 -- # read -r var val 00:07:20.059 00:41:32 -- accel/accel.sh@21 -- # val= 00:07:20.059 00:41:32 -- accel/accel.sh@22 -- # case "$var" in 00:07:20.059 00:41:32 -- accel/accel.sh@20 -- # IFS=: 00:07:20.059 00:41:32 -- accel/accel.sh@20 -- # read -r var val 00:07:20.059 00:41:32 -- accel/accel.sh@21 -- # val=decompress 00:07:20.059 00:41:32 -- accel/accel.sh@22 -- # case "$var" in 00:07:20.059 00:41:32 -- accel/accel.sh@24 -- # 
accel_opc=decompress 00:07:20.059 00:41:32 -- accel/accel.sh@20 -- # IFS=: 00:07:20.059 00:41:32 -- accel/accel.sh@20 -- # read -r var val 00:07:20.059 00:41:32 -- accel/accel.sh@21 -- # val='111250 bytes' 00:07:20.059 00:41:32 -- accel/accel.sh@22 -- # case "$var" in 00:07:20.059 00:41:32 -- accel/accel.sh@20 -- # IFS=: 00:07:20.059 00:41:32 -- accel/accel.sh@20 -- # read -r var val 00:07:20.059 00:41:32 -- accel/accel.sh@21 -- # val= 00:07:20.059 00:41:32 -- accel/accel.sh@22 -- # case "$var" in 00:07:20.059 00:41:32 -- accel/accel.sh@20 -- # IFS=: 00:07:20.059 00:41:32 -- accel/accel.sh@20 -- # read -r var val 00:07:20.059 00:41:32 -- accel/accel.sh@21 -- # val=software 00:07:20.059 00:41:32 -- accel/accel.sh@22 -- # case "$var" in 00:07:20.059 00:41:32 -- accel/accel.sh@23 -- # accel_module=software 00:07:20.059 00:41:32 -- accel/accel.sh@20 -- # IFS=: 00:07:20.059 00:41:32 -- accel/accel.sh@20 -- # read -r var val 00:07:20.059 00:41:32 -- accel/accel.sh@21 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:20.059 00:41:32 -- accel/accel.sh@22 -- # case "$var" in 00:07:20.059 00:41:32 -- accel/accel.sh@20 -- # IFS=: 00:07:20.059 00:41:32 -- accel/accel.sh@20 -- # read -r var val 00:07:20.059 00:41:32 -- accel/accel.sh@21 -- # val=32 00:07:20.059 00:41:32 -- accel/accel.sh@22 -- # case "$var" in 00:07:20.059 00:41:32 -- accel/accel.sh@20 -- # IFS=: 00:07:20.059 00:41:32 -- accel/accel.sh@20 -- # read -r var val 00:07:20.059 00:41:32 -- accel/accel.sh@21 -- # val=32 00:07:20.059 00:41:32 -- accel/accel.sh@22 -- # case "$var" in 00:07:20.059 00:41:32 -- accel/accel.sh@20 -- # IFS=: 00:07:20.059 00:41:32 -- accel/accel.sh@20 -- # read -r var val 00:07:20.059 00:41:32 -- accel/accel.sh@21 -- # val=2 00:07:20.059 00:41:32 -- accel/accel.sh@22 -- # case "$var" in 00:07:20.059 00:41:32 -- accel/accel.sh@20 -- # IFS=: 00:07:20.059 00:41:32 -- accel/accel.sh@20 -- # read -r var val 00:07:20.059 00:41:32 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:20.059 00:41:32 -- accel/accel.sh@22 -- # case "$var" in 00:07:20.059 00:41:32 -- accel/accel.sh@20 -- # IFS=: 00:07:20.059 00:41:32 -- accel/accel.sh@20 -- # read -r var val 00:07:20.059 00:41:32 -- accel/accel.sh@21 -- # val=Yes 00:07:20.059 00:41:32 -- accel/accel.sh@22 -- # case "$var" in 00:07:20.059 00:41:32 -- accel/accel.sh@20 -- # IFS=: 00:07:20.059 00:41:32 -- accel/accel.sh@20 -- # read -r var val 00:07:20.059 00:41:32 -- accel/accel.sh@21 -- # val= 00:07:20.059 00:41:32 -- accel/accel.sh@22 -- # case "$var" in 00:07:20.059 00:41:32 -- accel/accel.sh@20 -- # IFS=: 00:07:20.059 00:41:32 -- accel/accel.sh@20 -- # read -r var val 00:07:20.059 00:41:32 -- accel/accel.sh@21 -- # val= 00:07:20.059 00:41:32 -- accel/accel.sh@22 -- # case "$var" in 00:07:20.059 00:41:32 -- accel/accel.sh@20 -- # IFS=: 00:07:20.059 00:41:32 -- accel/accel.sh@20 -- # read -r var val 00:07:21.436 00:41:33 -- accel/accel.sh@21 -- # val= 00:07:21.436 00:41:33 -- accel/accel.sh@22 -- # case "$var" in 00:07:21.436 00:41:33 -- accel/accel.sh@20 -- # IFS=: 00:07:21.436 00:41:33 -- accel/accel.sh@20 -- # read -r var val 00:07:21.436 00:41:33 -- accel/accel.sh@21 -- # val= 00:07:21.436 00:41:33 -- accel/accel.sh@22 -- # case "$var" in 00:07:21.436 00:41:33 -- accel/accel.sh@20 -- # IFS=: 00:07:21.436 00:41:33 -- accel/accel.sh@20 -- # read -r var val 00:07:21.436 00:41:33 -- accel/accel.sh@21 -- # val= 00:07:21.436 00:41:33 -- accel/accel.sh@22 -- # case "$var" in 00:07:21.436 00:41:33 -- accel/accel.sh@20 -- # IFS=: 00:07:21.436 00:41:33 -- accel/accel.sh@20 -- # 
read -r var val 00:07:21.436 00:41:33 -- accel/accel.sh@21 -- # val= 00:07:21.436 00:41:33 -- accel/accel.sh@22 -- # case "$var" in 00:07:21.436 00:41:33 -- accel/accel.sh@20 -- # IFS=: 00:07:21.436 00:41:33 -- accel/accel.sh@20 -- # read -r var val 00:07:21.436 00:41:33 -- accel/accel.sh@21 -- # val= 00:07:21.436 00:41:33 -- accel/accel.sh@22 -- # case "$var" in 00:07:21.436 00:41:33 -- accel/accel.sh@20 -- # IFS=: 00:07:21.436 00:41:33 -- accel/accel.sh@20 -- # read -r var val 00:07:21.436 00:41:33 -- accel/accel.sh@21 -- # val= 00:07:21.436 00:41:33 -- accel/accel.sh@22 -- # case "$var" in 00:07:21.436 00:41:33 -- accel/accel.sh@20 -- # IFS=: 00:07:21.436 00:41:33 -- accel/accel.sh@20 -- # read -r var val 00:07:21.436 00:41:33 -- accel/accel.sh@21 -- # val= 00:07:21.436 00:41:33 -- accel/accel.sh@22 -- # case "$var" in 00:07:21.436 00:41:33 -- accel/accel.sh@20 -- # IFS=: 00:07:21.436 00:41:33 -- accel/accel.sh@20 -- # read -r var val 00:07:21.436 00:41:33 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:21.436 00:41:33 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:07:21.436 00:41:33 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:21.436 00:07:21.436 real 0m2.883s 00:07:21.436 user 0m2.449s 00:07:21.436 sys 0m0.234s 00:07:21.436 00:41:33 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:21.436 ************************************ 00:07:21.436 END TEST accel_deomp_full_mthread 00:07:21.436 ************************************ 00:07:21.436 00:41:33 -- common/autotest_common.sh@10 -- # set +x 00:07:21.436 00:41:33 -- accel/accel.sh@116 -- # [[ n == y ]] 00:07:21.436 00:41:33 -- accel/accel.sh@129 -- # run_test accel_dif_functional_tests /home/vagrant/spdk_repo/spdk/test/accel/dif/dif -c /dev/fd/62 00:07:21.436 00:41:33 -- accel/accel.sh@129 -- # build_accel_config 00:07:21.436 00:41:33 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:07:21.436 00:41:33 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:21.436 00:41:33 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:21.436 00:41:33 -- common/autotest_common.sh@10 -- # set +x 00:07:21.436 00:41:33 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:21.436 00:41:33 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:21.436 00:41:33 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:21.436 00:41:33 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:21.436 00:41:33 -- accel/accel.sh@41 -- # local IFS=, 00:07:21.436 00:41:33 -- accel/accel.sh@42 -- # jq -r . 00:07:21.436 ************************************ 00:07:21.436 START TEST accel_dif_functional_tests 00:07:21.436 ************************************ 00:07:21.436 00:41:33 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/accel/dif/dif -c /dev/fd/62 00:07:21.436 [2024-12-03 00:41:33.637449] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:07:21.436 [2024-12-03 00:41:33.637539] [ DPDK EAL parameters: DIF --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71541 ] 00:07:21.436 [2024-12-03 00:41:33.768572] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:21.436 [2024-12-03 00:41:33.822674] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:21.436 [2024-12-03 00:41:33.822823] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:21.436 [2024-12-03 00:41:33.822826] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:21.436 00:07:21.436 00:07:21.436 CUnit - A unit testing framework for C - Version 2.1-3 00:07:21.436 http://cunit.sourceforge.net/ 00:07:21.436 00:07:21.436 00:07:21.436 Suite: accel_dif 00:07:21.436 Test: verify: DIF generated, GUARD check ...passed 00:07:21.436 Test: verify: DIF generated, APPTAG check ...passed 00:07:21.436 Test: verify: DIF generated, REFTAG check ...passed 00:07:21.436 Test: verify: DIF not generated, GUARD check ...[2024-12-03 00:41:33.910576] dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:07:21.436 passed 00:07:21.436 Test: verify: DIF not generated, APPTAG check ...[2024-12-03 00:41:33.910672] dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:07:21.436 passed 00:07:21.436 Test: verify: DIF not generated, REFTAG check ...[2024-12-03 00:41:33.910713] dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:07:21.436 [2024-12-03 00:41:33.910918] dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:07:21.436 [2024-12-03 00:41:33.910956] dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:07:21.436 passed 00:07:21.436 Test: verify: APPTAG correct, APPTAG check ...passed 00:07:21.436 Test: verify: APPTAG incorrect, APPTAG check ...passed 00:07:21.436 Test: verify: APPTAG incorrect, no APPTAG check ...passed 00:07:21.436 Test: verify: REFTAG incorrect, REFTAG ignore ...passed 00:07:21.436 Test: verify: REFTAG_INIT correct, REFTAG check ...[2024-12-03 00:41:33.910982] dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:07:21.436 [2024-12-03 00:41:33.911041] dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=30, Expected=28, Actual=14 00:07:21.436 passed 00:07:21.436 Test: verify: REFTAG_INIT incorrect, REFTAG check ...passed 00:07:21.436 Test: generate copy: DIF generated, GUARD check ...passed 00:07:21.436 Test: generate copy: DIF generated, APTTAG check ...[2024-12-03 00:41:33.911288] dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=10 00:07:21.436 passed 00:07:21.436 Test: generate copy: DIF generated, REFTAG check ...passed 00:07:21.437 Test: generate copy: DIF generated, no GUARD check flag set ...passed 00:07:21.437 Test: generate copy: DIF generated, no APPTAG check flag set ...passed 00:07:21.437 Test: generate copy: DIF generated, no REFTAG check flag set ...passed 00:07:21.437 Test: generate copy: iovecs-len validate ...passed 00:07:21.437 Test: generate copy: buffer alignment validate ...passed 00:07:21.437 00:07:21.437 Run Summary: Type Total Ran Passed Failed Inactive 00:07:21.437 suites 1 1 n/a 0 0 00:07:21.437 tests 20 20 20 0 0 00:07:21.437 
asserts 204 204 204 0 n/a 00:07:21.437 00:07:21.437 Elapsed time = 0.004 seconds 00:07:21.437 [2024-12-03 00:41:33.911782] dif.c:1167:spdk_dif_generate_copy: *ERROR*: Size of bounce_iovs arrays are not valid or misaligned with block_size. 00:07:21.695 ************************************ 00:07:21.695 END TEST accel_dif_functional_tests 00:07:21.695 ************************************ 00:07:21.695 00:07:21.695 real 0m0.512s 00:07:21.695 user 0m0.696s 00:07:21.695 sys 0m0.158s 00:07:21.695 00:41:34 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:21.695 00:41:34 -- common/autotest_common.sh@10 -- # set +x 00:07:21.695 ************************************ 00:07:21.695 END TEST accel 00:07:21.695 ************************************ 00:07:21.695 00:07:21.695 real 1m1.158s 00:07:21.695 user 1m5.671s 00:07:21.695 sys 0m6.169s 00:07:21.695 00:41:34 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:21.695 00:41:34 -- common/autotest_common.sh@10 -- # set +x 00:07:21.695 00:41:34 -- spdk/autotest.sh@177 -- # run_test accel_rpc /home/vagrant/spdk_repo/spdk/test/accel/accel_rpc.sh 00:07:21.695 00:41:34 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:07:21.695 00:41:34 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:21.695 00:41:34 -- common/autotest_common.sh@10 -- # set +x 00:07:21.695 ************************************ 00:07:21.695 START TEST accel_rpc 00:07:21.695 ************************************ 00:07:21.695 00:41:34 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/accel/accel_rpc.sh 00:07:21.954 * Looking for test storage... 00:07:21.954 * Found test storage at /home/vagrant/spdk_repo/spdk/test/accel 00:07:21.954 00:41:34 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:07:21.954 00:41:34 -- common/autotest_common.sh@1690 -- # lcov --version 00:07:21.954 00:41:34 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:07:21.954 00:41:34 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:07:21.954 00:41:34 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:07:21.954 00:41:34 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:07:21.954 00:41:34 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:07:21.954 00:41:34 -- scripts/common.sh@335 -- # IFS=.-: 00:07:21.954 00:41:34 -- scripts/common.sh@335 -- # read -ra ver1 00:07:21.954 00:41:34 -- scripts/common.sh@336 -- # IFS=.-: 00:07:21.954 00:41:34 -- scripts/common.sh@336 -- # read -ra ver2 00:07:21.954 00:41:34 -- scripts/common.sh@337 -- # local 'op=<' 00:07:21.954 00:41:34 -- scripts/common.sh@339 -- # ver1_l=2 00:07:21.954 00:41:34 -- scripts/common.sh@340 -- # ver2_l=1 00:07:21.954 00:41:34 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:07:21.954 00:41:34 -- scripts/common.sh@343 -- # case "$op" in 00:07:21.954 00:41:34 -- scripts/common.sh@344 -- # : 1 00:07:21.954 00:41:34 -- scripts/common.sh@363 -- # (( v = 0 )) 00:07:21.954 00:41:34 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:21.954 00:41:34 -- scripts/common.sh@364 -- # decimal 1 00:07:21.954 00:41:34 -- scripts/common.sh@352 -- # local d=1 00:07:21.954 00:41:34 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:21.954 00:41:34 -- scripts/common.sh@354 -- # echo 1 00:07:21.954 00:41:34 -- scripts/common.sh@364 -- # ver1[v]=1 00:07:21.954 00:41:34 -- scripts/common.sh@365 -- # decimal 2 00:07:21.954 00:41:34 -- scripts/common.sh@352 -- # local d=2 00:07:21.954 00:41:34 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:21.954 00:41:34 -- scripts/common.sh@354 -- # echo 2 00:07:21.954 00:41:34 -- scripts/common.sh@365 -- # ver2[v]=2 00:07:21.954 00:41:34 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:07:21.954 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:21.954 00:41:34 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:07:21.954 00:41:34 -- scripts/common.sh@367 -- # return 0 00:07:21.954 00:41:34 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:21.954 00:41:34 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:07:21.954 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:21.954 --rc genhtml_branch_coverage=1 00:07:21.954 --rc genhtml_function_coverage=1 00:07:21.954 --rc genhtml_legend=1 00:07:21.954 --rc geninfo_all_blocks=1 00:07:21.954 --rc geninfo_unexecuted_blocks=1 00:07:21.954 00:07:21.954 ' 00:07:21.954 00:41:34 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:07:21.954 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:21.954 --rc genhtml_branch_coverage=1 00:07:21.954 --rc genhtml_function_coverage=1 00:07:21.954 --rc genhtml_legend=1 00:07:21.954 --rc geninfo_all_blocks=1 00:07:21.954 --rc geninfo_unexecuted_blocks=1 00:07:21.954 00:07:21.954 ' 00:07:21.955 00:41:34 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:07:21.955 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:21.955 --rc genhtml_branch_coverage=1 00:07:21.955 --rc genhtml_function_coverage=1 00:07:21.955 --rc genhtml_legend=1 00:07:21.955 --rc geninfo_all_blocks=1 00:07:21.955 --rc geninfo_unexecuted_blocks=1 00:07:21.955 00:07:21.955 ' 00:07:21.955 00:41:34 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:07:21.955 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:21.955 --rc genhtml_branch_coverage=1 00:07:21.955 --rc genhtml_function_coverage=1 00:07:21.955 --rc genhtml_legend=1 00:07:21.955 --rc geninfo_all_blocks=1 00:07:21.955 --rc geninfo_unexecuted_blocks=1 00:07:21.955 00:07:21.955 ' 00:07:21.955 00:41:34 -- accel/accel_rpc.sh@11 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:07:21.955 00:41:34 -- accel/accel_rpc.sh@14 -- # spdk_tgt_pid=71613 00:07:21.955 00:41:34 -- accel/accel_rpc.sh@15 -- # waitforlisten 71613 00:07:21.955 00:41:34 -- common/autotest_common.sh@829 -- # '[' -z 71613 ']' 00:07:21.955 00:41:34 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:21.955 00:41:34 -- accel/accel_rpc.sh@13 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --wait-for-rpc 00:07:21.955 00:41:34 -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:21.955 00:41:34 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:07:21.955 00:41:34 -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:21.955 00:41:34 -- common/autotest_common.sh@10 -- # set +x 00:07:21.955 [2024-12-03 00:41:34.466462] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:07:21.955 [2024-12-03 00:41:34.466746] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71613 ] 00:07:22.214 [2024-12-03 00:41:34.603641] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:22.214 [2024-12-03 00:41:34.667116] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:07:22.214 [2024-12-03 00:41:34.667562] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:22.214 00:41:34 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:22.214 00:41:34 -- common/autotest_common.sh@862 -- # return 0 00:07:22.214 00:41:34 -- accel/accel_rpc.sh@45 -- # [[ y == y ]] 00:07:22.214 00:41:34 -- accel/accel_rpc.sh@45 -- # [[ 0 -gt 0 ]] 00:07:22.214 00:41:34 -- accel/accel_rpc.sh@49 -- # [[ y == y ]] 00:07:22.214 00:41:34 -- accel/accel_rpc.sh@49 -- # [[ 0 -gt 0 ]] 00:07:22.214 00:41:34 -- accel/accel_rpc.sh@53 -- # run_test accel_assign_opcode accel_assign_opcode_test_suite 00:07:22.214 00:41:34 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:07:22.214 00:41:34 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:22.214 00:41:34 -- common/autotest_common.sh@10 -- # set +x 00:07:22.214 ************************************ 00:07:22.214 START TEST accel_assign_opcode 00:07:22.214 ************************************ 00:07:22.214 00:41:34 -- common/autotest_common.sh@1114 -- # accel_assign_opcode_test_suite 00:07:22.214 00:41:34 -- accel/accel_rpc.sh@38 -- # rpc_cmd accel_assign_opc -o copy -m incorrect 00:07:22.214 00:41:34 -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:22.214 00:41:34 -- common/autotest_common.sh@10 -- # set +x 00:07:22.214 [2024-12-03 00:41:34.728164] accel_rpc.c: 168:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module incorrect 00:07:22.473 00:41:34 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:22.473 00:41:34 -- accel/accel_rpc.sh@40 -- # rpc_cmd accel_assign_opc -o copy -m software 00:07:22.473 00:41:34 -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:22.473 00:41:34 -- common/autotest_common.sh@10 -- # set +x 00:07:22.473 [2024-12-03 00:41:34.736150] accel_rpc.c: 168:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module software 00:07:22.473 00:41:34 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:22.473 00:41:34 -- accel/accel_rpc.sh@41 -- # rpc_cmd framework_start_init 00:07:22.473 00:41:34 -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:22.473 00:41:34 -- common/autotest_common.sh@10 -- # set +x 00:07:22.473 00:41:34 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:22.473 00:41:34 -- accel/accel_rpc.sh@42 -- # rpc_cmd accel_get_opc_assignments 00:07:22.473 00:41:34 -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:22.473 00:41:34 -- common/autotest_common.sh@10 -- # set +x 00:07:22.473 00:41:34 -- accel/accel_rpc.sh@42 -- # jq -r .copy 00:07:22.473 00:41:34 -- accel/accel_rpc.sh@42 -- # grep software 00:07:22.473 00:41:34 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:22.778 software 00:07:22.778 
************************************ 00:07:22.778 END TEST accel_assign_opcode 00:07:22.778 ************************************ 00:07:22.778 00:07:22.778 real 0m0.284s 00:07:22.778 user 0m0.055s 00:07:22.778 sys 0m0.012s 00:07:22.778 00:41:35 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:22.778 00:41:35 -- common/autotest_common.sh@10 -- # set +x 00:07:22.778 00:41:35 -- accel/accel_rpc.sh@55 -- # killprocess 71613 00:07:22.778 00:41:35 -- common/autotest_common.sh@936 -- # '[' -z 71613 ']' 00:07:22.778 00:41:35 -- common/autotest_common.sh@940 -- # kill -0 71613 00:07:22.778 00:41:35 -- common/autotest_common.sh@941 -- # uname 00:07:22.778 00:41:35 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:07:22.778 00:41:35 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 71613 00:07:22.778 killing process with pid 71613 00:07:22.778 00:41:35 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:07:22.778 00:41:35 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:07:22.778 00:41:35 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 71613' 00:07:22.778 00:41:35 -- common/autotest_common.sh@955 -- # kill 71613 00:07:22.778 00:41:35 -- common/autotest_common.sh@960 -- # wait 71613 00:07:23.062 00:07:23.062 real 0m1.223s 00:07:23.062 user 0m1.124s 00:07:23.062 sys 0m0.435s 00:07:23.062 00:41:35 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:23.062 00:41:35 -- common/autotest_common.sh@10 -- # set +x 00:07:23.062 ************************************ 00:07:23.062 END TEST accel_rpc 00:07:23.062 ************************************ 00:07:23.062 00:41:35 -- spdk/autotest.sh@178 -- # run_test app_cmdline /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:07:23.062 00:41:35 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:07:23.062 00:41:35 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:23.062 00:41:35 -- common/autotest_common.sh@10 -- # set +x 00:07:23.062 ************************************ 00:07:23.062 START TEST app_cmdline 00:07:23.062 ************************************ 00:07:23.062 00:41:35 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:07:23.062 * Looking for test storage... 
00:07:23.062 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:07:23.062 00:41:35 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:07:23.062 00:41:35 -- common/autotest_common.sh@1690 -- # lcov --version 00:07:23.062 00:41:35 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:07:23.321 00:41:35 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:07:23.321 00:41:35 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:07:23.321 00:41:35 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:07:23.321 00:41:35 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:07:23.321 00:41:35 -- scripts/common.sh@335 -- # IFS=.-: 00:07:23.321 00:41:35 -- scripts/common.sh@335 -- # read -ra ver1 00:07:23.321 00:41:35 -- scripts/common.sh@336 -- # IFS=.-: 00:07:23.321 00:41:35 -- scripts/common.sh@336 -- # read -ra ver2 00:07:23.321 00:41:35 -- scripts/common.sh@337 -- # local 'op=<' 00:07:23.321 00:41:35 -- scripts/common.sh@339 -- # ver1_l=2 00:07:23.321 00:41:35 -- scripts/common.sh@340 -- # ver2_l=1 00:07:23.321 00:41:35 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:07:23.321 00:41:35 -- scripts/common.sh@343 -- # case "$op" in 00:07:23.321 00:41:35 -- scripts/common.sh@344 -- # : 1 00:07:23.321 00:41:35 -- scripts/common.sh@363 -- # (( v = 0 )) 00:07:23.321 00:41:35 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:23.321 00:41:35 -- scripts/common.sh@364 -- # decimal 1 00:07:23.321 00:41:35 -- scripts/common.sh@352 -- # local d=1 00:07:23.321 00:41:35 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:23.321 00:41:35 -- scripts/common.sh@354 -- # echo 1 00:07:23.321 00:41:35 -- scripts/common.sh@364 -- # ver1[v]=1 00:07:23.321 00:41:35 -- scripts/common.sh@365 -- # decimal 2 00:07:23.321 00:41:35 -- scripts/common.sh@352 -- # local d=2 00:07:23.321 00:41:35 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:23.321 00:41:35 -- scripts/common.sh@354 -- # echo 2 00:07:23.321 00:41:35 -- scripts/common.sh@365 -- # ver2[v]=2 00:07:23.321 00:41:35 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:07:23.321 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:07:23.321 00:41:35 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:07:23.321 00:41:35 -- scripts/common.sh@367 -- # return 0 00:07:23.321 00:41:35 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:23.321 00:41:35 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:07:23.321 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:23.321 --rc genhtml_branch_coverage=1 00:07:23.321 --rc genhtml_function_coverage=1 00:07:23.321 --rc genhtml_legend=1 00:07:23.321 --rc geninfo_all_blocks=1 00:07:23.321 --rc geninfo_unexecuted_blocks=1 00:07:23.321 00:07:23.321 ' 00:07:23.321 00:41:35 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:07:23.321 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:23.321 --rc genhtml_branch_coverage=1 00:07:23.321 --rc genhtml_function_coverage=1 00:07:23.321 --rc genhtml_legend=1 00:07:23.321 --rc geninfo_all_blocks=1 00:07:23.321 --rc geninfo_unexecuted_blocks=1 00:07:23.321 00:07:23.321 ' 00:07:23.321 00:41:35 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:07:23.321 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:23.321 --rc genhtml_branch_coverage=1 00:07:23.321 --rc genhtml_function_coverage=1 00:07:23.321 --rc genhtml_legend=1 00:07:23.321 --rc geninfo_all_blocks=1 00:07:23.321 --rc geninfo_unexecuted_blocks=1 00:07:23.321 00:07:23.321 ' 00:07:23.321 00:41:35 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:07:23.321 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:23.321 --rc genhtml_branch_coverage=1 00:07:23.321 --rc genhtml_function_coverage=1 00:07:23.321 --rc genhtml_legend=1 00:07:23.321 --rc geninfo_all_blocks=1 00:07:23.321 --rc geninfo_unexecuted_blocks=1 00:07:23.321 00:07:23.321 ' 00:07:23.321 00:41:35 -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:07:23.321 00:41:35 -- app/cmdline.sh@17 -- # spdk_tgt_pid=71716 00:07:23.321 00:41:35 -- app/cmdline.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:07:23.321 00:41:35 -- app/cmdline.sh@18 -- # waitforlisten 71716 00:07:23.321 00:41:35 -- common/autotest_common.sh@829 -- # '[' -z 71716 ']' 00:07:23.321 00:41:35 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:23.321 00:41:35 -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:23.321 00:41:35 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:23.321 00:41:35 -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:23.321 00:41:35 -- common/autotest_common.sh@10 -- # set +x 00:07:23.321 [2024-12-03 00:41:35.702646] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:07:23.321 [2024-12-03 00:41:35.702751] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71716 ] 00:07:23.321 [2024-12-03 00:41:35.833655] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:23.579 [2024-12-03 00:41:35.893493] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:07:23.579 [2024-12-03 00:41:35.893666] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:24.516 00:41:36 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:24.516 00:41:36 -- common/autotest_common.sh@862 -- # return 0 00:07:24.516 00:41:36 -- app/cmdline.sh@20 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py spdk_get_version 00:07:24.516 { 00:07:24.516 "fields": { 00:07:24.516 "commit": "c13c99a5e", 00:07:24.516 "major": 24, 00:07:24.516 "minor": 1, 00:07:24.516 "patch": 1, 00:07:24.516 "suffix": "-pre" 00:07:24.516 }, 00:07:24.516 "version": "SPDK v24.01.1-pre git sha1 c13c99a5e" 00:07:24.516 } 00:07:24.516 00:41:36 -- app/cmdline.sh@22 -- # expected_methods=() 00:07:24.516 00:41:36 -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:07:24.516 00:41:36 -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:07:24.516 00:41:36 -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:07:24.516 00:41:36 -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:07:24.516 00:41:36 -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:24.516 00:41:36 -- app/cmdline.sh@26 -- # jq -r '.[]' 00:07:24.516 00:41:36 -- common/autotest_common.sh@10 -- # set +x 00:07:24.516 00:41:36 -- app/cmdline.sh@26 -- # sort 00:07:24.516 00:41:36 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:24.776 00:41:37 -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:07:24.776 00:41:37 -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:07:24.776 00:41:37 -- app/cmdline.sh@30 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:24.776 00:41:37 -- common/autotest_common.sh@650 -- # local es=0 00:07:24.776 00:41:37 -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:24.776 00:41:37 -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:24.776 00:41:37 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:24.776 00:41:37 -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:24.776 00:41:37 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:24.776 00:41:37 -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:24.776 00:41:37 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:24.776 00:41:37 -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:24.776 00:41:37 -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:07:24.776 00:41:37 -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:25.036 2024/12/03 00:41:37 error on JSON-RPC call, method: env_dpdk_get_mem_stats, params: map[], err: error received for 
env_dpdk_get_mem_stats method, err: Code=-32601 Msg=Method not found 00:07:25.036 request: 00:07:25.036 { 00:07:25.036 "method": "env_dpdk_get_mem_stats", 00:07:25.036 "params": {} 00:07:25.036 } 00:07:25.036 Got JSON-RPC error response 00:07:25.036 GoRPCClient: error on JSON-RPC call 00:07:25.036 00:41:37 -- common/autotest_common.sh@653 -- # es=1 00:07:25.036 00:41:37 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:25.036 00:41:37 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:07:25.036 00:41:37 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:25.036 00:41:37 -- app/cmdline.sh@1 -- # killprocess 71716 00:07:25.036 00:41:37 -- common/autotest_common.sh@936 -- # '[' -z 71716 ']' 00:07:25.036 00:41:37 -- common/autotest_common.sh@940 -- # kill -0 71716 00:07:25.036 00:41:37 -- common/autotest_common.sh@941 -- # uname 00:07:25.036 00:41:37 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:07:25.036 00:41:37 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 71716 00:07:25.036 00:41:37 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:07:25.036 00:41:37 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:07:25.036 killing process with pid 71716 00:07:25.036 00:41:37 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 71716' 00:07:25.036 00:41:37 -- common/autotest_common.sh@955 -- # kill 71716 00:07:25.036 00:41:37 -- common/autotest_common.sh@960 -- # wait 71716 00:07:25.296 00:07:25.296 real 0m2.228s 00:07:25.296 user 0m2.789s 00:07:25.296 sys 0m0.531s 00:07:25.296 00:41:37 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:25.296 00:41:37 -- common/autotest_common.sh@10 -- # set +x 00:07:25.296 ************************************ 00:07:25.296 END TEST app_cmdline 00:07:25.296 ************************************ 00:07:25.296 00:41:37 -- spdk/autotest.sh@179 -- # run_test version /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:07:25.296 00:41:37 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:07:25.296 00:41:37 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:25.296 00:41:37 -- common/autotest_common.sh@10 -- # set +x 00:07:25.296 ************************************ 00:07:25.296 START TEST version 00:07:25.296 ************************************ 00:07:25.296 00:41:37 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:07:25.556 * Looking for test storage... 
00:07:25.556 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:07:25.556 00:41:37 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:07:25.556 00:41:37 -- common/autotest_common.sh@1690 -- # lcov --version 00:07:25.556 00:41:37 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:07:25.556 00:41:37 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:07:25.556 00:41:37 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:07:25.556 00:41:37 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:07:25.556 00:41:37 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:07:25.556 00:41:37 -- scripts/common.sh@335 -- # IFS=.-: 00:07:25.556 00:41:37 -- scripts/common.sh@335 -- # read -ra ver1 00:07:25.556 00:41:37 -- scripts/common.sh@336 -- # IFS=.-: 00:07:25.556 00:41:37 -- scripts/common.sh@336 -- # read -ra ver2 00:07:25.556 00:41:37 -- scripts/common.sh@337 -- # local 'op=<' 00:07:25.556 00:41:37 -- scripts/common.sh@339 -- # ver1_l=2 00:07:25.556 00:41:37 -- scripts/common.sh@340 -- # ver2_l=1 00:07:25.556 00:41:37 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:07:25.556 00:41:37 -- scripts/common.sh@343 -- # case "$op" in 00:07:25.556 00:41:37 -- scripts/common.sh@344 -- # : 1 00:07:25.556 00:41:37 -- scripts/common.sh@363 -- # (( v = 0 )) 00:07:25.556 00:41:37 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:25.556 00:41:37 -- scripts/common.sh@364 -- # decimal 1 00:07:25.556 00:41:37 -- scripts/common.sh@352 -- # local d=1 00:07:25.556 00:41:37 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:25.556 00:41:37 -- scripts/common.sh@354 -- # echo 1 00:07:25.556 00:41:37 -- scripts/common.sh@364 -- # ver1[v]=1 00:07:25.556 00:41:37 -- scripts/common.sh@365 -- # decimal 2 00:07:25.556 00:41:37 -- scripts/common.sh@352 -- # local d=2 00:07:25.556 00:41:37 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:25.556 00:41:37 -- scripts/common.sh@354 -- # echo 2 00:07:25.556 00:41:37 -- scripts/common.sh@365 -- # ver2[v]=2 00:07:25.557 00:41:37 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:07:25.557 00:41:37 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:07:25.557 00:41:37 -- scripts/common.sh@367 -- # return 0 00:07:25.557 00:41:37 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:25.557 00:41:37 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:07:25.557 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:25.557 --rc genhtml_branch_coverage=1 00:07:25.557 --rc genhtml_function_coverage=1 00:07:25.557 --rc genhtml_legend=1 00:07:25.557 --rc geninfo_all_blocks=1 00:07:25.557 --rc geninfo_unexecuted_blocks=1 00:07:25.557 00:07:25.557 ' 00:07:25.557 00:41:37 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:07:25.557 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:25.557 --rc genhtml_branch_coverage=1 00:07:25.557 --rc genhtml_function_coverage=1 00:07:25.557 --rc genhtml_legend=1 00:07:25.557 --rc geninfo_all_blocks=1 00:07:25.557 --rc geninfo_unexecuted_blocks=1 00:07:25.557 00:07:25.557 ' 00:07:25.557 00:41:37 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:07:25.557 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:25.557 --rc genhtml_branch_coverage=1 00:07:25.557 --rc genhtml_function_coverage=1 00:07:25.557 --rc genhtml_legend=1 00:07:25.557 --rc geninfo_all_blocks=1 00:07:25.557 --rc geninfo_unexecuted_blocks=1 00:07:25.557 00:07:25.557 ' 00:07:25.557 00:41:37 -- 
common/autotest_common.sh@1704 -- # LCOV='lcov 00:07:25.557 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:25.557 --rc genhtml_branch_coverage=1 00:07:25.557 --rc genhtml_function_coverage=1 00:07:25.557 --rc genhtml_legend=1 00:07:25.557 --rc geninfo_all_blocks=1 00:07:25.557 --rc geninfo_unexecuted_blocks=1 00:07:25.557 00:07:25.557 ' 00:07:25.557 00:41:37 -- app/version.sh@17 -- # get_header_version major 00:07:25.557 00:41:37 -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:07:25.557 00:41:37 -- app/version.sh@14 -- # cut -f2 00:07:25.557 00:41:37 -- app/version.sh@14 -- # tr -d '"' 00:07:25.557 00:41:37 -- app/version.sh@17 -- # major=24 00:07:25.557 00:41:37 -- app/version.sh@18 -- # get_header_version minor 00:07:25.557 00:41:37 -- app/version.sh@14 -- # cut -f2 00:07:25.557 00:41:37 -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:07:25.557 00:41:37 -- app/version.sh@14 -- # tr -d '"' 00:07:25.557 00:41:37 -- app/version.sh@18 -- # minor=1 00:07:25.557 00:41:37 -- app/version.sh@19 -- # get_header_version patch 00:07:25.557 00:41:37 -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:07:25.557 00:41:37 -- app/version.sh@14 -- # cut -f2 00:07:25.557 00:41:37 -- app/version.sh@14 -- # tr -d '"' 00:07:25.557 00:41:37 -- app/version.sh@19 -- # patch=1 00:07:25.557 00:41:37 -- app/version.sh@20 -- # get_header_version suffix 00:07:25.557 00:41:37 -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:07:25.557 00:41:37 -- app/version.sh@14 -- # tr -d '"' 00:07:25.557 00:41:37 -- app/version.sh@14 -- # cut -f2 00:07:25.557 00:41:37 -- app/version.sh@20 -- # suffix=-pre 00:07:25.557 00:41:37 -- app/version.sh@22 -- # version=24.1 00:07:25.557 00:41:37 -- app/version.sh@25 -- # (( patch != 0 )) 00:07:25.557 00:41:37 -- app/version.sh@25 -- # version=24.1.1 00:07:25.557 00:41:37 -- app/version.sh@28 -- # version=24.1.1rc0 00:07:25.557 00:41:37 -- app/version.sh@30 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:07:25.557 00:41:37 -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:07:25.557 00:41:38 -- app/version.sh@30 -- # py_version=24.1.1rc0 00:07:25.557 00:41:38 -- app/version.sh@31 -- # [[ 24.1.1rc0 == \2\4\.\1\.\1\r\c\0 ]] 00:07:25.557 00:07:25.557 real 0m0.258s 00:07:25.557 user 0m0.172s 00:07:25.557 sys 0m0.129s 00:07:25.557 00:41:38 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:25.557 00:41:38 -- common/autotest_common.sh@10 -- # set +x 00:07:25.557 ************************************ 00:07:25.557 END TEST version 00:07:25.557 ************************************ 00:07:25.557 00:41:38 -- spdk/autotest.sh@181 -- # '[' 0 -eq 1 ']' 00:07:25.557 00:41:38 -- spdk/autotest.sh@191 -- # uname -s 00:07:25.817 00:41:38 -- spdk/autotest.sh@191 -- # [[ Linux == Linux ]] 00:07:25.817 00:41:38 -- spdk/autotest.sh@192 -- # [[ 0 -eq 1 ]] 00:07:25.817 00:41:38 -- spdk/autotest.sh@192 -- # [[ 0 -eq 1 ]] 00:07:25.817 00:41:38 -- spdk/autotest.sh@204 -- # '[' 0 -eq 1 ']' 00:07:25.817 00:41:38 -- spdk/autotest.sh@251 -- # '[' 0 -eq 1 ']' 00:07:25.817 00:41:38 
-- spdk/autotest.sh@255 -- # timing_exit lib 00:07:25.817 00:41:38 -- common/autotest_common.sh@728 -- # xtrace_disable 00:07:25.817 00:41:38 -- common/autotest_common.sh@10 -- # set +x 00:07:25.817 00:41:38 -- spdk/autotest.sh@257 -- # '[' 0 -eq 1 ']' 00:07:25.817 00:41:38 -- spdk/autotest.sh@265 -- # '[' 0 -eq 1 ']' 00:07:25.817 00:41:38 -- spdk/autotest.sh@274 -- # '[' 1 -eq 1 ']' 00:07:25.817 00:41:38 -- spdk/autotest.sh@275 -- # export NET_TYPE 00:07:25.817 00:41:38 -- spdk/autotest.sh@278 -- # '[' tcp = rdma ']' 00:07:25.817 00:41:38 -- spdk/autotest.sh@281 -- # '[' tcp = tcp ']' 00:07:25.817 00:41:38 -- spdk/autotest.sh@282 -- # run_test nvmf_tcp /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf.sh --transport=tcp 00:07:25.817 00:41:38 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:07:25.817 00:41:38 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:25.817 00:41:38 -- common/autotest_common.sh@10 -- # set +x 00:07:25.817 ************************************ 00:07:25.817 START TEST nvmf_tcp 00:07:25.817 ************************************ 00:07:25.817 00:41:38 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf.sh --transport=tcp 00:07:25.817 * Looking for test storage... 00:07:25.817 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf 00:07:25.817 00:41:38 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:07:25.817 00:41:38 -- common/autotest_common.sh@1690 -- # lcov --version 00:07:25.817 00:41:38 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:07:25.817 00:41:38 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:07:25.817 00:41:38 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:07:25.817 00:41:38 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:07:25.817 00:41:38 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:07:25.817 00:41:38 -- scripts/common.sh@335 -- # IFS=.-: 00:07:25.817 00:41:38 -- scripts/common.sh@335 -- # read -ra ver1 00:07:25.817 00:41:38 -- scripts/common.sh@336 -- # IFS=.-: 00:07:25.817 00:41:38 -- scripts/common.sh@336 -- # read -ra ver2 00:07:25.817 00:41:38 -- scripts/common.sh@337 -- # local 'op=<' 00:07:25.817 00:41:38 -- scripts/common.sh@339 -- # ver1_l=2 00:07:25.817 00:41:38 -- scripts/common.sh@340 -- # ver2_l=1 00:07:25.817 00:41:38 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:07:25.817 00:41:38 -- scripts/common.sh@343 -- # case "$op" in 00:07:25.817 00:41:38 -- scripts/common.sh@344 -- # : 1 00:07:25.817 00:41:38 -- scripts/common.sh@363 -- # (( v = 0 )) 00:07:25.817 00:41:38 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:25.817 00:41:38 -- scripts/common.sh@364 -- # decimal 1 00:07:25.817 00:41:38 -- scripts/common.sh@352 -- # local d=1 00:07:25.817 00:41:38 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:25.817 00:41:38 -- scripts/common.sh@354 -- # echo 1 00:07:25.817 00:41:38 -- scripts/common.sh@364 -- # ver1[v]=1 00:07:25.817 00:41:38 -- scripts/common.sh@365 -- # decimal 2 00:07:25.817 00:41:38 -- scripts/common.sh@352 -- # local d=2 00:07:25.817 00:41:38 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:25.817 00:41:38 -- scripts/common.sh@354 -- # echo 2 00:07:25.817 00:41:38 -- scripts/common.sh@365 -- # ver2[v]=2 00:07:25.817 00:41:38 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:07:25.817 00:41:38 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:07:25.817 00:41:38 -- scripts/common.sh@367 -- # return 0 00:07:25.817 00:41:38 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:25.817 00:41:38 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:07:25.817 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:25.817 --rc genhtml_branch_coverage=1 00:07:25.817 --rc genhtml_function_coverage=1 00:07:25.817 --rc genhtml_legend=1 00:07:25.817 --rc geninfo_all_blocks=1 00:07:25.817 --rc geninfo_unexecuted_blocks=1 00:07:25.817 00:07:25.817 ' 00:07:25.817 00:41:38 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:07:25.817 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:25.817 --rc genhtml_branch_coverage=1 00:07:25.817 --rc genhtml_function_coverage=1 00:07:25.817 --rc genhtml_legend=1 00:07:25.817 --rc geninfo_all_blocks=1 00:07:25.817 --rc geninfo_unexecuted_blocks=1 00:07:25.817 00:07:25.817 ' 00:07:25.817 00:41:38 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:07:25.817 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:25.817 --rc genhtml_branch_coverage=1 00:07:25.817 --rc genhtml_function_coverage=1 00:07:25.817 --rc genhtml_legend=1 00:07:25.817 --rc geninfo_all_blocks=1 00:07:25.817 --rc geninfo_unexecuted_blocks=1 00:07:25.817 00:07:25.817 ' 00:07:25.817 00:41:38 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:07:25.817 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:25.817 --rc genhtml_branch_coverage=1 00:07:25.817 --rc genhtml_function_coverage=1 00:07:25.817 --rc genhtml_legend=1 00:07:25.817 --rc geninfo_all_blocks=1 00:07:25.817 --rc geninfo_unexecuted_blocks=1 00:07:25.817 00:07:25.817 ' 00:07:25.817 00:41:38 -- nvmf/nvmf.sh@10 -- # uname -s 00:07:25.817 00:41:38 -- nvmf/nvmf.sh@10 -- # '[' '!' 
Linux = Linux ']' 00:07:25.818 00:41:38 -- nvmf/nvmf.sh@14 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:07:25.818 00:41:38 -- nvmf/common.sh@7 -- # uname -s 00:07:25.818 00:41:38 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:25.818 00:41:38 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:25.818 00:41:38 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:25.818 00:41:38 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:25.818 00:41:38 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:25.818 00:41:38 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:25.818 00:41:38 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:25.818 00:41:38 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:25.818 00:41:38 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:25.818 00:41:38 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:25.818 00:41:38 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:15939434-fa82-47b6-ae7c-b62ba203cee8 00:07:25.818 00:41:38 -- nvmf/common.sh@18 -- # NVME_HOSTID=15939434-fa82-47b6-ae7c-b62ba203cee8 00:07:25.818 00:41:38 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:25.818 00:41:38 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:25.818 00:41:38 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:07:25.818 00:41:38 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:07:25.818 00:41:38 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:25.818 00:41:38 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:25.818 00:41:38 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:25.818 00:41:38 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:25.818 00:41:38 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:25.818 00:41:38 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:25.818 00:41:38 -- paths/export.sh@5 -- # export PATH 00:07:25.818 00:41:38 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:25.818 00:41:38 -- nvmf/common.sh@46 -- # : 0 00:07:25.818 00:41:38 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:07:25.818 00:41:38 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:07:25.818 00:41:38 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:07:25.818 00:41:38 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:25.818 00:41:38 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:25.818 00:41:38 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:07:25.818 00:41:38 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:07:25.818 00:41:38 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:07:26.078 00:41:38 -- nvmf/nvmf.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:07:26.078 00:41:38 -- nvmf/nvmf.sh@18 -- # TEST_ARGS=("$@") 00:07:26.078 00:41:38 -- nvmf/nvmf.sh@20 -- # timing_enter target 00:07:26.078 00:41:38 -- common/autotest_common.sh@722 -- # xtrace_disable 00:07:26.078 00:41:38 -- common/autotest_common.sh@10 -- # set +x 00:07:26.078 00:41:38 -- nvmf/nvmf.sh@22 -- # [[ 0 -eq 0 ]] 00:07:26.078 00:41:38 -- nvmf/nvmf.sh@23 -- # run_test nvmf_example /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:07:26.078 00:41:38 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:07:26.078 00:41:38 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:26.078 00:41:38 -- common/autotest_common.sh@10 -- # set +x 00:07:26.078 ************************************ 00:07:26.078 START TEST nvmf_example 00:07:26.078 ************************************ 00:07:26.078 00:41:38 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:07:26.078 * Looking for test storage... 00:07:26.078 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:07:26.078 00:41:38 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:07:26.078 00:41:38 -- common/autotest_common.sh@1690 -- # lcov --version 00:07:26.078 00:41:38 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:07:26.078 00:41:38 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:07:26.078 00:41:38 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:07:26.078 00:41:38 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:07:26.078 00:41:38 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:07:26.078 00:41:38 -- scripts/common.sh@335 -- # IFS=.-: 00:07:26.078 00:41:38 -- scripts/common.sh@335 -- # read -ra ver1 00:07:26.078 00:41:38 -- scripts/common.sh@336 -- # IFS=.-: 00:07:26.078 00:41:38 -- scripts/common.sh@336 -- # read -ra ver2 00:07:26.078 00:41:38 -- scripts/common.sh@337 -- # local 'op=<' 00:07:26.078 00:41:38 -- scripts/common.sh@339 -- # ver1_l=2 00:07:26.078 00:41:38 -- scripts/common.sh@340 -- # ver2_l=1 00:07:26.078 00:41:38 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:07:26.078 00:41:38 -- scripts/common.sh@343 -- # case "$op" in 00:07:26.078 00:41:38 -- scripts/common.sh@344 -- # : 1 00:07:26.078 00:41:38 -- scripts/common.sh@363 -- # (( v = 0 )) 00:07:26.078 00:41:38 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:26.078 00:41:38 -- scripts/common.sh@364 -- # decimal 1 00:07:26.078 00:41:38 -- scripts/common.sh@352 -- # local d=1 00:07:26.078 00:41:38 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:26.078 00:41:38 -- scripts/common.sh@354 -- # echo 1 00:07:26.078 00:41:38 -- scripts/common.sh@364 -- # ver1[v]=1 00:07:26.078 00:41:38 -- scripts/common.sh@365 -- # decimal 2 00:07:26.078 00:41:38 -- scripts/common.sh@352 -- # local d=2 00:07:26.078 00:41:38 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:26.078 00:41:38 -- scripts/common.sh@354 -- # echo 2 00:07:26.078 00:41:38 -- scripts/common.sh@365 -- # ver2[v]=2 00:07:26.078 00:41:38 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:07:26.078 00:41:38 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:07:26.078 00:41:38 -- scripts/common.sh@367 -- # return 0 00:07:26.078 00:41:38 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:26.078 00:41:38 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:07:26.078 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:26.078 --rc genhtml_branch_coverage=1 00:07:26.078 --rc genhtml_function_coverage=1 00:07:26.078 --rc genhtml_legend=1 00:07:26.078 --rc geninfo_all_blocks=1 00:07:26.078 --rc geninfo_unexecuted_blocks=1 00:07:26.078 00:07:26.078 ' 00:07:26.078 00:41:38 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:07:26.078 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:26.078 --rc genhtml_branch_coverage=1 00:07:26.078 --rc genhtml_function_coverage=1 00:07:26.078 --rc genhtml_legend=1 00:07:26.078 --rc geninfo_all_blocks=1 00:07:26.078 --rc geninfo_unexecuted_blocks=1 00:07:26.078 00:07:26.078 ' 00:07:26.078 00:41:38 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:07:26.078 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:26.078 --rc genhtml_branch_coverage=1 00:07:26.078 --rc genhtml_function_coverage=1 00:07:26.078 --rc genhtml_legend=1 00:07:26.078 --rc geninfo_all_blocks=1 00:07:26.078 --rc geninfo_unexecuted_blocks=1 00:07:26.078 00:07:26.078 ' 00:07:26.078 00:41:38 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:07:26.078 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:26.078 --rc genhtml_branch_coverage=1 00:07:26.078 --rc genhtml_function_coverage=1 00:07:26.078 --rc genhtml_legend=1 00:07:26.078 --rc geninfo_all_blocks=1 00:07:26.078 --rc geninfo_unexecuted_blocks=1 00:07:26.078 00:07:26.078 ' 00:07:26.079 00:41:38 -- target/nvmf_example.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:07:26.079 00:41:38 -- nvmf/common.sh@7 -- # uname -s 00:07:26.079 00:41:38 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:26.079 00:41:38 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:26.079 00:41:38 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:26.079 00:41:38 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:26.079 00:41:38 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:26.079 00:41:38 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:26.079 00:41:38 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:26.079 00:41:38 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:26.079 00:41:38 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:26.079 00:41:38 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:26.079 00:41:38 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:15939434-fa82-47b6-ae7c-b62ba203cee8 
00:07:26.079 00:41:38 -- nvmf/common.sh@18 -- # NVME_HOSTID=15939434-fa82-47b6-ae7c-b62ba203cee8 00:07:26.079 00:41:38 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:26.079 00:41:38 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:26.079 00:41:38 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:07:26.079 00:41:38 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:07:26.079 00:41:38 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:26.079 00:41:38 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:26.079 00:41:38 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:26.079 00:41:38 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:26.079 00:41:38 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:26.079 00:41:38 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:26.079 00:41:38 -- paths/export.sh@5 -- # export PATH 00:07:26.079 00:41:38 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:26.079 00:41:38 -- nvmf/common.sh@46 -- # : 0 00:07:26.079 00:41:38 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:07:26.079 00:41:38 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:07:26.079 00:41:38 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:07:26.079 00:41:38 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:26.079 00:41:38 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:26.079 00:41:38 -- nvmf/common.sh@32 -- # 
'[' -n '' ']' 00:07:26.079 00:41:38 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:07:26.079 00:41:38 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:07:26.079 00:41:38 -- target/nvmf_example.sh@11 -- # NVMF_EXAMPLE=("$SPDK_EXAMPLE_DIR/nvmf") 00:07:26.079 00:41:38 -- target/nvmf_example.sh@13 -- # MALLOC_BDEV_SIZE=64 00:07:26.079 00:41:38 -- target/nvmf_example.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:07:26.079 00:41:38 -- target/nvmf_example.sh@24 -- # build_nvmf_example_args 00:07:26.079 00:41:38 -- target/nvmf_example.sh@17 -- # '[' 0 -eq 1 ']' 00:07:26.079 00:41:38 -- target/nvmf_example.sh@20 -- # NVMF_EXAMPLE+=(-i "$NVMF_APP_SHM_ID" -g 10000) 00:07:26.079 00:41:38 -- target/nvmf_example.sh@21 -- # NVMF_EXAMPLE+=("${NO_HUGE[@]}") 00:07:26.079 00:41:38 -- target/nvmf_example.sh@40 -- # timing_enter nvmf_example_test 00:07:26.079 00:41:38 -- common/autotest_common.sh@722 -- # xtrace_disable 00:07:26.079 00:41:38 -- common/autotest_common.sh@10 -- # set +x 00:07:26.079 00:41:38 -- target/nvmf_example.sh@41 -- # nvmftestinit 00:07:26.079 00:41:38 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:07:26.079 00:41:38 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:26.079 00:41:38 -- nvmf/common.sh@436 -- # prepare_net_devs 00:07:26.079 00:41:38 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:07:26.079 00:41:38 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:07:26.079 00:41:38 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:26.079 00:41:38 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:26.079 00:41:38 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:26.079 00:41:38 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:07:26.079 00:41:38 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:07:26.079 00:41:38 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:07:26.079 00:41:38 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:07:26.079 00:41:38 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:07:26.079 00:41:38 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:07:26.079 00:41:38 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:26.079 00:41:38 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:26.079 00:41:38 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:07:26.079 00:41:38 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:07:26.079 00:41:38 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:07:26.079 00:41:38 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:07:26.079 00:41:38 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:07:26.079 00:41:38 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:26.079 00:41:38 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:07:26.079 00:41:38 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:07:26.079 00:41:38 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:07:26.079 00:41:38 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:07:26.079 00:41:38 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:07:26.079 Cannot find device "nvmf_init_br" 00:07:26.079 00:41:38 -- nvmf/common.sh@153 -- # true 00:07:26.079 00:41:38 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:07:26.079 Cannot find device "nvmf_tgt_br" 00:07:26.079 00:41:38 -- nvmf/common.sh@154 -- # true 00:07:26.079 00:41:38 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:07:26.079 Cannot find device "nvmf_tgt_br2" 
00:07:26.079 00:41:38 -- nvmf/common.sh@155 -- # true 00:07:26.079 00:41:38 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:07:26.079 Cannot find device "nvmf_init_br" 00:07:26.079 00:41:38 -- nvmf/common.sh@156 -- # true 00:07:26.079 00:41:38 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:07:26.339 Cannot find device "nvmf_tgt_br" 00:07:26.339 00:41:38 -- nvmf/common.sh@157 -- # true 00:07:26.339 00:41:38 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:07:26.339 Cannot find device "nvmf_tgt_br2" 00:07:26.339 00:41:38 -- nvmf/common.sh@158 -- # true 00:07:26.339 00:41:38 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:07:26.339 Cannot find device "nvmf_br" 00:07:26.339 00:41:38 -- nvmf/common.sh@159 -- # true 00:07:26.339 00:41:38 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:07:26.339 Cannot find device "nvmf_init_if" 00:07:26.339 00:41:38 -- nvmf/common.sh@160 -- # true 00:07:26.339 00:41:38 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:07:26.339 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:07:26.339 00:41:38 -- nvmf/common.sh@161 -- # true 00:07:26.339 00:41:38 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:07:26.339 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:07:26.339 00:41:38 -- nvmf/common.sh@162 -- # true 00:07:26.339 00:41:38 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:07:26.339 00:41:38 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:07:26.339 00:41:38 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:07:26.339 00:41:38 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:07:26.339 00:41:38 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:07:26.339 00:41:38 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:07:26.339 00:41:38 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:07:26.339 00:41:38 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:07:26.339 00:41:38 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:07:26.339 00:41:38 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:07:26.339 00:41:38 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:07:26.339 00:41:38 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:07:26.339 00:41:38 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:07:26.339 00:41:38 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:07:26.339 00:41:38 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:07:26.339 00:41:38 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:07:26.339 00:41:38 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:07:26.339 00:41:38 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:07:26.339 00:41:38 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:07:26.598 00:41:38 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:07:26.598 00:41:38 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:07:26.598 00:41:38 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:07:26.598 00:41:38 -- 
nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:07:26.598 00:41:38 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:07:26.598 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:26.598 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.104 ms 00:07:26.598 00:07:26.598 --- 10.0.0.2 ping statistics --- 00:07:26.598 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:26.598 rtt min/avg/max/mdev = 0.104/0.104/0.104/0.000 ms 00:07:26.598 00:41:38 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:07:26.598 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:07:26.598 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.050 ms 00:07:26.598 00:07:26.598 --- 10.0.0.3 ping statistics --- 00:07:26.598 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:26.598 rtt min/avg/max/mdev = 0.050/0.050/0.050/0.000 ms 00:07:26.598 00:41:38 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:07:26.598 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:07:26.598 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.032 ms 00:07:26.598 00:07:26.598 --- 10.0.0.1 ping statistics --- 00:07:26.598 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:26.598 rtt min/avg/max/mdev = 0.032/0.032/0.032/0.000 ms 00:07:26.598 00:41:38 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:26.598 00:41:38 -- nvmf/common.sh@421 -- # return 0 00:07:26.598 00:41:38 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:07:26.598 00:41:38 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:26.598 00:41:38 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:07:26.598 00:41:38 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:07:26.598 00:41:38 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:26.598 00:41:38 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:07:26.598 00:41:38 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:07:26.598 00:41:38 -- target/nvmf_example.sh@42 -- # nvmfexamplestart '-m 0xF' 00:07:26.598 00:41:38 -- target/nvmf_example.sh@27 -- # timing_enter start_nvmf_example 00:07:26.598 00:41:38 -- common/autotest_common.sh@722 -- # xtrace_disable 00:07:26.598 00:41:38 -- common/autotest_common.sh@10 -- # set +x 00:07:26.598 00:41:38 -- target/nvmf_example.sh@29 -- # '[' tcp == tcp ']' 00:07:26.598 00:41:38 -- target/nvmf_example.sh@30 -- # NVMF_EXAMPLE=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_EXAMPLE[@]}") 00:07:26.598 00:41:38 -- target/nvmf_example.sh@34 -- # nvmfpid=72100 00:07:26.598 00:41:38 -- target/nvmf_example.sh@35 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:07:26.598 00:41:38 -- target/nvmf_example.sh@36 -- # waitforlisten 72100 00:07:26.598 00:41:38 -- common/autotest_common.sh@829 -- # '[' -z 72100 ']' 00:07:26.598 00:41:38 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:26.598 00:41:38 -- target/nvmf_example.sh@33 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/examples/nvmf -i 0 -g 10000 -m 0xF 00:07:26.598 00:41:38 -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:26.598 00:41:38 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:26.598 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:07:26.598 00:41:38 -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:26.598 00:41:38 -- common/autotest_common.sh@10 -- # set +x 00:07:27.537 00:41:40 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:27.537 00:41:40 -- common/autotest_common.sh@862 -- # return 0 00:07:27.537 00:41:40 -- target/nvmf_example.sh@37 -- # timing_exit start_nvmf_example 00:07:27.537 00:41:40 -- common/autotest_common.sh@728 -- # xtrace_disable 00:07:27.537 00:41:40 -- common/autotest_common.sh@10 -- # set +x 00:07:27.796 00:41:40 -- target/nvmf_example.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:07:27.796 00:41:40 -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:27.796 00:41:40 -- common/autotest_common.sh@10 -- # set +x 00:07:27.796 00:41:40 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:27.796 00:41:40 -- target/nvmf_example.sh@47 -- # rpc_cmd bdev_malloc_create 64 512 00:07:27.796 00:41:40 -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:27.796 00:41:40 -- common/autotest_common.sh@10 -- # set +x 00:07:27.796 00:41:40 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:27.796 00:41:40 -- target/nvmf_example.sh@47 -- # malloc_bdevs='Malloc0 ' 00:07:27.796 00:41:40 -- target/nvmf_example.sh@49 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:07:27.796 00:41:40 -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:27.796 00:41:40 -- common/autotest_common.sh@10 -- # set +x 00:07:27.796 00:41:40 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:27.796 00:41:40 -- target/nvmf_example.sh@52 -- # for malloc_bdev in $malloc_bdevs 00:07:27.796 00:41:40 -- target/nvmf_example.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:07:27.796 00:41:40 -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:27.796 00:41:40 -- common/autotest_common.sh@10 -- # set +x 00:07:27.796 00:41:40 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:27.796 00:41:40 -- target/nvmf_example.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:27.796 00:41:40 -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:27.796 00:41:40 -- common/autotest_common.sh@10 -- # set +x 00:07:27.796 00:41:40 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:27.796 00:41:40 -- target/nvmf_example.sh@59 -- # perf=/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf 00:07:27.796 00:41:40 -- target/nvmf_example.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:07:40.007 Initializing NVMe Controllers 00:07:40.007 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:07:40.007 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:07:40.007 Initialization complete. Launching workers. 
00:07:40.007 ======================================================== 00:07:40.007 Latency(us) 00:07:40.007 Device Information : IOPS MiB/s Average min max 00:07:40.007 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 16821.19 65.71 3804.58 631.33 22134.42 00:07:40.007 ======================================================== 00:07:40.007 Total : 16821.19 65.71 3804.58 631.33 22134.42 00:07:40.007 00:07:40.007 00:41:50 -- target/nvmf_example.sh@65 -- # trap - SIGINT SIGTERM EXIT 00:07:40.007 00:41:50 -- target/nvmf_example.sh@66 -- # nvmftestfini 00:07:40.007 00:41:50 -- nvmf/common.sh@476 -- # nvmfcleanup 00:07:40.007 00:41:50 -- nvmf/common.sh@116 -- # sync 00:07:40.007 00:41:50 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:07:40.007 00:41:50 -- nvmf/common.sh@119 -- # set +e 00:07:40.007 00:41:50 -- nvmf/common.sh@120 -- # for i in {1..20} 00:07:40.007 00:41:50 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:07:40.007 rmmod nvme_tcp 00:07:40.007 rmmod nvme_fabrics 00:07:40.007 rmmod nvme_keyring 00:07:40.007 00:41:50 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:07:40.007 00:41:50 -- nvmf/common.sh@123 -- # set -e 00:07:40.007 00:41:50 -- nvmf/common.sh@124 -- # return 0 00:07:40.007 00:41:50 -- nvmf/common.sh@477 -- # '[' -n 72100 ']' 00:07:40.007 00:41:50 -- nvmf/common.sh@478 -- # killprocess 72100 00:07:40.007 00:41:50 -- common/autotest_common.sh@936 -- # '[' -z 72100 ']' 00:07:40.007 00:41:50 -- common/autotest_common.sh@940 -- # kill -0 72100 00:07:40.007 00:41:50 -- common/autotest_common.sh@941 -- # uname 00:07:40.007 00:41:50 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:07:40.007 00:41:50 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 72100 00:07:40.007 killing process with pid 72100 00:07:40.007 00:41:50 -- common/autotest_common.sh@942 -- # process_name=nvmf 00:07:40.007 00:41:50 -- common/autotest_common.sh@946 -- # '[' nvmf = sudo ']' 00:07:40.007 00:41:50 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 72100' 00:07:40.007 00:41:50 -- common/autotest_common.sh@955 -- # kill 72100 00:07:40.007 00:41:50 -- common/autotest_common.sh@960 -- # wait 72100 00:07:40.007 nvmf threads initialize successfully 00:07:40.007 bdev subsystem init successfully 00:07:40.007 created a nvmf target service 00:07:40.007 create targets's poll groups done 00:07:40.007 all subsystems of target started 00:07:40.007 nvmf target is running 00:07:40.007 all subsystems of target stopped 00:07:40.007 destroy targets's poll groups done 00:07:40.007 destroyed the nvmf target service 00:07:40.007 bdev subsystem finish successfully 00:07:40.007 nvmf threads destroy successfully 00:07:40.007 00:41:50 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:07:40.007 00:41:50 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:07:40.007 00:41:50 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:07:40.007 00:41:50 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:07:40.007 00:41:50 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:07:40.007 00:41:50 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:40.007 00:41:50 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:40.007 00:41:50 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:40.007 00:41:50 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:07:40.007 00:41:50 -- target/nvmf_example.sh@67 -- # timing_exit nvmf_example_test 00:07:40.007 00:41:50 -- common/autotest_common.sh@728 -- # 
xtrace_disable 00:07:40.007 00:41:50 -- common/autotest_common.sh@10 -- # set +x 00:07:40.007 00:07:40.007 real 0m12.450s 00:07:40.007 user 0m44.621s 00:07:40.007 sys 0m2.050s 00:07:40.007 00:41:50 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:40.007 ************************************ 00:07:40.007 END TEST nvmf_example 00:07:40.007 ************************************ 00:07:40.007 00:41:50 -- common/autotest_common.sh@10 -- # set +x 00:07:40.007 00:41:50 -- nvmf/nvmf.sh@24 -- # run_test nvmf_filesystem /home/vagrant/spdk_repo/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:07:40.007 00:41:50 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:07:40.007 00:41:50 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:40.007 00:41:50 -- common/autotest_common.sh@10 -- # set +x 00:07:40.007 ************************************ 00:07:40.007 START TEST nvmf_filesystem 00:07:40.007 ************************************ 00:07:40.007 00:41:50 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:07:40.007 * Looking for test storage... 00:07:40.007 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:07:40.007 00:41:50 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:07:40.007 00:41:50 -- common/autotest_common.sh@1690 -- # lcov --version 00:07:40.007 00:41:50 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:07:40.007 00:41:50 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:07:40.007 00:41:50 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:07:40.007 00:41:50 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:07:40.007 00:41:50 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:07:40.007 00:41:50 -- scripts/common.sh@335 -- # IFS=.-: 00:07:40.007 00:41:50 -- scripts/common.sh@335 -- # read -ra ver1 00:07:40.007 00:41:50 -- scripts/common.sh@336 -- # IFS=.-: 00:07:40.007 00:41:50 -- scripts/common.sh@336 -- # read -ra ver2 00:07:40.007 00:41:50 -- scripts/common.sh@337 -- # local 'op=<' 00:07:40.007 00:41:50 -- scripts/common.sh@339 -- # ver1_l=2 00:07:40.007 00:41:50 -- scripts/common.sh@340 -- # ver2_l=1 00:07:40.007 00:41:50 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:07:40.007 00:41:50 -- scripts/common.sh@343 -- # case "$op" in 00:07:40.007 00:41:50 -- scripts/common.sh@344 -- # : 1 00:07:40.007 00:41:50 -- scripts/common.sh@363 -- # (( v = 0 )) 00:07:40.007 00:41:50 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:40.007 00:41:50 -- scripts/common.sh@364 -- # decimal 1 00:07:40.007 00:41:50 -- scripts/common.sh@352 -- # local d=1 00:07:40.007 00:41:51 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:40.007 00:41:51 -- scripts/common.sh@354 -- # echo 1 00:07:40.007 00:41:51 -- scripts/common.sh@364 -- # ver1[v]=1 00:07:40.007 00:41:51 -- scripts/common.sh@365 -- # decimal 2 00:07:40.007 00:41:51 -- scripts/common.sh@352 -- # local d=2 00:07:40.007 00:41:51 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:40.007 00:41:51 -- scripts/common.sh@354 -- # echo 2 00:07:40.007 00:41:51 -- scripts/common.sh@365 -- # ver2[v]=2 00:07:40.007 00:41:51 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:07:40.008 00:41:51 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:07:40.008 00:41:51 -- scripts/common.sh@367 -- # return 0 00:07:40.008 00:41:51 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:40.008 00:41:51 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:07:40.008 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:40.008 --rc genhtml_branch_coverage=1 00:07:40.008 --rc genhtml_function_coverage=1 00:07:40.008 --rc genhtml_legend=1 00:07:40.008 --rc geninfo_all_blocks=1 00:07:40.008 --rc geninfo_unexecuted_blocks=1 00:07:40.008 00:07:40.008 ' 00:07:40.008 00:41:51 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:07:40.008 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:40.008 --rc genhtml_branch_coverage=1 00:07:40.008 --rc genhtml_function_coverage=1 00:07:40.008 --rc genhtml_legend=1 00:07:40.008 --rc geninfo_all_blocks=1 00:07:40.008 --rc geninfo_unexecuted_blocks=1 00:07:40.008 00:07:40.008 ' 00:07:40.008 00:41:51 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:07:40.008 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:40.008 --rc genhtml_branch_coverage=1 00:07:40.008 --rc genhtml_function_coverage=1 00:07:40.008 --rc genhtml_legend=1 00:07:40.008 --rc geninfo_all_blocks=1 00:07:40.008 --rc geninfo_unexecuted_blocks=1 00:07:40.008 00:07:40.008 ' 00:07:40.008 00:41:51 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:07:40.008 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:40.008 --rc genhtml_branch_coverage=1 00:07:40.008 --rc genhtml_function_coverage=1 00:07:40.008 --rc genhtml_legend=1 00:07:40.008 --rc geninfo_all_blocks=1 00:07:40.008 --rc geninfo_unexecuted_blocks=1 00:07:40.008 00:07:40.008 ' 00:07:40.008 00:41:51 -- target/filesystem.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh 00:07:40.008 00:41:51 -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:07:40.008 00:41:51 -- common/autotest_common.sh@34 -- # set -e 00:07:40.008 00:41:51 -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:07:40.008 00:41:51 -- common/autotest_common.sh@36 -- # shopt -s extglob 00:07:40.008 00:41:51 -- common/autotest_common.sh@38 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/common/build_config.sh ]] 00:07:40.008 00:41:51 -- common/autotest_common.sh@39 -- # source /home/vagrant/spdk_repo/spdk/test/common/build_config.sh 00:07:40.008 00:41:51 -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:07:40.008 00:41:51 -- common/build_config.sh@2 -- # CONFIG_ASAN=n 00:07:40.008 00:41:51 -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:07:40.008 00:41:51 -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:07:40.008 00:41:51 -- common/build_config.sh@5 -- # 
CONFIG_USDT=y 00:07:40.008 00:41:51 -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:07:40.008 00:41:51 -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:07:40.008 00:41:51 -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:07:40.008 00:41:51 -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:07:40.008 00:41:51 -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:07:40.008 00:41:51 -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:07:40.008 00:41:51 -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:07:40.008 00:41:51 -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:07:40.008 00:41:51 -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:07:40.008 00:41:51 -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:07:40.008 00:41:51 -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:07:40.008 00:41:51 -- common/build_config.sh@17 -- # CONFIG_PGO_CAPTURE=n 00:07:40.008 00:41:51 -- common/build_config.sh@18 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:07:40.008 00:41:51 -- common/build_config.sh@19 -- # CONFIG_ENV=/home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:07:40.008 00:41:51 -- common/build_config.sh@20 -- # CONFIG_LTO=n 00:07:40.008 00:41:51 -- common/build_config.sh@21 -- # CONFIG_ISCSI_INITIATOR=y 00:07:40.008 00:41:51 -- common/build_config.sh@22 -- # CONFIG_CET=n 00:07:40.008 00:41:51 -- common/build_config.sh@23 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:07:40.008 00:41:51 -- common/build_config.sh@24 -- # CONFIG_OCF_PATH= 00:07:40.008 00:41:51 -- common/build_config.sh@25 -- # CONFIG_RDMA_SET_TOS=y 00:07:40.008 00:41:51 -- common/build_config.sh@26 -- # CONFIG_HAVE_ARC4RANDOM=y 00:07:40.008 00:41:51 -- common/build_config.sh@27 -- # CONFIG_HAVE_LIBARCHIVE=n 00:07:40.008 00:41:51 -- common/build_config.sh@28 -- # CONFIG_UBLK=y 00:07:40.008 00:41:51 -- common/build_config.sh@29 -- # CONFIG_ISAL_CRYPTO=y 00:07:40.008 00:41:51 -- common/build_config.sh@30 -- # CONFIG_OPENSSL_PATH= 00:07:40.008 00:41:51 -- common/build_config.sh@31 -- # CONFIG_OCF=n 00:07:40.008 00:41:51 -- common/build_config.sh@32 -- # CONFIG_FUSE=n 00:07:40.008 00:41:51 -- common/build_config.sh@33 -- # CONFIG_VTUNE_DIR= 00:07:40.008 00:41:51 -- common/build_config.sh@34 -- # CONFIG_FUZZER_LIB= 00:07:40.008 00:41:51 -- common/build_config.sh@35 -- # CONFIG_FUZZER=n 00:07:40.008 00:41:51 -- common/build_config.sh@36 -- # CONFIG_DPDK_DIR=/home/vagrant/spdk_repo/dpdk/build 00:07:40.008 00:41:51 -- common/build_config.sh@37 -- # CONFIG_CRYPTO=n 00:07:40.008 00:41:51 -- common/build_config.sh@38 -- # CONFIG_PGO_USE=n 00:07:40.008 00:41:51 -- common/build_config.sh@39 -- # CONFIG_VHOST=y 00:07:40.008 00:41:51 -- common/build_config.sh@40 -- # CONFIG_DAOS=n 00:07:40.008 00:41:51 -- common/build_config.sh@41 -- # CONFIG_DPDK_INC_DIR=//home/vagrant/spdk_repo/dpdk/build/include 00:07:40.008 00:41:51 -- common/build_config.sh@42 -- # CONFIG_DAOS_DIR= 00:07:40.008 00:41:51 -- common/build_config.sh@43 -- # CONFIG_UNIT_TESTS=n 00:07:40.008 00:41:51 -- common/build_config.sh@44 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:07:40.008 00:41:51 -- common/build_config.sh@45 -- # CONFIG_VIRTIO=y 00:07:40.008 00:41:51 -- common/build_config.sh@46 -- # CONFIG_COVERAGE=y 00:07:40.008 00:41:51 -- common/build_config.sh@47 -- # CONFIG_RDMA=y 00:07:40.008 00:41:51 -- common/build_config.sh@48 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:07:40.008 00:41:51 -- common/build_config.sh@49 -- # CONFIG_URING_PATH= 00:07:40.008 00:41:51 -- common/build_config.sh@50 -- # CONFIG_XNVME=n 00:07:40.008 00:41:51 -- common/build_config.sh@51 
-- # CONFIG_VFIO_USER=n 00:07:40.008 00:41:51 -- common/build_config.sh@52 -- # CONFIG_ARCH=native 00:07:40.008 00:41:51 -- common/build_config.sh@53 -- # CONFIG_URING_ZNS=n 00:07:40.008 00:41:51 -- common/build_config.sh@54 -- # CONFIG_WERROR=y 00:07:40.008 00:41:51 -- common/build_config.sh@55 -- # CONFIG_HAVE_LIBBSD=n 00:07:40.008 00:41:51 -- common/build_config.sh@56 -- # CONFIG_UBSAN=y 00:07:40.008 00:41:51 -- common/build_config.sh@57 -- # CONFIG_IPSEC_MB_DIR= 00:07:40.008 00:41:51 -- common/build_config.sh@58 -- # CONFIG_GOLANG=y 00:07:40.008 00:41:51 -- common/build_config.sh@59 -- # CONFIG_ISAL=y 00:07:40.008 00:41:51 -- common/build_config.sh@60 -- # CONFIG_IDXD_KERNEL=y 00:07:40.008 00:41:51 -- common/build_config.sh@61 -- # CONFIG_DPDK_LIB_DIR=/home/vagrant/spdk_repo/dpdk/build/lib 00:07:40.008 00:41:51 -- common/build_config.sh@62 -- # CONFIG_RDMA_PROV=verbs 00:07:40.008 00:41:51 -- common/build_config.sh@63 -- # CONFIG_APPS=y 00:07:40.008 00:41:51 -- common/build_config.sh@64 -- # CONFIG_SHARED=y 00:07:40.008 00:41:51 -- common/build_config.sh@65 -- # CONFIG_FC_PATH= 00:07:40.008 00:41:51 -- common/build_config.sh@66 -- # CONFIG_DPDK_PKG_CONFIG=n 00:07:40.008 00:41:51 -- common/build_config.sh@67 -- # CONFIG_FC=n 00:07:40.008 00:41:51 -- common/build_config.sh@68 -- # CONFIG_AVAHI=y 00:07:40.008 00:41:51 -- common/build_config.sh@69 -- # CONFIG_FIO_PLUGIN=y 00:07:40.008 00:41:51 -- common/build_config.sh@70 -- # CONFIG_RAID5F=n 00:07:40.008 00:41:51 -- common/build_config.sh@71 -- # CONFIG_EXAMPLES=y 00:07:40.008 00:41:51 -- common/build_config.sh@72 -- # CONFIG_TESTS=y 00:07:40.008 00:41:51 -- common/build_config.sh@73 -- # CONFIG_CRYPTO_MLX5=n 00:07:40.008 00:41:51 -- common/build_config.sh@74 -- # CONFIG_MAX_LCORES= 00:07:40.009 00:41:51 -- common/build_config.sh@75 -- # CONFIG_IPSEC_MB=n 00:07:40.009 00:41:51 -- common/build_config.sh@76 -- # CONFIG_DEBUG=y 00:07:40.009 00:41:51 -- common/build_config.sh@77 -- # CONFIG_DPDK_COMPRESSDEV=n 00:07:40.009 00:41:51 -- common/build_config.sh@78 -- # CONFIG_CROSS_PREFIX= 00:07:40.009 00:41:51 -- common/build_config.sh@79 -- # CONFIG_URING=n 00:07:40.009 00:41:51 -- common/autotest_common.sh@48 -- # source /home/vagrant/spdk_repo/spdk/test/common/applications.sh 00:07:40.009 00:41:51 -- common/applications.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/common/applications.sh 00:07:40.009 00:41:51 -- common/applications.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/common 00:07:40.009 00:41:51 -- common/applications.sh@8 -- # _root=/home/vagrant/spdk_repo/spdk/test/common 00:07:40.009 00:41:51 -- common/applications.sh@9 -- # _root=/home/vagrant/spdk_repo/spdk 00:07:40.009 00:41:51 -- common/applications.sh@10 -- # _app_dir=/home/vagrant/spdk_repo/spdk/build/bin 00:07:40.009 00:41:51 -- common/applications.sh@11 -- # _test_app_dir=/home/vagrant/spdk_repo/spdk/test/app 00:07:40.009 00:41:51 -- common/applications.sh@12 -- # _examples_dir=/home/vagrant/spdk_repo/spdk/build/examples 00:07:40.009 00:41:51 -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:07:40.009 00:41:51 -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:07:40.009 00:41:51 -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:07:40.009 00:41:51 -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:07:40.009 00:41:51 -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:07:40.009 00:41:51 -- common/applications.sh@19 -- # 
SPDK_APP=("$_app_dir/spdk_tgt") 00:07:40.009 00:41:51 -- common/applications.sh@22 -- # [[ -e /home/vagrant/spdk_repo/spdk/include/spdk/config.h ]] 00:07:40.009 00:41:51 -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:07:40.009 #define SPDK_CONFIG_H 00:07:40.009 #define SPDK_CONFIG_APPS 1 00:07:40.009 #define SPDK_CONFIG_ARCH native 00:07:40.009 #undef SPDK_CONFIG_ASAN 00:07:40.009 #define SPDK_CONFIG_AVAHI 1 00:07:40.009 #undef SPDK_CONFIG_CET 00:07:40.009 #define SPDK_CONFIG_COVERAGE 1 00:07:40.009 #define SPDK_CONFIG_CROSS_PREFIX 00:07:40.009 #undef SPDK_CONFIG_CRYPTO 00:07:40.009 #undef SPDK_CONFIG_CRYPTO_MLX5 00:07:40.009 #undef SPDK_CONFIG_CUSTOMOCF 00:07:40.009 #undef SPDK_CONFIG_DAOS 00:07:40.009 #define SPDK_CONFIG_DAOS_DIR 00:07:40.009 #define SPDK_CONFIG_DEBUG 1 00:07:40.009 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:07:40.009 #define SPDK_CONFIG_DPDK_DIR /home/vagrant/spdk_repo/dpdk/build 00:07:40.009 #define SPDK_CONFIG_DPDK_INC_DIR //home/vagrant/spdk_repo/dpdk/build/include 00:07:40.009 #define SPDK_CONFIG_DPDK_LIB_DIR /home/vagrant/spdk_repo/dpdk/build/lib 00:07:40.009 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:07:40.009 #define SPDK_CONFIG_ENV /home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:07:40.009 #define SPDK_CONFIG_EXAMPLES 1 00:07:40.009 #undef SPDK_CONFIG_FC 00:07:40.009 #define SPDK_CONFIG_FC_PATH 00:07:40.009 #define SPDK_CONFIG_FIO_PLUGIN 1 00:07:40.009 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:07:40.009 #undef SPDK_CONFIG_FUSE 00:07:40.009 #undef SPDK_CONFIG_FUZZER 00:07:40.009 #define SPDK_CONFIG_FUZZER_LIB 00:07:40.009 #define SPDK_CONFIG_GOLANG 1 00:07:40.009 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:07:40.009 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:07:40.009 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:07:40.009 #undef SPDK_CONFIG_HAVE_LIBBSD 00:07:40.009 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:07:40.009 #define SPDK_CONFIG_IDXD 1 00:07:40.009 #define SPDK_CONFIG_IDXD_KERNEL 1 00:07:40.009 #undef SPDK_CONFIG_IPSEC_MB 00:07:40.009 #define SPDK_CONFIG_IPSEC_MB_DIR 00:07:40.009 #define SPDK_CONFIG_ISAL 1 00:07:40.009 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:07:40.009 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:07:40.009 #define SPDK_CONFIG_LIBDIR 00:07:40.009 #undef SPDK_CONFIG_LTO 00:07:40.009 #define SPDK_CONFIG_MAX_LCORES 00:07:40.009 #define SPDK_CONFIG_NVME_CUSE 1 00:07:40.009 #undef SPDK_CONFIG_OCF 00:07:40.009 #define SPDK_CONFIG_OCF_PATH 00:07:40.009 #define SPDK_CONFIG_OPENSSL_PATH 00:07:40.009 #undef SPDK_CONFIG_PGO_CAPTURE 00:07:40.009 #undef SPDK_CONFIG_PGO_USE 00:07:40.009 #define SPDK_CONFIG_PREFIX /usr/local 00:07:40.009 #undef SPDK_CONFIG_RAID5F 00:07:40.009 #undef SPDK_CONFIG_RBD 00:07:40.009 #define SPDK_CONFIG_RDMA 1 00:07:40.009 #define SPDK_CONFIG_RDMA_PROV verbs 00:07:40.009 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:07:40.009 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:07:40.009 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:07:40.009 #define SPDK_CONFIG_SHARED 1 00:07:40.009 #undef SPDK_CONFIG_SMA 00:07:40.009 #define SPDK_CONFIG_TESTS 1 00:07:40.009 #undef SPDK_CONFIG_TSAN 00:07:40.009 #define SPDK_CONFIG_UBLK 1 00:07:40.009 #define SPDK_CONFIG_UBSAN 1 00:07:40.009 #undef SPDK_CONFIG_UNIT_TESTS 00:07:40.009 #undef SPDK_CONFIG_URING 00:07:40.009 #define SPDK_CONFIG_URING_PATH 00:07:40.009 #undef SPDK_CONFIG_URING_ZNS 00:07:40.009 #define SPDK_CONFIG_USDT 1 00:07:40.009 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:07:40.009 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:07:40.009 #undef SPDK_CONFIG_VFIO_USER 00:07:40.009 #define 
SPDK_CONFIG_VFIO_USER_DIR 00:07:40.009 #define SPDK_CONFIG_VHOST 1 00:07:40.009 #define SPDK_CONFIG_VIRTIO 1 00:07:40.009 #undef SPDK_CONFIG_VTUNE 00:07:40.009 #define SPDK_CONFIG_VTUNE_DIR 00:07:40.009 #define SPDK_CONFIG_WERROR 1 00:07:40.009 #define SPDK_CONFIG_WPDK_DIR 00:07:40.009 #undef SPDK_CONFIG_XNVME 00:07:40.009 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:07:40.009 00:41:51 -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:07:40.009 00:41:51 -- common/autotest_common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:07:40.009 00:41:51 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:40.009 00:41:51 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:40.009 00:41:51 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:40.009 00:41:51 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:40.009 00:41:51 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:40.009 00:41:51 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:40.009 00:41:51 -- paths/export.sh@5 -- # export PATH 00:07:40.009 00:41:51 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:40.009 00:41:51 -- common/autotest_common.sh@50 -- # source /home/vagrant/spdk_repo/spdk/scripts/perf/pm/common 00:07:40.009 00:41:51 -- pm/common@6 -- # dirname /home/vagrant/spdk_repo/spdk/scripts/perf/pm/common 00:07:40.009 00:41:51 -- pm/common@6 -- # readlink -f 
/home/vagrant/spdk_repo/spdk/scripts/perf/pm 00:07:40.010 00:41:51 -- pm/common@6 -- # _pmdir=/home/vagrant/spdk_repo/spdk/scripts/perf/pm 00:07:40.010 00:41:51 -- pm/common@7 -- # readlink -f /home/vagrant/spdk_repo/spdk/scripts/perf/pm/../../../ 00:07:40.010 00:41:51 -- pm/common@7 -- # _pmrootdir=/home/vagrant/spdk_repo/spdk 00:07:40.010 00:41:51 -- pm/common@16 -- # TEST_TAG=N/A 00:07:40.010 00:41:51 -- pm/common@17 -- # TEST_TAG_FILE=/home/vagrant/spdk_repo/spdk/.run_test_name 00:07:40.010 00:41:51 -- common/autotest_common.sh@52 -- # : 1 00:07:40.010 00:41:51 -- common/autotest_common.sh@53 -- # export RUN_NIGHTLY 00:07:40.010 00:41:51 -- common/autotest_common.sh@56 -- # : 0 00:07:40.010 00:41:51 -- common/autotest_common.sh@57 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:07:40.010 00:41:51 -- common/autotest_common.sh@58 -- # : 0 00:07:40.010 00:41:51 -- common/autotest_common.sh@59 -- # export SPDK_RUN_VALGRIND 00:07:40.010 00:41:51 -- common/autotest_common.sh@60 -- # : 1 00:07:40.010 00:41:51 -- common/autotest_common.sh@61 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:07:40.010 00:41:51 -- common/autotest_common.sh@62 -- # : 0 00:07:40.010 00:41:51 -- common/autotest_common.sh@63 -- # export SPDK_TEST_UNITTEST 00:07:40.010 00:41:51 -- common/autotest_common.sh@64 -- # : 00:07:40.010 00:41:51 -- common/autotest_common.sh@65 -- # export SPDK_TEST_AUTOBUILD 00:07:40.010 00:41:51 -- common/autotest_common.sh@66 -- # : 0 00:07:40.010 00:41:51 -- common/autotest_common.sh@67 -- # export SPDK_TEST_RELEASE_BUILD 00:07:40.010 00:41:51 -- common/autotest_common.sh@68 -- # : 0 00:07:40.010 00:41:51 -- common/autotest_common.sh@69 -- # export SPDK_TEST_ISAL 00:07:40.010 00:41:51 -- common/autotest_common.sh@70 -- # : 0 00:07:40.010 00:41:51 -- common/autotest_common.sh@71 -- # export SPDK_TEST_ISCSI 00:07:40.010 00:41:51 -- common/autotest_common.sh@72 -- # : 0 00:07:40.010 00:41:51 -- common/autotest_common.sh@73 -- # export SPDK_TEST_ISCSI_INITIATOR 00:07:40.010 00:41:51 -- common/autotest_common.sh@74 -- # : 0 00:07:40.010 00:41:51 -- common/autotest_common.sh@75 -- # export SPDK_TEST_NVME 00:07:40.010 00:41:51 -- common/autotest_common.sh@76 -- # : 0 00:07:40.010 00:41:51 -- common/autotest_common.sh@77 -- # export SPDK_TEST_NVME_PMR 00:07:40.010 00:41:51 -- common/autotest_common.sh@78 -- # : 0 00:07:40.010 00:41:51 -- common/autotest_common.sh@79 -- # export SPDK_TEST_NVME_BP 00:07:40.010 00:41:51 -- common/autotest_common.sh@80 -- # : 0 00:07:40.010 00:41:51 -- common/autotest_common.sh@81 -- # export SPDK_TEST_NVME_CLI 00:07:40.010 00:41:51 -- common/autotest_common.sh@82 -- # : 0 00:07:40.010 00:41:51 -- common/autotest_common.sh@83 -- # export SPDK_TEST_NVME_CUSE 00:07:40.010 00:41:51 -- common/autotest_common.sh@84 -- # : 0 00:07:40.010 00:41:51 -- common/autotest_common.sh@85 -- # export SPDK_TEST_NVME_FDP 00:07:40.010 00:41:51 -- common/autotest_common.sh@86 -- # : 1 00:07:40.010 00:41:51 -- common/autotest_common.sh@87 -- # export SPDK_TEST_NVMF 00:07:40.010 00:41:51 -- common/autotest_common.sh@88 -- # : 0 00:07:40.010 00:41:51 -- common/autotest_common.sh@89 -- # export SPDK_TEST_VFIOUSER 00:07:40.010 00:41:51 -- common/autotest_common.sh@90 -- # : 0 00:07:40.010 00:41:51 -- common/autotest_common.sh@91 -- # export SPDK_TEST_VFIOUSER_QEMU 00:07:40.010 00:41:51 -- common/autotest_common.sh@92 -- # : 0 00:07:40.010 00:41:51 -- common/autotest_common.sh@93 -- # export SPDK_TEST_FUZZER 00:07:40.010 00:41:51 -- common/autotest_common.sh@94 -- # : 0 00:07:40.010 00:41:51 -- 
common/autotest_common.sh@95 -- # export SPDK_TEST_FUZZER_SHORT 00:07:40.010 00:41:51 -- common/autotest_common.sh@96 -- # : tcp 00:07:40.010 00:41:51 -- common/autotest_common.sh@97 -- # export SPDK_TEST_NVMF_TRANSPORT 00:07:40.010 00:41:51 -- common/autotest_common.sh@98 -- # : 0 00:07:40.010 00:41:51 -- common/autotest_common.sh@99 -- # export SPDK_TEST_RBD 00:07:40.010 00:41:51 -- common/autotest_common.sh@100 -- # : 0 00:07:40.010 00:41:51 -- common/autotest_common.sh@101 -- # export SPDK_TEST_VHOST 00:07:40.010 00:41:51 -- common/autotest_common.sh@102 -- # : 0 00:07:40.010 00:41:51 -- common/autotest_common.sh@103 -- # export SPDK_TEST_BLOCKDEV 00:07:40.010 00:41:51 -- common/autotest_common.sh@104 -- # : 0 00:07:40.010 00:41:51 -- common/autotest_common.sh@105 -- # export SPDK_TEST_IOAT 00:07:40.010 00:41:51 -- common/autotest_common.sh@106 -- # : 0 00:07:40.010 00:41:51 -- common/autotest_common.sh@107 -- # export SPDK_TEST_BLOBFS 00:07:40.010 00:41:51 -- common/autotest_common.sh@108 -- # : 0 00:07:40.010 00:41:51 -- common/autotest_common.sh@109 -- # export SPDK_TEST_VHOST_INIT 00:07:40.010 00:41:51 -- common/autotest_common.sh@110 -- # : 0 00:07:40.010 00:41:51 -- common/autotest_common.sh@111 -- # export SPDK_TEST_LVOL 00:07:40.010 00:41:51 -- common/autotest_common.sh@112 -- # : 0 00:07:40.010 00:41:51 -- common/autotest_common.sh@113 -- # export SPDK_TEST_VBDEV_COMPRESS 00:07:40.010 00:41:51 -- common/autotest_common.sh@114 -- # : 0 00:07:40.010 00:41:51 -- common/autotest_common.sh@115 -- # export SPDK_RUN_ASAN 00:07:40.010 00:41:51 -- common/autotest_common.sh@116 -- # : 1 00:07:40.010 00:41:51 -- common/autotest_common.sh@117 -- # export SPDK_RUN_UBSAN 00:07:40.010 00:41:51 -- common/autotest_common.sh@118 -- # : /home/vagrant/spdk_repo/dpdk/build 00:07:40.010 00:41:51 -- common/autotest_common.sh@119 -- # export SPDK_RUN_EXTERNAL_DPDK 00:07:40.010 00:41:51 -- common/autotest_common.sh@120 -- # : 0 00:07:40.010 00:41:51 -- common/autotest_common.sh@121 -- # export SPDK_RUN_NON_ROOT 00:07:40.010 00:41:51 -- common/autotest_common.sh@122 -- # : 0 00:07:40.010 00:41:51 -- common/autotest_common.sh@123 -- # export SPDK_TEST_CRYPTO 00:07:40.010 00:41:51 -- common/autotest_common.sh@124 -- # : 0 00:07:40.010 00:41:51 -- common/autotest_common.sh@125 -- # export SPDK_TEST_FTL 00:07:40.010 00:41:51 -- common/autotest_common.sh@126 -- # : 0 00:07:40.010 00:41:51 -- common/autotest_common.sh@127 -- # export SPDK_TEST_OCF 00:07:40.010 00:41:51 -- common/autotest_common.sh@128 -- # : 0 00:07:40.010 00:41:51 -- common/autotest_common.sh@129 -- # export SPDK_TEST_VMD 00:07:40.010 00:41:51 -- common/autotest_common.sh@130 -- # : 0 00:07:40.010 00:41:51 -- common/autotest_common.sh@131 -- # export SPDK_TEST_OPAL 00:07:40.010 00:41:51 -- common/autotest_common.sh@132 -- # : v23.11 00:07:40.010 00:41:51 -- common/autotest_common.sh@133 -- # export SPDK_TEST_NATIVE_DPDK 00:07:40.010 00:41:51 -- common/autotest_common.sh@134 -- # : true 00:07:40.010 00:41:51 -- common/autotest_common.sh@135 -- # export SPDK_AUTOTEST_X 00:07:40.010 00:41:51 -- common/autotest_common.sh@136 -- # : 0 00:07:40.010 00:41:51 -- common/autotest_common.sh@137 -- # export SPDK_TEST_RAID5 00:07:40.010 00:41:51 -- common/autotest_common.sh@138 -- # : 0 00:07:40.010 00:41:51 -- common/autotest_common.sh@139 -- # export SPDK_TEST_URING 00:07:40.010 00:41:51 -- common/autotest_common.sh@140 -- # : 1 00:07:40.010 00:41:51 -- common/autotest_common.sh@141 -- # export SPDK_TEST_USDT 00:07:40.010 00:41:51 -- 
common/autotest_common.sh@142 -- # : 0 00:07:40.010 00:41:51 -- common/autotest_common.sh@143 -- # export SPDK_TEST_USE_IGB_UIO 00:07:40.010 00:41:51 -- common/autotest_common.sh@144 -- # : 0 00:07:40.010 00:41:51 -- common/autotest_common.sh@145 -- # export SPDK_TEST_SCHEDULER 00:07:40.010 00:41:51 -- common/autotest_common.sh@146 -- # : 0 00:07:40.010 00:41:51 -- common/autotest_common.sh@147 -- # export SPDK_TEST_SCANBUILD 00:07:40.010 00:41:51 -- common/autotest_common.sh@148 -- # : 00:07:40.010 00:41:51 -- common/autotest_common.sh@149 -- # export SPDK_TEST_NVMF_NICS 00:07:40.010 00:41:51 -- common/autotest_common.sh@150 -- # : 0 00:07:40.010 00:41:51 -- common/autotest_common.sh@151 -- # export SPDK_TEST_SMA 00:07:40.010 00:41:51 -- common/autotest_common.sh@152 -- # : 0 00:07:40.010 00:41:51 -- common/autotest_common.sh@153 -- # export SPDK_TEST_DAOS 00:07:40.010 00:41:51 -- common/autotest_common.sh@154 -- # : 0 00:07:40.010 00:41:51 -- common/autotest_common.sh@155 -- # export SPDK_TEST_XNVME 00:07:40.010 00:41:51 -- common/autotest_common.sh@156 -- # : 0 00:07:40.010 00:41:51 -- common/autotest_common.sh@157 -- # export SPDK_TEST_ACCEL_DSA 00:07:40.010 00:41:51 -- common/autotest_common.sh@158 -- # : 0 00:07:40.010 00:41:51 -- common/autotest_common.sh@159 -- # export SPDK_TEST_ACCEL_IAA 00:07:40.010 00:41:51 -- common/autotest_common.sh@160 -- # : 0 00:07:40.010 00:41:51 -- common/autotest_common.sh@161 -- # export SPDK_TEST_ACCEL_IOAT 00:07:40.010 00:41:51 -- common/autotest_common.sh@163 -- # : 00:07:40.010 00:41:51 -- common/autotest_common.sh@164 -- # export SPDK_TEST_FUZZER_TARGET 00:07:40.010 00:41:51 -- common/autotest_common.sh@165 -- # : 1 00:07:40.010 00:41:51 -- common/autotest_common.sh@166 -- # export SPDK_TEST_NVMF_MDNS 00:07:40.010 00:41:51 -- common/autotest_common.sh@167 -- # : 1 00:07:40.010 00:41:51 -- common/autotest_common.sh@168 -- # export SPDK_JSONRPC_GO_CLIENT 00:07:40.011 00:41:51 -- common/autotest_common.sh@171 -- # export SPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/lib 00:07:40.011 00:41:51 -- common/autotest_common.sh@171 -- # SPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/lib 00:07:40.011 00:41:51 -- common/autotest_common.sh@172 -- # export DPDK_LIB_DIR=/home/vagrant/spdk_repo/dpdk/build/lib 00:07:40.011 00:41:51 -- common/autotest_common.sh@172 -- # DPDK_LIB_DIR=/home/vagrant/spdk_repo/dpdk/build/lib 00:07:40.011 00:41:51 -- common/autotest_common.sh@173 -- # export VFIO_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:07:40.011 00:41:51 -- common/autotest_common.sh@173 -- # VFIO_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:07:40.011 00:41:51 -- common/autotest_common.sh@174 -- # export LD_LIBRARY_PATH=:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:07:40.011 00:41:51 -- common/autotest_common.sh@174 -- # 
LD_LIBRARY_PATH=:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:07:40.011 00:41:51 -- common/autotest_common.sh@177 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:07:40.011 00:41:51 -- common/autotest_common.sh@177 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:07:40.011 00:41:51 -- common/autotest_common.sh@181 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:07:40.011 00:41:51 -- common/autotest_common.sh@181 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:07:40.011 00:41:51 -- common/autotest_common.sh@185 -- # export PYTHONDONTWRITEBYTECODE=1 00:07:40.011 00:41:51 -- common/autotest_common.sh@185 -- # PYTHONDONTWRITEBYTECODE=1 00:07:40.011 00:41:51 -- common/autotest_common.sh@189 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:07:40.011 00:41:51 -- common/autotest_common.sh@189 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:07:40.011 00:41:51 -- common/autotest_common.sh@190 -- # export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:07:40.011 00:41:51 -- common/autotest_common.sh@190 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:07:40.011 00:41:51 -- common/autotest_common.sh@194 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:07:40.011 00:41:51 -- common/autotest_common.sh@195 -- # rm -rf /var/tmp/asan_suppression_file 00:07:40.011 00:41:51 -- common/autotest_common.sh@196 -- # cat 00:07:40.011 00:41:51 -- common/autotest_common.sh@222 -- # echo leak:libfuse3.so 00:07:40.011 00:41:51 -- common/autotest_common.sh@224 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:07:40.011 00:41:51 -- common/autotest_common.sh@224 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:07:40.011 00:41:51 -- common/autotest_common.sh@226 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:07:40.011 00:41:51 -- common/autotest_common.sh@226 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:07:40.011 00:41:51 -- common/autotest_common.sh@228 -- # '[' -z /var/spdk/dependencies ']' 00:07:40.011 00:41:51 -- common/autotest_common.sh@231 -- # export DEPENDENCY_DIR 00:07:40.011 00:41:51 -- common/autotest_common.sh@235 -- # export SPDK_BIN_DIR=/home/vagrant/spdk_repo/spdk/build/bin 00:07:40.011 00:41:51 -- 
common/autotest_common.sh@235 -- # SPDK_BIN_DIR=/home/vagrant/spdk_repo/spdk/build/bin 00:07:40.011 00:41:51 -- common/autotest_common.sh@236 -- # export SPDK_EXAMPLE_DIR=/home/vagrant/spdk_repo/spdk/build/examples 00:07:40.011 00:41:51 -- common/autotest_common.sh@236 -- # SPDK_EXAMPLE_DIR=/home/vagrant/spdk_repo/spdk/build/examples 00:07:40.011 00:41:51 -- common/autotest_common.sh@239 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:07:40.011 00:41:51 -- common/autotest_common.sh@239 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:07:40.011 00:41:51 -- common/autotest_common.sh@240 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:07:40.011 00:41:51 -- common/autotest_common.sh@240 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:07:40.011 00:41:51 -- common/autotest_common.sh@242 -- # export AR_TOOL=/home/vagrant/spdk_repo/spdk/scripts/ar-xnvme-fixer 00:07:40.011 00:41:51 -- common/autotest_common.sh@242 -- # AR_TOOL=/home/vagrant/spdk_repo/spdk/scripts/ar-xnvme-fixer 00:07:40.011 00:41:51 -- common/autotest_common.sh@245 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:07:40.011 00:41:51 -- common/autotest_common.sh@245 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:07:40.011 00:41:51 -- common/autotest_common.sh@247 -- # _LCOV_MAIN=0 00:07:40.011 00:41:51 -- common/autotest_common.sh@248 -- # _LCOV_LLVM=1 00:07:40.011 00:41:51 -- common/autotest_common.sh@249 -- # _LCOV= 00:07:40.011 00:41:51 -- common/autotest_common.sh@250 -- # [[ '' == *clang* ]] 00:07:40.011 00:41:51 -- common/autotest_common.sh@250 -- # [[ 0 -eq 1 ]] 00:07:40.011 00:41:51 -- common/autotest_common.sh@252 -- # _lcov_opt[_LCOV_LLVM]='--gcov-tool /home/vagrant/spdk_repo/spdk/test/fuzz/llvm/llvm-gcov.sh' 00:07:40.011 00:41:51 -- common/autotest_common.sh@253 -- # _lcov_opt[_LCOV_MAIN]= 00:07:40.011 00:41:51 -- common/autotest_common.sh@255 -- # lcov_opt= 00:07:40.011 00:41:51 -- common/autotest_common.sh@258 -- # '[' 0 -eq 0 ']' 00:07:40.011 00:41:51 -- common/autotest_common.sh@259 -- # export valgrind= 00:07:40.011 00:41:51 -- common/autotest_common.sh@259 -- # valgrind= 00:07:40.011 00:41:51 -- common/autotest_common.sh@265 -- # uname -s 00:07:40.011 00:41:51 -- common/autotest_common.sh@265 -- # '[' Linux = Linux ']' 00:07:40.011 00:41:51 -- common/autotest_common.sh@266 -- # HUGEMEM=4096 00:07:40.011 00:41:51 -- common/autotest_common.sh@267 -- # export CLEAR_HUGE=yes 00:07:40.011 00:41:51 -- common/autotest_common.sh@267 -- # CLEAR_HUGE=yes 00:07:40.011 00:41:51 -- common/autotest_common.sh@268 -- # [[ 0 -eq 1 ]] 00:07:40.011 00:41:51 -- common/autotest_common.sh@268 -- # [[ 0 -eq 1 ]] 00:07:40.011 00:41:51 -- common/autotest_common.sh@275 -- # MAKE=make 00:07:40.011 00:41:51 -- common/autotest_common.sh@276 -- # MAKEFLAGS=-j10 00:07:40.011 00:41:51 -- common/autotest_common.sh@292 -- # export HUGEMEM=4096 00:07:40.011 00:41:51 -- common/autotest_common.sh@292 -- # HUGEMEM=4096 00:07:40.011 00:41:51 -- common/autotest_common.sh@294 -- # '[' -z /home/vagrant/spdk_repo/spdk/../output ']' 00:07:40.011 00:41:51 -- common/autotest_common.sh@299 -- # NO_HUGE=() 00:07:40.011 00:41:51 -- common/autotest_common.sh@300 -- # TEST_MODE= 00:07:40.011 00:41:51 -- common/autotest_common.sh@301 -- # for i in "$@" 00:07:40.011 00:41:51 -- common/autotest_common.sh@302 -- # case "$i" in 00:07:40.011 00:41:51 -- common/autotest_common.sh@307 -- # TEST_TRANSPORT=tcp 00:07:40.011 00:41:51 -- common/autotest_common.sh@319 -- # [[ 
-z 72342 ]] 00:07:40.011 00:41:51 -- common/autotest_common.sh@319 -- # kill -0 72342 00:07:40.011 00:41:51 -- common/autotest_common.sh@1675 -- # set_test_storage 2147483648 00:07:40.011 00:41:51 -- common/autotest_common.sh@329 -- # [[ -v testdir ]] 00:07:40.011 00:41:51 -- common/autotest_common.sh@331 -- # local requested_size=2147483648 00:07:40.011 00:41:51 -- common/autotest_common.sh@332 -- # local mount target_dir 00:07:40.011 00:41:51 -- common/autotest_common.sh@334 -- # local -A mounts fss sizes avails uses 00:07:40.011 00:41:51 -- common/autotest_common.sh@335 -- # local source fs size avail mount use 00:07:40.011 00:41:51 -- common/autotest_common.sh@337 -- # local storage_fallback storage_candidates 00:07:40.011 00:41:51 -- common/autotest_common.sh@339 -- # mktemp -udt spdk.XXXXXX 00:07:40.011 00:41:51 -- common/autotest_common.sh@339 -- # storage_fallback=/tmp/spdk.fU1Vxd 00:07:40.011 00:41:51 -- common/autotest_common.sh@344 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:07:40.011 00:41:51 -- common/autotest_common.sh@346 -- # [[ -n '' ]] 00:07:40.011 00:41:51 -- common/autotest_common.sh@351 -- # [[ -n '' ]] 00:07:40.011 00:41:51 -- common/autotest_common.sh@356 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/nvmf/target /tmp/spdk.fU1Vxd/tests/target /tmp/spdk.fU1Vxd 00:07:40.011 00:41:51 -- common/autotest_common.sh@359 -- # requested_size=2214592512 00:07:40.011 00:41:51 -- common/autotest_common.sh@361 -- # read -r source fs size use avail _ mount 00:07:40.011 00:41:51 -- common/autotest_common.sh@328 -- # df -T 00:07:40.011 00:41:51 -- common/autotest_common.sh@328 -- # grep -v Filesystem 00:07:40.012 00:41:51 -- common/autotest_common.sh@362 -- # mounts["$mount"]=/dev/vda5 00:07:40.012 00:41:51 -- common/autotest_common.sh@362 -- # fss["$mount"]=btrfs 00:07:40.012 00:41:51 -- common/autotest_common.sh@363 -- # avails["$mount"]=13293785088 00:07:40.012 00:41:51 -- common/autotest_common.sh@363 -- # sizes["$mount"]=20314062848 00:07:40.012 00:41:51 -- common/autotest_common.sh@364 -- # uses["$mount"]=6289764352 00:07:40.012 00:41:51 -- common/autotest_common.sh@361 -- # read -r source fs size use avail _ mount 00:07:40.012 00:41:51 -- common/autotest_common.sh@362 -- # mounts["$mount"]=devtmpfs 00:07:40.012 00:41:51 -- common/autotest_common.sh@362 -- # fss["$mount"]=devtmpfs 00:07:40.012 00:41:51 -- common/autotest_common.sh@363 -- # avails["$mount"]=4194304 00:07:40.012 00:41:51 -- common/autotest_common.sh@363 -- # sizes["$mount"]=4194304 00:07:40.012 00:41:51 -- common/autotest_common.sh@364 -- # uses["$mount"]=0 00:07:40.012 00:41:51 -- common/autotest_common.sh@361 -- # read -r source fs size use avail _ mount 00:07:40.012 00:41:51 -- common/autotest_common.sh@362 -- # mounts["$mount"]=tmpfs 00:07:40.012 00:41:51 -- common/autotest_common.sh@362 -- # fss["$mount"]=tmpfs 00:07:40.012 00:41:51 -- common/autotest_common.sh@363 -- # avails["$mount"]=6265171968 00:07:40.012 00:41:51 -- common/autotest_common.sh@363 -- # sizes["$mount"]=6266429440 00:07:40.012 00:41:51 -- common/autotest_common.sh@364 -- # uses["$mount"]=1257472 00:07:40.012 00:41:51 -- common/autotest_common.sh@361 -- # read -r source fs size use avail _ mount 00:07:40.012 00:41:51 -- common/autotest_common.sh@362 -- # mounts["$mount"]=tmpfs 00:07:40.012 00:41:51 -- common/autotest_common.sh@362 -- # fss["$mount"]=tmpfs 00:07:40.012 00:41:51 -- common/autotest_common.sh@363 -- # avails["$mount"]=2493755392 00:07:40.012 00:41:51 -- 
common/autotest_common.sh@363 -- # sizes["$mount"]=2506571776 00:07:40.012 00:41:51 -- common/autotest_common.sh@364 -- # uses["$mount"]=12816384 00:07:40.012 00:41:51 -- common/autotest_common.sh@361 -- # read -r source fs size use avail _ mount 00:07:40.012 00:41:51 -- common/autotest_common.sh@362 -- # mounts["$mount"]=/dev/vda5 00:07:40.012 00:41:51 -- common/autotest_common.sh@362 -- # fss["$mount"]=btrfs 00:07:40.012 00:41:51 -- common/autotest_common.sh@363 -- # avails["$mount"]=13293785088 00:07:40.012 00:41:51 -- common/autotest_common.sh@363 -- # sizes["$mount"]=20314062848 00:07:40.012 00:41:51 -- common/autotest_common.sh@364 -- # uses["$mount"]=6289764352 00:07:40.012 00:41:51 -- common/autotest_common.sh@361 -- # read -r source fs size use avail _ mount 00:07:40.012 00:41:51 -- common/autotest_common.sh@362 -- # mounts["$mount"]=tmpfs 00:07:40.012 00:41:51 -- common/autotest_common.sh@362 -- # fss["$mount"]=tmpfs 00:07:40.012 00:41:51 -- common/autotest_common.sh@363 -- # avails["$mount"]=6266290176 00:07:40.012 00:41:51 -- common/autotest_common.sh@363 -- # sizes["$mount"]=6266429440 00:07:40.012 00:41:51 -- common/autotest_common.sh@364 -- # uses["$mount"]=139264 00:07:40.012 00:41:51 -- common/autotest_common.sh@361 -- # read -r source fs size use avail _ mount 00:07:40.012 00:41:51 -- common/autotest_common.sh@362 -- # mounts["$mount"]=/dev/vda2 00:07:40.012 00:41:51 -- common/autotest_common.sh@362 -- # fss["$mount"]=ext4 00:07:40.012 00:41:51 -- common/autotest_common.sh@363 -- # avails["$mount"]=840085504 00:07:40.012 00:41:51 -- common/autotest_common.sh@363 -- # sizes["$mount"]=1012768768 00:07:40.012 00:41:51 -- common/autotest_common.sh@364 -- # uses["$mount"]=103477248 00:07:40.012 00:41:51 -- common/autotest_common.sh@361 -- # read -r source fs size use avail _ mount 00:07:40.012 00:41:51 -- common/autotest_common.sh@362 -- # mounts["$mount"]=/dev/vda3 00:07:40.012 00:41:51 -- common/autotest_common.sh@362 -- # fss["$mount"]=vfat 00:07:40.012 00:41:51 -- common/autotest_common.sh@363 -- # avails["$mount"]=91617280 00:07:40.012 00:41:51 -- common/autotest_common.sh@363 -- # sizes["$mount"]=104607744 00:07:40.012 00:41:51 -- common/autotest_common.sh@364 -- # uses["$mount"]=12990464 00:07:40.012 00:41:51 -- common/autotest_common.sh@361 -- # read -r source fs size use avail _ mount 00:07:40.012 00:41:51 -- common/autotest_common.sh@362 -- # mounts["$mount"]=tmpfs 00:07:40.012 00:41:51 -- common/autotest_common.sh@362 -- # fss["$mount"]=tmpfs 00:07:40.012 00:41:51 -- common/autotest_common.sh@363 -- # avails["$mount"]=1253273600 00:07:40.012 00:41:51 -- common/autotest_common.sh@363 -- # sizes["$mount"]=1253285888 00:07:40.012 00:41:51 -- common/autotest_common.sh@364 -- # uses["$mount"]=12288 00:07:40.012 00:41:51 -- common/autotest_common.sh@361 -- # read -r source fs size use avail _ mount 00:07:40.012 00:41:51 -- common/autotest_common.sh@362 -- # mounts["$mount"]=:/mnt/jenkins_nvme/jenkins/workspace/nvmf-tcp-vg-autotest/fedora39-libvirt/output 00:07:40.012 00:41:51 -- common/autotest_common.sh@362 -- # fss["$mount"]=fuse.sshfs 00:07:40.012 00:41:51 -- common/autotest_common.sh@363 -- # avails["$mount"]=98360320000 00:07:40.012 00:41:51 -- common/autotest_common.sh@363 -- # sizes["$mount"]=105088212992 00:07:40.012 00:41:51 -- common/autotest_common.sh@364 -- # uses["$mount"]=1342459904 00:07:40.012 00:41:51 -- common/autotest_common.sh@361 -- # read -r source fs size use avail _ mount 00:07:40.012 00:41:51 -- common/autotest_common.sh@367 -- # printf '* Looking 
for test storage...\n' 00:07:40.012 * Looking for test storage... 00:07:40.012 00:41:51 -- common/autotest_common.sh@369 -- # local target_space new_size 00:07:40.012 00:41:51 -- common/autotest_common.sh@370 -- # for target_dir in "${storage_candidates[@]}" 00:07:40.012 00:41:51 -- common/autotest_common.sh@373 -- # df /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:07:40.012 00:41:51 -- common/autotest_common.sh@373 -- # awk '$1 !~ /Filesystem/{print $6}' 00:07:40.012 00:41:51 -- common/autotest_common.sh@373 -- # mount=/home 00:07:40.012 00:41:51 -- common/autotest_common.sh@375 -- # target_space=13293785088 00:07:40.012 00:41:51 -- common/autotest_common.sh@376 -- # (( target_space == 0 || target_space < requested_size )) 00:07:40.012 00:41:51 -- common/autotest_common.sh@379 -- # (( target_space >= requested_size )) 00:07:40.012 00:41:51 -- common/autotest_common.sh@381 -- # [[ btrfs == tmpfs ]] 00:07:40.012 00:41:51 -- common/autotest_common.sh@381 -- # [[ btrfs == ramfs ]] 00:07:40.012 00:41:51 -- common/autotest_common.sh@381 -- # [[ /home == / ]] 00:07:40.012 00:41:51 -- common/autotest_common.sh@388 -- # export SPDK_TEST_STORAGE=/home/vagrant/spdk_repo/spdk/test/nvmf/target 00:07:40.012 00:41:51 -- common/autotest_common.sh@388 -- # SPDK_TEST_STORAGE=/home/vagrant/spdk_repo/spdk/test/nvmf/target 00:07:40.012 00:41:51 -- common/autotest_common.sh@389 -- # printf '* Found test storage at %s\n' /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:07:40.012 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:07:40.012 00:41:51 -- common/autotest_common.sh@390 -- # return 0 00:07:40.012 00:41:51 -- common/autotest_common.sh@1677 -- # set -o errtrace 00:07:40.012 00:41:51 -- common/autotest_common.sh@1678 -- # shopt -s extdebug 00:07:40.012 00:41:51 -- common/autotest_common.sh@1679 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:07:40.012 00:41:51 -- common/autotest_common.sh@1681 -- # PS4=' \t -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:07:40.012 00:41:51 -- common/autotest_common.sh@1682 -- # true 00:07:40.012 00:41:51 -- common/autotest_common.sh@1684 -- # xtrace_fd 00:07:40.012 00:41:51 -- common/autotest_common.sh@25 -- # [[ -n 14 ]] 00:07:40.012 00:41:51 -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/14 ]] 00:07:40.012 00:41:51 -- common/autotest_common.sh@27 -- # exec 00:07:40.012 00:41:51 -- common/autotest_common.sh@29 -- # exec 00:07:40.012 00:41:51 -- common/autotest_common.sh@31 -- # xtrace_restore 00:07:40.012 00:41:51 -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 
0 : 0 - 1]' 00:07:40.012 00:41:51 -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:07:40.012 00:41:51 -- common/autotest_common.sh@18 -- # set -x 00:07:40.012 00:41:51 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:07:40.012 00:41:51 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:07:40.012 00:41:51 -- common/autotest_common.sh@1690 -- # lcov --version 00:07:40.012 00:41:51 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:07:40.012 00:41:51 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:07:40.012 00:41:51 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:07:40.012 00:41:51 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:07:40.012 00:41:51 -- scripts/common.sh@335 -- # IFS=.-: 00:07:40.012 00:41:51 -- scripts/common.sh@335 -- # read -ra ver1 00:07:40.012 00:41:51 -- scripts/common.sh@336 -- # IFS=.-: 00:07:40.012 00:41:51 -- scripts/common.sh@336 -- # read -ra ver2 00:07:40.012 00:41:51 -- scripts/common.sh@337 -- # local 'op=<' 00:07:40.012 00:41:51 -- scripts/common.sh@339 -- # ver1_l=2 00:07:40.012 00:41:51 -- scripts/common.sh@340 -- # ver2_l=1 00:07:40.012 00:41:51 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:07:40.012 00:41:51 -- scripts/common.sh@343 -- # case "$op" in 00:07:40.012 00:41:51 -- scripts/common.sh@344 -- # : 1 00:07:40.012 00:41:51 -- scripts/common.sh@363 -- # (( v = 0 )) 00:07:40.012 00:41:51 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:40.012 00:41:51 -- scripts/common.sh@364 -- # decimal 1 00:07:40.012 00:41:51 -- scripts/common.sh@352 -- # local d=1 00:07:40.012 00:41:51 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:40.012 00:41:51 -- scripts/common.sh@354 -- # echo 1 00:07:40.012 00:41:51 -- scripts/common.sh@364 -- # ver1[v]=1 00:07:40.012 00:41:51 -- scripts/common.sh@365 -- # decimal 2 00:07:40.012 00:41:51 -- scripts/common.sh@352 -- # local d=2 00:07:40.013 00:41:51 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:40.013 00:41:51 -- scripts/common.sh@354 -- # echo 2 00:07:40.013 00:41:51 -- scripts/common.sh@365 -- # ver2[v]=2 00:07:40.013 00:41:51 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:07:40.013 00:41:51 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:07:40.013 00:41:51 -- scripts/common.sh@367 -- # return 0 00:07:40.013 00:41:51 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:40.013 00:41:51 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:07:40.013 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:40.013 --rc genhtml_branch_coverage=1 00:07:40.013 --rc genhtml_function_coverage=1 00:07:40.013 --rc genhtml_legend=1 00:07:40.013 --rc geninfo_all_blocks=1 00:07:40.013 --rc geninfo_unexecuted_blocks=1 00:07:40.013 00:07:40.013 ' 00:07:40.013 00:41:51 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:07:40.013 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:40.013 --rc genhtml_branch_coverage=1 00:07:40.013 --rc genhtml_function_coverage=1 00:07:40.013 --rc genhtml_legend=1 00:07:40.013 --rc geninfo_all_blocks=1 00:07:40.013 --rc geninfo_unexecuted_blocks=1 00:07:40.013 00:07:40.013 ' 00:07:40.013 00:41:51 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:07:40.013 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:40.013 --rc genhtml_branch_coverage=1 00:07:40.013 --rc genhtml_function_coverage=1 00:07:40.013 --rc genhtml_legend=1 00:07:40.013 --rc geninfo_all_blocks=1 00:07:40.013 --rc 
geninfo_unexecuted_blocks=1 00:07:40.013 00:07:40.013 ' 00:07:40.013 00:41:51 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:07:40.013 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:40.013 --rc genhtml_branch_coverage=1 00:07:40.013 --rc genhtml_function_coverage=1 00:07:40.013 --rc genhtml_legend=1 00:07:40.013 --rc geninfo_all_blocks=1 00:07:40.013 --rc geninfo_unexecuted_blocks=1 00:07:40.013 00:07:40.013 ' 00:07:40.013 00:41:51 -- target/filesystem.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:07:40.013 00:41:51 -- nvmf/common.sh@7 -- # uname -s 00:07:40.013 00:41:51 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:40.013 00:41:51 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:40.013 00:41:51 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:40.013 00:41:51 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:40.013 00:41:51 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:40.013 00:41:51 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:40.013 00:41:51 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:40.013 00:41:51 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:40.013 00:41:51 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:40.013 00:41:51 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:40.013 00:41:51 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:15939434-fa82-47b6-ae7c-b62ba203cee8 00:07:40.013 00:41:51 -- nvmf/common.sh@18 -- # NVME_HOSTID=15939434-fa82-47b6-ae7c-b62ba203cee8 00:07:40.013 00:41:51 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:40.013 00:41:51 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:40.013 00:41:51 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:07:40.013 00:41:51 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:07:40.013 00:41:51 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:40.013 00:41:51 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:40.013 00:41:51 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:40.013 00:41:51 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:40.013 00:41:51 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:40.013 00:41:51 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:40.013 00:41:51 -- paths/export.sh@5 -- # export PATH 00:07:40.013 00:41:51 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:40.013 00:41:51 -- nvmf/common.sh@46 -- # : 0 00:07:40.013 00:41:51 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:07:40.013 00:41:51 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:07:40.013 00:41:51 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:07:40.013 00:41:51 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:40.013 00:41:51 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:40.013 00:41:51 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:07:40.013 00:41:51 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:07:40.013 00:41:51 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:07:40.013 00:41:51 -- target/filesystem.sh@12 -- # MALLOC_BDEV_SIZE=512 00:07:40.013 00:41:51 -- target/filesystem.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:07:40.013 00:41:51 -- target/filesystem.sh@15 -- # nvmftestinit 00:07:40.014 00:41:51 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:07:40.014 00:41:51 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:40.014 00:41:51 -- nvmf/common.sh@436 -- # prepare_net_devs 00:07:40.014 00:41:51 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:07:40.014 00:41:51 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:07:40.014 00:41:51 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:40.014 00:41:51 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:40.014 00:41:51 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:40.014 00:41:51 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:07:40.014 00:41:51 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:07:40.014 00:41:51 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:07:40.014 00:41:51 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:07:40.014 00:41:51 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:07:40.014 00:41:51 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:07:40.014 00:41:51 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:40.014 00:41:51 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:40.014 00:41:51 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:07:40.014 00:41:51 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:07:40.014 00:41:51 -- nvmf/common.sh@144 -- # 
NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:07:40.014 00:41:51 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:07:40.014 00:41:51 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:07:40.014 00:41:51 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:40.014 00:41:51 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:07:40.014 00:41:51 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:07:40.014 00:41:51 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:07:40.014 00:41:51 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:07:40.014 00:41:51 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:07:40.014 00:41:51 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:07:40.014 Cannot find device "nvmf_tgt_br" 00:07:40.014 00:41:51 -- nvmf/common.sh@154 -- # true 00:07:40.014 00:41:51 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:07:40.014 Cannot find device "nvmf_tgt_br2" 00:07:40.014 00:41:51 -- nvmf/common.sh@155 -- # true 00:07:40.014 00:41:51 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:07:40.014 00:41:51 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:07:40.014 Cannot find device "nvmf_tgt_br" 00:07:40.014 00:41:51 -- nvmf/common.sh@157 -- # true 00:07:40.014 00:41:51 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:07:40.014 Cannot find device "nvmf_tgt_br2" 00:07:40.014 00:41:51 -- nvmf/common.sh@158 -- # true 00:07:40.014 00:41:51 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:07:40.014 00:41:51 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:07:40.014 00:41:51 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:07:40.014 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:07:40.014 00:41:51 -- nvmf/common.sh@161 -- # true 00:07:40.014 00:41:51 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:07:40.014 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:07:40.014 00:41:51 -- nvmf/common.sh@162 -- # true 00:07:40.014 00:41:51 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:07:40.014 00:41:51 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:07:40.014 00:41:51 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:07:40.014 00:41:51 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:07:40.014 00:41:51 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:07:40.014 00:41:51 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:07:40.014 00:41:51 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:07:40.014 00:41:51 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:07:40.014 00:41:51 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:07:40.014 00:41:51 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:07:40.014 00:41:51 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:07:40.014 00:41:51 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:07:40.014 00:41:51 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:07:40.014 00:41:51 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:07:40.014 00:41:51 
-- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:07:40.014 00:41:51 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:07:40.014 00:41:51 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:07:40.014 00:41:51 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:07:40.014 00:41:51 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:07:40.014 00:41:51 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:07:40.014 00:41:51 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:07:40.014 00:41:51 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:07:40.014 00:41:51 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:07:40.014 00:41:51 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:07:40.014 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:40.014 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.134 ms 00:07:40.014 00:07:40.014 --- 10.0.0.2 ping statistics --- 00:07:40.014 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:40.014 rtt min/avg/max/mdev = 0.134/0.134/0.134/0.000 ms 00:07:40.014 00:41:51 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:07:40.014 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:07:40.014 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.067 ms 00:07:40.014 00:07:40.014 --- 10.0.0.3 ping statistics --- 00:07:40.014 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:40.014 rtt min/avg/max/mdev = 0.067/0.067/0.067/0.000 ms 00:07:40.014 00:41:51 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:07:40.014 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:07:40.014 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.042 ms 00:07:40.014 00:07:40.014 --- 10.0.0.1 ping statistics --- 00:07:40.014 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:40.014 rtt min/avg/max/mdev = 0.042/0.042/0.042/0.000 ms 00:07:40.014 00:41:51 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:40.014 00:41:51 -- nvmf/common.sh@421 -- # return 0 00:07:40.014 00:41:51 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:07:40.014 00:41:51 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:40.014 00:41:51 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:07:40.014 00:41:51 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:07:40.014 00:41:51 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:40.014 00:41:51 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:07:40.014 00:41:51 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:07:40.014 00:41:51 -- target/filesystem.sh@105 -- # run_test nvmf_filesystem_no_in_capsule nvmf_filesystem_part 0 00:07:40.014 00:41:51 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:07:40.014 00:41:51 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:40.014 00:41:51 -- common/autotest_common.sh@10 -- # set +x 00:07:40.014 ************************************ 00:07:40.014 START TEST nvmf_filesystem_no_in_capsule 00:07:40.014 ************************************ 00:07:40.014 00:41:51 -- common/autotest_common.sh@1114 -- # nvmf_filesystem_part 0 00:07:40.014 00:41:51 -- target/filesystem.sh@47 -- # in_capsule=0 00:07:40.014 00:41:51 -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:07:40.014 00:41:51 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:07:40.014 00:41:51 -- common/autotest_common.sh@722 -- # 
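For reference, the test network assembled above (a veth pair whose far end lives in the nvmf_tgt_ns_spdk namespace, bridged back to the initiator side, with TCP port 4420 opened and connectivity verified by ping) can be reproduced by hand with roughly the following commands; interface names and addresses are the ones printed in the log, and root privileges are assumed:

    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
    ip link add nvmf_br type bridge
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br master nvmf_br
    for dev in nvmf_init_if nvmf_init_br nvmf_tgt_br nvmf_br; do ip link set "$dev" up; done
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
    ping -c 1 10.0.0.2                                   # initiator side reaches the target namespace
    ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1    # and the reverse direction

The second target interface (nvmf_tgt_if2 / 10.0.0.3) is set up the same way and is left out of the sketch.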
xtrace_disable 00:07:40.014 00:41:51 -- common/autotest_common.sh@10 -- # set +x 00:07:40.014 00:41:51 -- nvmf/common.sh@469 -- # nvmfpid=72525 00:07:40.015 00:41:51 -- nvmf/common.sh@470 -- # waitforlisten 72525 00:07:40.015 00:41:51 -- common/autotest_common.sh@829 -- # '[' -z 72525 ']' 00:07:40.015 00:41:51 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:40.015 00:41:51 -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:40.015 00:41:51 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:40.015 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:40.015 00:41:51 -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:40.015 00:41:51 -- common/autotest_common.sh@10 -- # set +x 00:07:40.015 00:41:51 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:07:40.015 [2024-12-03 00:41:51.690936] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:07:40.015 [2024-12-03 00:41:51.691022] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:40.015 [2024-12-03 00:41:51.836111] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:40.015 [2024-12-03 00:41:51.917490] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:07:40.015 [2024-12-03 00:41:51.918302] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:40.015 [2024-12-03 00:41:51.918844] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:40.015 [2024-12-03 00:41:51.919366] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
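The target application itself is launched inside that namespace; stripped of the harness plumbing it amounts to a sketch like the following (binary path and flags as in the log; the wait loop only stands in for waitforlisten and is not its actual implementation):

    ip netns exec nvmf_tgt_ns_spdk \
        /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
    nvmfpid=$!
    # block until the app has created its RPC socket; only then is it safe to issue rpc.py calls
    until [ -S /var/tmp/spdk.sock ]; do sleep 0.1; done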
00:07:40.015 [2024-12-03 00:41:51.920037] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:40.015 [2024-12-03 00:41:51.920179] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:40.015 [2024-12-03 00:41:51.920532] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:07:40.015 [2024-12-03 00:41:51.920602] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:40.273 00:41:52 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:40.273 00:41:52 -- common/autotest_common.sh@862 -- # return 0 00:07:40.273 00:41:52 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:07:40.273 00:41:52 -- common/autotest_common.sh@728 -- # xtrace_disable 00:07:40.273 00:41:52 -- common/autotest_common.sh@10 -- # set +x 00:07:40.273 00:41:52 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:40.273 00:41:52 -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:07:40.273 00:41:52 -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:07:40.273 00:41:52 -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:40.273 00:41:52 -- common/autotest_common.sh@10 -- # set +x 00:07:40.532 [2024-12-03 00:41:52.794249] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:40.532 00:41:52 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:40.532 00:41:52 -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:07:40.532 00:41:52 -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:40.532 00:41:52 -- common/autotest_common.sh@10 -- # set +x 00:07:40.532 Malloc1 00:07:40.532 00:41:53 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:40.532 00:41:53 -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:07:40.532 00:41:53 -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:40.532 00:41:53 -- common/autotest_common.sh@10 -- # set +x 00:07:40.532 00:41:53 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:40.532 00:41:53 -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:07:40.532 00:41:53 -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:40.532 00:41:53 -- common/autotest_common.sh@10 -- # set +x 00:07:40.532 00:41:53 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:40.532 00:41:53 -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:40.532 00:41:53 -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:40.532 00:41:53 -- common/autotest_common.sh@10 -- # set +x 00:07:40.532 [2024-12-03 00:41:53.032496] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:40.532 00:41:53 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:40.532 00:41:53 -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:07:40.532 00:41:53 -- common/autotest_common.sh@1367 -- # local bdev_name=Malloc1 00:07:40.532 00:41:53 -- common/autotest_common.sh@1368 -- # local bdev_info 00:07:40.532 00:41:53 -- common/autotest_common.sh@1369 -- # local bs 00:07:40.532 00:41:53 -- common/autotest_common.sh@1370 -- # local nb 00:07:40.532 00:41:53 -- common/autotest_common.sh@1371 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:07:40.532 00:41:53 -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:40.532 00:41:53 -- common/autotest_common.sh@10 -- # set +x 00:07:40.790 
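The rpc_cmd calls above map onto plain rpc.py invocations; as a sketch (scripts/rpc.py from the same SPDK tree, parameters exactly as logged):

    rpc.py nvmf_create_transport -t tcp -o -u 8192 -c 0   # no in-capsule data for this first pass
    rpc.py bdev_malloc_create 512 512 -b Malloc1          # 512 MB ramdisk with a 512-byte block size
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420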
00:41:53 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:40.790 00:41:53 -- common/autotest_common.sh@1371 -- # bdev_info='[ 00:07:40.790 { 00:07:40.790 "aliases": [ 00:07:40.790 "703bfd6d-9364-4eee-ab80-a7f757bcd58c" 00:07:40.790 ], 00:07:40.790 "assigned_rate_limits": { 00:07:40.790 "r_mbytes_per_sec": 0, 00:07:40.790 "rw_ios_per_sec": 0, 00:07:40.790 "rw_mbytes_per_sec": 0, 00:07:40.790 "w_mbytes_per_sec": 0 00:07:40.790 }, 00:07:40.790 "block_size": 512, 00:07:40.790 "claim_type": "exclusive_write", 00:07:40.790 "claimed": true, 00:07:40.790 "driver_specific": {}, 00:07:40.790 "memory_domains": [ 00:07:40.790 { 00:07:40.790 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:40.790 "dma_device_type": 2 00:07:40.790 } 00:07:40.790 ], 00:07:40.790 "name": "Malloc1", 00:07:40.790 "num_blocks": 1048576, 00:07:40.790 "product_name": "Malloc disk", 00:07:40.790 "supported_io_types": { 00:07:40.790 "abort": true, 00:07:40.790 "compare": false, 00:07:40.790 "compare_and_write": false, 00:07:40.790 "flush": true, 00:07:40.790 "nvme_admin": false, 00:07:40.790 "nvme_io": false, 00:07:40.790 "read": true, 00:07:40.790 "reset": true, 00:07:40.790 "unmap": true, 00:07:40.790 "write": true, 00:07:40.790 "write_zeroes": true 00:07:40.790 }, 00:07:40.790 "uuid": "703bfd6d-9364-4eee-ab80-a7f757bcd58c", 00:07:40.790 "zoned": false 00:07:40.790 } 00:07:40.790 ]' 00:07:40.790 00:41:53 -- common/autotest_common.sh@1372 -- # jq '.[] .block_size' 00:07:40.790 00:41:53 -- common/autotest_common.sh@1372 -- # bs=512 00:07:40.790 00:41:53 -- common/autotest_common.sh@1373 -- # jq '.[] .num_blocks' 00:07:40.790 00:41:53 -- common/autotest_common.sh@1373 -- # nb=1048576 00:07:40.790 00:41:53 -- common/autotest_common.sh@1376 -- # bdev_size=512 00:07:40.790 00:41:53 -- common/autotest_common.sh@1377 -- # echo 512 00:07:40.790 00:41:53 -- target/filesystem.sh@58 -- # malloc_size=536870912 00:07:40.790 00:41:53 -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:15939434-fa82-47b6-ae7c-b62ba203cee8 --hostid=15939434-fa82-47b6-ae7c-b62ba203cee8 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:07:41.048 00:41:53 -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:07:41.048 00:41:53 -- common/autotest_common.sh@1187 -- # local i=0 00:07:41.048 00:41:53 -- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 00:07:41.048 00:41:53 -- common/autotest_common.sh@1189 -- # [[ -n '' ]] 00:07:41.048 00:41:53 -- common/autotest_common.sh@1194 -- # sleep 2 00:07:42.947 00:41:55 -- common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 00:07:42.947 00:41:55 -- common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL 00:07:42.947 00:41:55 -- common/autotest_common.sh@1196 -- # grep -c SPDKISFASTANDAWESOME 00:07:42.947 00:41:55 -- common/autotest_common.sh@1196 -- # nvme_devices=1 00:07:42.947 00:41:55 -- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:07:42.947 00:41:55 -- common/autotest_common.sh@1197 -- # return 0 00:07:42.947 00:41:55 -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:07:42.947 00:41:55 -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:07:42.947 00:41:55 -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:07:42.947 00:41:55 -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:07:42.947 00:41:55 -- setup/common.sh@76 -- # local dev=nvme0n1 00:07:42.947 00:41:55 -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:07:42.947 00:41:55 -- 
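The size check and the host-side attach shown above boil down to roughly the following; hostnqn and hostid are the values printed in the log, and the polling loop stands in for waitforserial:

    bs=$(rpc.py bdev_get_bdevs -b Malloc1 | jq '.[] .block_size')    # 512
    nb=$(rpc.py bdev_get_bdevs -b Malloc1 | jq '.[] .num_blocks')    # 1048576
    malloc_size=$((bs * nb))                                         # 536870912 bytes
    nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 \
        --hostnqn=nqn.2014-08.org.nvmexpress:uuid:15939434-fa82-47b6-ae7c-b62ba203cee8 \
        --hostid=15939434-fa82-47b6-ae7c-b62ba203cee8
    # wait for the namespace to appear as a block device with the expected serial
    until lsblk -l -o NAME,SERIAL | grep -qw SPDKISFASTANDAWESOME; do sleep 1; done

The size reported for the resulting nvme0n1 device is then compared against malloc_size before any filesystem is created on it.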
setup/common.sh@80 -- # echo 536870912 00:07:42.947 00:41:55 -- target/filesystem.sh@64 -- # nvme_size=536870912 00:07:42.947 00:41:55 -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:07:42.947 00:41:55 -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:07:42.947 00:41:55 -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:07:42.947 00:41:55 -- target/filesystem.sh@69 -- # partprobe 00:07:43.205 00:41:55 -- target/filesystem.sh@70 -- # sleep 1 00:07:44.139 00:41:56 -- target/filesystem.sh@76 -- # '[' 0 -eq 0 ']' 00:07:44.139 00:41:56 -- target/filesystem.sh@77 -- # run_test filesystem_ext4 nvmf_filesystem_create ext4 nvme0n1 00:07:44.139 00:41:56 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:07:44.139 00:41:56 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:44.139 00:41:56 -- common/autotest_common.sh@10 -- # set +x 00:07:44.139 ************************************ 00:07:44.139 START TEST filesystem_ext4 00:07:44.139 ************************************ 00:07:44.139 00:41:56 -- common/autotest_common.sh@1114 -- # nvmf_filesystem_create ext4 nvme0n1 00:07:44.139 00:41:56 -- target/filesystem.sh@18 -- # fstype=ext4 00:07:44.139 00:41:56 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:07:44.139 00:41:56 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:07:44.139 00:41:56 -- common/autotest_common.sh@912 -- # local fstype=ext4 00:07:44.139 00:41:56 -- common/autotest_common.sh@913 -- # local dev_name=/dev/nvme0n1p1 00:07:44.139 00:41:56 -- common/autotest_common.sh@914 -- # local i=0 00:07:44.139 00:41:56 -- common/autotest_common.sh@915 -- # local force 00:07:44.139 00:41:56 -- common/autotest_common.sh@917 -- # '[' ext4 = ext4 ']' 00:07:44.139 00:41:56 -- common/autotest_common.sh@918 -- # force=-F 00:07:44.139 00:41:56 -- common/autotest_common.sh@923 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:07:44.139 mke2fs 1.47.0 (5-Feb-2023) 00:07:44.139 Discarding device blocks: 0/522240 done 00:07:44.139 Creating filesystem with 522240 1k blocks and 130560 inodes 00:07:44.139 Filesystem UUID: cf602c5d-387b-43bf-a3c2-eb832391884c 00:07:44.139 Superblock backups stored on blocks: 00:07:44.139 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:07:44.139 00:07:44.139 Allocating group tables: 0/64 done 00:07:44.139 Writing inode tables: 0/64 done 00:07:44.398 Creating journal (8192 blocks): done 00:07:44.398 Writing superblocks and filesystem accounting information: 0/64 done 00:07:44.398 00:07:44.398 00:41:56 -- common/autotest_common.sh@931 -- # return 0 00:07:44.398 00:41:56 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:07:49.666 00:42:01 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:07:49.666 00:42:02 -- target/filesystem.sh@25 -- # sync 00:07:49.666 00:42:02 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:07:49.666 00:42:02 -- target/filesystem.sh@27 -- # sync 00:07:49.666 00:42:02 -- target/filesystem.sh@29 -- # i=0 00:07:49.666 00:42:02 -- target/filesystem.sh@30 -- # umount /mnt/device 00:07:49.666 00:42:02 -- target/filesystem.sh@37 -- # kill -0 72525 00:07:49.666 00:42:02 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:07:49.666 00:42:02 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:07:49.666 00:42:02 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:07:49.666 00:42:02 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:07:49.666 00:07:49.666 real 0m5.575s 00:07:49.666 user 0m0.023s 00:07:49.666 sys 0m0.062s 00:07:49.666 
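Each filesystem_* sub-test that follows exercises the same short sequence on the exported namespace; in outline (device, partition, and mount point as logged, only the mkfs command differs per filesystem):

    parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100%
    partprobe; sleep 1
    mkfs.ext4 -F /dev/nvme0n1p1      # the btrfs and xfs runs use mkfs.btrfs -f and mkfs.xfs -f instead
    mount /dev/nvme0n1p1 /mnt/device
    touch /mnt/device/aaa; sync
    rm /mnt/device/aaa; sync
    umount /mnt/device
    kill -0 "$nvmfpid"               # the target process must still be alive after the I/O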
00:42:02 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:49.666 00:42:02 -- common/autotest_common.sh@10 -- # set +x 00:07:49.666 ************************************ 00:07:49.666 END TEST filesystem_ext4 00:07:49.666 ************************************ 00:07:49.666 00:42:02 -- target/filesystem.sh@78 -- # run_test filesystem_btrfs nvmf_filesystem_create btrfs nvme0n1 00:07:49.666 00:42:02 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:07:49.666 00:42:02 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:49.666 00:42:02 -- common/autotest_common.sh@10 -- # set +x 00:07:49.666 ************************************ 00:07:49.666 START TEST filesystem_btrfs 00:07:49.666 ************************************ 00:07:49.666 00:42:02 -- common/autotest_common.sh@1114 -- # nvmf_filesystem_create btrfs nvme0n1 00:07:49.666 00:42:02 -- target/filesystem.sh@18 -- # fstype=btrfs 00:07:49.666 00:42:02 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:07:49.666 00:42:02 -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:07:49.666 00:42:02 -- common/autotest_common.sh@912 -- # local fstype=btrfs 00:07:49.666 00:42:02 -- common/autotest_common.sh@913 -- # local dev_name=/dev/nvme0n1p1 00:07:49.666 00:42:02 -- common/autotest_common.sh@914 -- # local i=0 00:07:49.666 00:42:02 -- common/autotest_common.sh@915 -- # local force 00:07:49.666 00:42:02 -- common/autotest_common.sh@917 -- # '[' btrfs = ext4 ']' 00:07:49.666 00:42:02 -- common/autotest_common.sh@920 -- # force=-f 00:07:49.666 00:42:02 -- common/autotest_common.sh@923 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:07:49.924 btrfs-progs v6.8.1 00:07:49.924 See https://btrfs.readthedocs.io for more information. 00:07:49.924 00:07:49.924 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 
00:07:49.924 NOTE: several default settings have changed in version 5.15, please make sure 00:07:49.924 this does not affect your deployments: 00:07:49.924 - DUP for metadata (-m dup) 00:07:49.924 - enabled no-holes (-O no-holes) 00:07:49.924 - enabled free-space-tree (-R free-space-tree) 00:07:49.924 00:07:49.924 Label: (null) 00:07:49.924 UUID: 7f971de0-c844-4d8e-b765-24186af648c6 00:07:49.924 Node size: 16384 00:07:49.924 Sector size: 4096 (CPU page size: 4096) 00:07:49.924 Filesystem size: 510.00MiB 00:07:49.924 Block group profiles: 00:07:49.924 Data: single 8.00MiB 00:07:49.924 Metadata: DUP 32.00MiB 00:07:49.924 System: DUP 8.00MiB 00:07:49.924 SSD detected: yes 00:07:49.924 Zoned device: no 00:07:49.924 Features: extref, skinny-metadata, no-holes, free-space-tree 00:07:49.924 Checksum: crc32c 00:07:49.924 Number of devices: 1 00:07:49.924 Devices: 00:07:49.924 ID SIZE PATH 00:07:49.924 1 510.00MiB /dev/nvme0n1p1 00:07:49.924 00:07:49.924 00:42:02 -- common/autotest_common.sh@931 -- # return 0 00:07:49.924 00:42:02 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:07:50.182 00:42:02 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:07:50.182 00:42:02 -- target/filesystem.sh@25 -- # sync 00:07:50.182 00:42:02 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:07:50.182 00:42:02 -- target/filesystem.sh@27 -- # sync 00:07:50.183 00:42:02 -- target/filesystem.sh@29 -- # i=0 00:07:50.183 00:42:02 -- target/filesystem.sh@30 -- # umount /mnt/device 00:07:50.183 00:42:02 -- target/filesystem.sh@37 -- # kill -0 72525 00:07:50.183 00:42:02 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:07:50.183 00:42:02 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:07:50.183 00:42:02 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:07:50.183 00:42:02 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:07:50.183 00:07:50.183 real 0m0.327s 00:07:50.183 user 0m0.018s 00:07:50.183 sys 0m0.068s 00:07:50.183 00:42:02 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:50.183 00:42:02 -- common/autotest_common.sh@10 -- # set +x 00:07:50.183 ************************************ 00:07:50.183 END TEST filesystem_btrfs 00:07:50.183 ************************************ 00:07:50.183 00:42:02 -- target/filesystem.sh@79 -- # run_test filesystem_xfs nvmf_filesystem_create xfs nvme0n1 00:07:50.183 00:42:02 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:07:50.183 00:42:02 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:50.183 00:42:02 -- common/autotest_common.sh@10 -- # set +x 00:07:50.183 ************************************ 00:07:50.183 START TEST filesystem_xfs 00:07:50.183 ************************************ 00:07:50.183 00:42:02 -- common/autotest_common.sh@1114 -- # nvmf_filesystem_create xfs nvme0n1 00:07:50.183 00:42:02 -- target/filesystem.sh@18 -- # fstype=xfs 00:07:50.183 00:42:02 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:07:50.183 00:42:02 -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:07:50.183 00:42:02 -- common/autotest_common.sh@912 -- # local fstype=xfs 00:07:50.183 00:42:02 -- common/autotest_common.sh@913 -- # local dev_name=/dev/nvme0n1p1 00:07:50.183 00:42:02 -- common/autotest_common.sh@914 -- # local i=0 00:07:50.183 00:42:02 -- common/autotest_common.sh@915 -- # local force 00:07:50.183 00:42:02 -- common/autotest_common.sh@917 -- # '[' xfs = ext4 ']' 00:07:50.183 00:42:02 -- common/autotest_common.sh@920 -- # force=-f 00:07:50.183 00:42:02 -- common/autotest_common.sh@923 -- # mkfs.xfs -f 
/dev/nvme0n1p1 00:07:50.183 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:07:50.183 = sectsz=512 attr=2, projid32bit=1 00:07:50.183 = crc=1 finobt=1, sparse=1, rmapbt=0 00:07:50.183 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:07:50.183 data = bsize=4096 blocks=130560, imaxpct=25 00:07:50.183 = sunit=0 swidth=0 blks 00:07:50.183 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:07:50.183 log =internal log bsize=4096 blocks=16384, version=2 00:07:50.183 = sectsz=512 sunit=0 blks, lazy-count=1 00:07:50.183 realtime =none extsz=4096 blocks=0, rtextents=0 00:07:51.119 Discarding blocks...Done. 00:07:51.119 00:42:03 -- common/autotest_common.sh@931 -- # return 0 00:07:51.119 00:42:03 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:07:53.650 00:42:05 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:07:53.650 00:42:05 -- target/filesystem.sh@25 -- # sync 00:07:53.650 00:42:05 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:07:53.650 00:42:05 -- target/filesystem.sh@27 -- # sync 00:07:53.650 00:42:05 -- target/filesystem.sh@29 -- # i=0 00:07:53.650 00:42:05 -- target/filesystem.sh@30 -- # umount /mnt/device 00:07:53.650 00:42:05 -- target/filesystem.sh@37 -- # kill -0 72525 00:07:53.650 00:42:05 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:07:53.650 00:42:05 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:07:53.650 00:42:05 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:07:53.650 00:42:05 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:07:53.650 00:07:53.650 real 0m3.183s 00:07:53.650 user 0m0.023s 00:07:53.650 sys 0m0.065s 00:07:53.650 00:42:05 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:53.650 00:42:05 -- common/autotest_common.sh@10 -- # set +x 00:07:53.650 ************************************ 00:07:53.650 END TEST filesystem_xfs 00:07:53.650 ************************************ 00:07:53.650 00:42:05 -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:07:53.650 00:42:05 -- target/filesystem.sh@93 -- # sync 00:07:53.650 00:42:05 -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:07:53.650 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:53.650 00:42:05 -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:07:53.650 00:42:05 -- common/autotest_common.sh@1208 -- # local i=0 00:07:53.650 00:42:05 -- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:07:53.650 00:42:05 -- common/autotest_common.sh@1209 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:53.650 00:42:05 -- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:07:53.650 00:42:05 -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:53.650 00:42:05 -- common/autotest_common.sh@1220 -- # return 0 00:07:53.650 00:42:05 -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:07:53.650 00:42:05 -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:53.650 00:42:05 -- common/autotest_common.sh@10 -- # set +x 00:07:53.650 00:42:05 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:53.650 00:42:05 -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:07:53.650 00:42:05 -- target/filesystem.sh@101 -- # killprocess 72525 00:07:53.650 00:42:05 -- common/autotest_common.sh@936 -- # '[' -z 72525 ']' 00:07:53.650 00:42:05 -- common/autotest_common.sh@940 -- # kill -0 72525 00:07:53.650 00:42:05 -- common/autotest_common.sh@941 -- # uname 00:07:53.650 00:42:05 -- 
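Once the three filesystems have passed, the first pass is torn down; in rough form:

    flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1      # drop the test partition under an advisory lock
    sync
    nvme disconnect -n nqn.2016-06.io.spdk:cnode1
    rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
    kill "$nvmfpid"; wait "$nvmfpid"                     # killprocess / wait for pid 72525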
common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:07:53.650 00:42:05 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 72525 00:07:53.650 00:42:05 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:07:53.650 00:42:05 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:07:53.650 killing process with pid 72525 00:07:53.650 00:42:05 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 72525' 00:07:53.650 00:42:05 -- common/autotest_common.sh@955 -- # kill 72525 00:07:53.650 00:42:05 -- common/autotest_common.sh@960 -- # wait 72525 00:07:54.218 00:42:06 -- target/filesystem.sh@102 -- # nvmfpid= 00:07:54.218 00:07:54.218 real 0m14.851s 00:07:54.218 user 0m57.270s 00:07:54.218 sys 0m1.755s 00:07:54.218 00:42:06 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:54.218 00:42:06 -- common/autotest_common.sh@10 -- # set +x 00:07:54.218 ************************************ 00:07:54.218 END TEST nvmf_filesystem_no_in_capsule 00:07:54.218 ************************************ 00:07:54.218 00:42:06 -- target/filesystem.sh@106 -- # run_test nvmf_filesystem_in_capsule nvmf_filesystem_part 4096 00:07:54.218 00:42:06 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:07:54.218 00:42:06 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:54.218 00:42:06 -- common/autotest_common.sh@10 -- # set +x 00:07:54.218 ************************************ 00:07:54.218 START TEST nvmf_filesystem_in_capsule 00:07:54.218 ************************************ 00:07:54.218 00:42:06 -- common/autotest_common.sh@1114 -- # nvmf_filesystem_part 4096 00:07:54.218 00:42:06 -- target/filesystem.sh@47 -- # in_capsule=4096 00:07:54.218 00:42:06 -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:07:54.218 00:42:06 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:07:54.218 00:42:06 -- common/autotest_common.sh@722 -- # xtrace_disable 00:07:54.218 00:42:06 -- common/autotest_common.sh@10 -- # set +x 00:07:54.218 00:42:06 -- nvmf/common.sh@469 -- # nvmfpid=72897 00:07:54.218 00:42:06 -- nvmf/common.sh@470 -- # waitforlisten 72897 00:07:54.218 00:42:06 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:07:54.218 00:42:06 -- common/autotest_common.sh@829 -- # '[' -z 72897 ']' 00:07:54.218 00:42:06 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:54.218 00:42:06 -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:54.218 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:54.218 00:42:06 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:54.218 00:42:06 -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:54.218 00:42:06 -- common/autotest_common.sh@10 -- # set +x 00:07:54.218 [2024-12-03 00:42:06.582976] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:07:54.218 [2024-12-03 00:42:06.583044] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:54.218 [2024-12-03 00:42:06.720953] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:54.478 [2024-12-03 00:42:06.786861] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:07:54.478 [2024-12-03 00:42:06.787026] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:54.478 [2024-12-03 00:42:06.787042] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:54.478 [2024-12-03 00:42:06.787051] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:54.478 [2024-12-03 00:42:06.787207] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:54.478 [2024-12-03 00:42:06.787321] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:54.478 [2024-12-03 00:42:06.787858] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:07:54.478 [2024-12-03 00:42:06.787906] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:55.046 00:42:07 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:55.046 00:42:07 -- common/autotest_common.sh@862 -- # return 0 00:07:55.046 00:42:07 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:07:55.046 00:42:07 -- common/autotest_common.sh@728 -- # xtrace_disable 00:07:55.046 00:42:07 -- common/autotest_common.sh@10 -- # set +x 00:07:55.046 00:42:07 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:55.046 00:42:07 -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:07:55.046 00:42:07 -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 4096 00:07:55.046 00:42:07 -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:55.046 00:42:07 -- common/autotest_common.sh@10 -- # set +x 00:07:55.046 [2024-12-03 00:42:07.549659] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:55.305 00:42:07 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:55.305 00:42:07 -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:07:55.305 00:42:07 -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:55.305 00:42:07 -- common/autotest_common.sh@10 -- # set +x 00:07:55.305 Malloc1 00:07:55.305 00:42:07 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:55.305 00:42:07 -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:07:55.305 00:42:07 -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:55.305 00:42:07 -- common/autotest_common.sh@10 -- # set +x 00:07:55.306 00:42:07 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:55.306 00:42:07 -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:07:55.306 00:42:07 -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:55.306 00:42:07 -- common/autotest_common.sh@10 -- # set +x 00:07:55.306 00:42:07 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:55.306 00:42:07 -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:55.306 00:42:07 -- 
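The in_capsule pass repeats the same flow end to end; the functional difference is the transport's in-capsule data size, which is 4096 here instead of 0. As a sketch of the only provisioning line that changes (the -c flag sets how many bytes of data may be carried inside the command capsule itself):

    rpc.py nvmf_create_transport -t tcp -o -u 8192 -c 0      # first pass, no in-capsule data
    rpc.py nvmf_create_transport -t tcp -o -u 8192 -c 4096   # this pass, up to 4 KiB in-capsule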
common/autotest_common.sh@561 -- # xtrace_disable 00:07:55.306 00:42:07 -- common/autotest_common.sh@10 -- # set +x 00:07:55.306 [2024-12-03 00:42:07.787442] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:55.306 00:42:07 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:55.306 00:42:07 -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:07:55.306 00:42:07 -- common/autotest_common.sh@1367 -- # local bdev_name=Malloc1 00:07:55.306 00:42:07 -- common/autotest_common.sh@1368 -- # local bdev_info 00:07:55.306 00:42:07 -- common/autotest_common.sh@1369 -- # local bs 00:07:55.306 00:42:07 -- common/autotest_common.sh@1370 -- # local nb 00:07:55.306 00:42:07 -- common/autotest_common.sh@1371 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:07:55.306 00:42:07 -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:55.306 00:42:07 -- common/autotest_common.sh@10 -- # set +x 00:07:55.306 00:42:07 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:55.306 00:42:07 -- common/autotest_common.sh@1371 -- # bdev_info='[ 00:07:55.306 { 00:07:55.306 "aliases": [ 00:07:55.306 "c0c04403-0896-4af6-b703-7fabf9b9ac16" 00:07:55.306 ], 00:07:55.306 "assigned_rate_limits": { 00:07:55.306 "r_mbytes_per_sec": 0, 00:07:55.306 "rw_ios_per_sec": 0, 00:07:55.306 "rw_mbytes_per_sec": 0, 00:07:55.306 "w_mbytes_per_sec": 0 00:07:55.306 }, 00:07:55.306 "block_size": 512, 00:07:55.306 "claim_type": "exclusive_write", 00:07:55.306 "claimed": true, 00:07:55.306 "driver_specific": {}, 00:07:55.306 "memory_domains": [ 00:07:55.306 { 00:07:55.306 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:55.306 "dma_device_type": 2 00:07:55.306 } 00:07:55.306 ], 00:07:55.306 "name": "Malloc1", 00:07:55.306 "num_blocks": 1048576, 00:07:55.306 "product_name": "Malloc disk", 00:07:55.306 "supported_io_types": { 00:07:55.306 "abort": true, 00:07:55.306 "compare": false, 00:07:55.306 "compare_and_write": false, 00:07:55.306 "flush": true, 00:07:55.306 "nvme_admin": false, 00:07:55.306 "nvme_io": false, 00:07:55.306 "read": true, 00:07:55.306 "reset": true, 00:07:55.306 "unmap": true, 00:07:55.306 "write": true, 00:07:55.306 "write_zeroes": true 00:07:55.306 }, 00:07:55.306 "uuid": "c0c04403-0896-4af6-b703-7fabf9b9ac16", 00:07:55.306 "zoned": false 00:07:55.306 } 00:07:55.306 ]' 00:07:55.306 00:42:07 -- common/autotest_common.sh@1372 -- # jq '.[] .block_size' 00:07:55.565 00:42:07 -- common/autotest_common.sh@1372 -- # bs=512 00:07:55.565 00:42:07 -- common/autotest_common.sh@1373 -- # jq '.[] .num_blocks' 00:07:55.565 00:42:07 -- common/autotest_common.sh@1373 -- # nb=1048576 00:07:55.565 00:42:07 -- common/autotest_common.sh@1376 -- # bdev_size=512 00:07:55.565 00:42:07 -- common/autotest_common.sh@1377 -- # echo 512 00:07:55.565 00:42:07 -- target/filesystem.sh@58 -- # malloc_size=536870912 00:07:55.565 00:42:07 -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:15939434-fa82-47b6-ae7c-b62ba203cee8 --hostid=15939434-fa82-47b6-ae7c-b62ba203cee8 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:07:55.825 00:42:08 -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:07:55.825 00:42:08 -- common/autotest_common.sh@1187 -- # local i=0 00:07:55.825 00:42:08 -- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 00:07:55.825 00:42:08 -- common/autotest_common.sh@1189 -- # [[ -n '' ]] 00:07:55.825 00:42:08 -- common/autotest_common.sh@1194 -- # sleep 2 00:07:57.730 00:42:10 -- 
common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 00:07:57.730 00:42:10 -- common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL 00:07:57.730 00:42:10 -- common/autotest_common.sh@1196 -- # grep -c SPDKISFASTANDAWESOME 00:07:57.730 00:42:10 -- common/autotest_common.sh@1196 -- # nvme_devices=1 00:07:57.730 00:42:10 -- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:07:57.730 00:42:10 -- common/autotest_common.sh@1197 -- # return 0 00:07:57.730 00:42:10 -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:07:57.730 00:42:10 -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:07:57.730 00:42:10 -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:07:57.730 00:42:10 -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:07:57.730 00:42:10 -- setup/common.sh@76 -- # local dev=nvme0n1 00:07:57.730 00:42:10 -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:07:57.730 00:42:10 -- setup/common.sh@80 -- # echo 536870912 00:07:57.730 00:42:10 -- target/filesystem.sh@64 -- # nvme_size=536870912 00:07:57.730 00:42:10 -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:07:57.730 00:42:10 -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:07:57.730 00:42:10 -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:07:57.730 00:42:10 -- target/filesystem.sh@69 -- # partprobe 00:07:57.730 00:42:10 -- target/filesystem.sh@70 -- # sleep 1 00:07:59.106 00:42:11 -- target/filesystem.sh@76 -- # '[' 4096 -eq 0 ']' 00:07:59.106 00:42:11 -- target/filesystem.sh@81 -- # run_test filesystem_in_capsule_ext4 nvmf_filesystem_create ext4 nvme0n1 00:07:59.106 00:42:11 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:07:59.106 00:42:11 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:59.106 00:42:11 -- common/autotest_common.sh@10 -- # set +x 00:07:59.106 ************************************ 00:07:59.106 START TEST filesystem_in_capsule_ext4 00:07:59.106 ************************************ 00:07:59.106 00:42:11 -- common/autotest_common.sh@1114 -- # nvmf_filesystem_create ext4 nvme0n1 00:07:59.106 00:42:11 -- target/filesystem.sh@18 -- # fstype=ext4 00:07:59.106 00:42:11 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:07:59.106 00:42:11 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:07:59.106 00:42:11 -- common/autotest_common.sh@912 -- # local fstype=ext4 00:07:59.106 00:42:11 -- common/autotest_common.sh@913 -- # local dev_name=/dev/nvme0n1p1 00:07:59.106 00:42:11 -- common/autotest_common.sh@914 -- # local i=0 00:07:59.106 00:42:11 -- common/autotest_common.sh@915 -- # local force 00:07:59.106 00:42:11 -- common/autotest_common.sh@917 -- # '[' ext4 = ext4 ']' 00:07:59.106 00:42:11 -- common/autotest_common.sh@918 -- # force=-F 00:07:59.106 00:42:11 -- common/autotest_common.sh@923 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:07:59.106 mke2fs 1.47.0 (5-Feb-2023) 00:07:59.106 Discarding device blocks: 0/522240 done 00:07:59.106 Creating filesystem with 522240 1k blocks and 130560 inodes 00:07:59.106 Filesystem UUID: 3fd6d7a1-754a-43fe-b0d7-391e352c301b 00:07:59.106 Superblock backups stored on blocks: 00:07:59.106 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:07:59.106 00:07:59.106 Allocating group tables: 0/64 done 00:07:59.106 Writing inode tables: 0/64 done 00:07:59.106 Creating journal (8192 blocks): done 00:07:59.106 Writing superblocks and filesystem accounting information: 0/64 done 00:07:59.106 00:07:59.106 00:42:11 
-- common/autotest_common.sh@931 -- # return 0 00:07:59.106 00:42:11 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:08:04.405 00:42:16 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:08:04.405 00:42:16 -- target/filesystem.sh@25 -- # sync 00:08:04.405 00:42:16 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:08:04.405 00:42:16 -- target/filesystem.sh@27 -- # sync 00:08:04.405 00:42:16 -- target/filesystem.sh@29 -- # i=0 00:08:04.405 00:42:16 -- target/filesystem.sh@30 -- # umount /mnt/device 00:08:04.405 00:42:16 -- target/filesystem.sh@37 -- # kill -0 72897 00:08:04.405 00:42:16 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:08:04.405 00:42:16 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:08:04.405 00:42:16 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:08:04.405 00:42:16 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:08:04.405 ************************************ 00:08:04.405 END TEST filesystem_in_capsule_ext4 00:08:04.405 ************************************ 00:08:04.405 00:08:04.405 real 0m5.559s 00:08:04.405 user 0m0.028s 00:08:04.405 sys 0m0.062s 00:08:04.405 00:42:16 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:04.405 00:42:16 -- common/autotest_common.sh@10 -- # set +x 00:08:04.405 00:42:16 -- target/filesystem.sh@82 -- # run_test filesystem_in_capsule_btrfs nvmf_filesystem_create btrfs nvme0n1 00:08:04.405 00:42:16 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:08:04.405 00:42:16 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:04.405 00:42:16 -- common/autotest_common.sh@10 -- # set +x 00:08:04.405 ************************************ 00:08:04.405 START TEST filesystem_in_capsule_btrfs 00:08:04.405 ************************************ 00:08:04.405 00:42:16 -- common/autotest_common.sh@1114 -- # nvmf_filesystem_create btrfs nvme0n1 00:08:04.405 00:42:16 -- target/filesystem.sh@18 -- # fstype=btrfs 00:08:04.405 00:42:16 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:08:04.405 00:42:16 -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:08:04.405 00:42:16 -- common/autotest_common.sh@912 -- # local fstype=btrfs 00:08:04.405 00:42:16 -- common/autotest_common.sh@913 -- # local dev_name=/dev/nvme0n1p1 00:08:04.405 00:42:16 -- common/autotest_common.sh@914 -- # local i=0 00:08:04.405 00:42:16 -- common/autotest_common.sh@915 -- # local force 00:08:04.405 00:42:16 -- common/autotest_common.sh@917 -- # '[' btrfs = ext4 ']' 00:08:04.405 00:42:16 -- common/autotest_common.sh@920 -- # force=-f 00:08:04.405 00:42:16 -- common/autotest_common.sh@923 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:08:04.664 btrfs-progs v6.8.1 00:08:04.664 See https://btrfs.readthedocs.io for more information. 00:08:04.664 00:08:04.664 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 
00:08:04.664 NOTE: several default settings have changed in version 5.15, please make sure 00:08:04.664 this does not affect your deployments: 00:08:04.664 - DUP for metadata (-m dup) 00:08:04.664 - enabled no-holes (-O no-holes) 00:08:04.664 - enabled free-space-tree (-R free-space-tree) 00:08:04.664 00:08:04.664 Label: (null) 00:08:04.664 UUID: c04d616a-c1bf-4211-b9e6-bc30b6386d27 00:08:04.664 Node size: 16384 00:08:04.664 Sector size: 4096 (CPU page size: 4096) 00:08:04.664 Filesystem size: 510.00MiB 00:08:04.664 Block group profiles: 00:08:04.664 Data: single 8.00MiB 00:08:04.664 Metadata: DUP 32.00MiB 00:08:04.664 System: DUP 8.00MiB 00:08:04.664 SSD detected: yes 00:08:04.664 Zoned device: no 00:08:04.664 Features: extref, skinny-metadata, no-holes, free-space-tree 00:08:04.664 Checksum: crc32c 00:08:04.664 Number of devices: 1 00:08:04.664 Devices: 00:08:04.664 ID SIZE PATH 00:08:04.664 1 510.00MiB /dev/nvme0n1p1 00:08:04.664 00:08:04.664 00:42:16 -- common/autotest_common.sh@931 -- # return 0 00:08:04.664 00:42:16 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:08:04.664 00:42:17 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:08:04.664 00:42:17 -- target/filesystem.sh@25 -- # sync 00:08:04.664 00:42:17 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:08:04.664 00:42:17 -- target/filesystem.sh@27 -- # sync 00:08:04.664 00:42:17 -- target/filesystem.sh@29 -- # i=0 00:08:04.664 00:42:17 -- target/filesystem.sh@30 -- # umount /mnt/device 00:08:04.664 00:42:17 -- target/filesystem.sh@37 -- # kill -0 72897 00:08:04.664 00:42:17 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:08:04.664 00:42:17 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:08:04.664 00:42:17 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:08:04.664 00:42:17 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:08:04.664 ************************************ 00:08:04.664 END TEST filesystem_in_capsule_btrfs 00:08:04.664 ************************************ 00:08:04.664 00:08:04.664 real 0m0.228s 00:08:04.664 user 0m0.020s 00:08:04.664 sys 0m0.062s 00:08:04.664 00:42:17 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:04.664 00:42:17 -- common/autotest_common.sh@10 -- # set +x 00:08:04.664 00:42:17 -- target/filesystem.sh@83 -- # run_test filesystem_in_capsule_xfs nvmf_filesystem_create xfs nvme0n1 00:08:04.664 00:42:17 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:08:04.664 00:42:17 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:04.664 00:42:17 -- common/autotest_common.sh@10 -- # set +x 00:08:04.664 ************************************ 00:08:04.664 START TEST filesystem_in_capsule_xfs 00:08:04.664 ************************************ 00:08:04.664 00:42:17 -- common/autotest_common.sh@1114 -- # nvmf_filesystem_create xfs nvme0n1 00:08:04.664 00:42:17 -- target/filesystem.sh@18 -- # fstype=xfs 00:08:04.664 00:42:17 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:08:04.664 00:42:17 -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:08:04.664 00:42:17 -- common/autotest_common.sh@912 -- # local fstype=xfs 00:08:04.664 00:42:17 -- common/autotest_common.sh@913 -- # local dev_name=/dev/nvme0n1p1 00:08:04.664 00:42:17 -- common/autotest_common.sh@914 -- # local i=0 00:08:04.664 00:42:17 -- common/autotest_common.sh@915 -- # local force 00:08:04.664 00:42:17 -- common/autotest_common.sh@917 -- # '[' xfs = ext4 ']' 00:08:04.664 00:42:17 -- common/autotest_common.sh@920 -- # force=-f 00:08:04.664 00:42:17 -- 
common/autotest_common.sh@923 -- # mkfs.xfs -f /dev/nvme0n1p1 00:08:04.923 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:08:04.923 = sectsz=512 attr=2, projid32bit=1 00:08:04.923 = crc=1 finobt=1, sparse=1, rmapbt=0 00:08:04.923 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:08:04.923 data = bsize=4096 blocks=130560, imaxpct=25 00:08:04.923 = sunit=0 swidth=0 blks 00:08:04.923 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:08:04.923 log =internal log bsize=4096 blocks=16384, version=2 00:08:04.923 = sectsz=512 sunit=0 blks, lazy-count=1 00:08:04.923 realtime =none extsz=4096 blocks=0, rtextents=0 00:08:05.491 Discarding blocks...Done. 00:08:05.491 00:42:17 -- common/autotest_common.sh@931 -- # return 0 00:08:05.491 00:42:17 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:08:07.395 00:42:19 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:08:07.395 00:42:19 -- target/filesystem.sh@25 -- # sync 00:08:07.395 00:42:19 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:08:07.395 00:42:19 -- target/filesystem.sh@27 -- # sync 00:08:07.395 00:42:19 -- target/filesystem.sh@29 -- # i=0 00:08:07.395 00:42:19 -- target/filesystem.sh@30 -- # umount /mnt/device 00:08:07.395 00:42:19 -- target/filesystem.sh@37 -- # kill -0 72897 00:08:07.395 00:42:19 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:08:07.395 00:42:19 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:08:07.395 00:42:19 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:08:07.395 00:42:19 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:08:07.395 ************************************ 00:08:07.395 END TEST filesystem_in_capsule_xfs 00:08:07.395 ************************************ 00:08:07.395 00:08:07.395 real 0m2.658s 00:08:07.395 user 0m0.026s 00:08:07.395 sys 0m0.054s 00:08:07.395 00:42:19 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:07.395 00:42:19 -- common/autotest_common.sh@10 -- # set +x 00:08:07.395 00:42:19 -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:08:07.395 00:42:19 -- target/filesystem.sh@93 -- # sync 00:08:07.395 00:42:19 -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:08:07.654 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:07.654 00:42:19 -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:08:07.654 00:42:19 -- common/autotest_common.sh@1208 -- # local i=0 00:08:07.654 00:42:19 -- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:08:07.654 00:42:19 -- common/autotest_common.sh@1209 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:07.654 00:42:20 -- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:08:07.654 00:42:20 -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:07.654 00:42:20 -- common/autotest_common.sh@1220 -- # return 0 00:08:07.654 00:42:20 -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:07.654 00:42:20 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:07.654 00:42:20 -- common/autotest_common.sh@10 -- # set +x 00:08:07.654 00:42:20 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:07.654 00:42:20 -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:08:07.654 00:42:20 -- target/filesystem.sh@101 -- # killprocess 72897 00:08:07.654 00:42:20 -- common/autotest_common.sh@936 -- # '[' -z 72897 ']' 00:08:07.654 00:42:20 -- common/autotest_common.sh@940 -- # kill -0 72897 00:08:07.654 00:42:20 -- 
common/autotest_common.sh@941 -- # uname 00:08:07.654 00:42:20 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:08:07.654 00:42:20 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 72897 00:08:07.654 killing process with pid 72897 00:08:07.654 00:42:20 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:08:07.654 00:42:20 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:08:07.654 00:42:20 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 72897' 00:08:07.654 00:42:20 -- common/autotest_common.sh@955 -- # kill 72897 00:08:07.654 00:42:20 -- common/autotest_common.sh@960 -- # wait 72897 00:08:08.221 ************************************ 00:08:08.221 END TEST nvmf_filesystem_in_capsule 00:08:08.221 ************************************ 00:08:08.222 00:42:20 -- target/filesystem.sh@102 -- # nvmfpid= 00:08:08.222 00:08:08.222 real 0m14.068s 00:08:08.222 user 0m54.291s 00:08:08.222 sys 0m1.595s 00:08:08.222 00:42:20 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:08.222 00:42:20 -- common/autotest_common.sh@10 -- # set +x 00:08:08.222 00:42:20 -- target/filesystem.sh@108 -- # nvmftestfini 00:08:08.222 00:42:20 -- nvmf/common.sh@476 -- # nvmfcleanup 00:08:08.222 00:42:20 -- nvmf/common.sh@116 -- # sync 00:08:08.222 00:42:20 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:08:08.222 00:42:20 -- nvmf/common.sh@119 -- # set +e 00:08:08.222 00:42:20 -- nvmf/common.sh@120 -- # for i in {1..20} 00:08:08.222 00:42:20 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:08:08.222 rmmod nvme_tcp 00:08:08.222 rmmod nvme_fabrics 00:08:08.222 rmmod nvme_keyring 00:08:08.222 00:42:20 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:08:08.222 00:42:20 -- nvmf/common.sh@123 -- # set -e 00:08:08.222 00:42:20 -- nvmf/common.sh@124 -- # return 0 00:08:08.222 00:42:20 -- nvmf/common.sh@477 -- # '[' -n '' ']' 00:08:08.222 00:42:20 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:08:08.222 00:42:20 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:08:08.222 00:42:20 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:08:08.222 00:42:20 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:08:08.222 00:42:20 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:08:08.222 00:42:20 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:08.222 00:42:20 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:08.222 00:42:20 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:08.480 00:42:20 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:08:08.480 ************************************ 00:08:08.480 END TEST nvmf_filesystem 00:08:08.480 ************************************ 00:08:08.480 00:08:08.480 real 0m29.918s 00:08:08.480 user 1m51.930s 00:08:08.480 sys 0m3.775s 00:08:08.480 00:42:20 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:08.480 00:42:20 -- common/autotest_common.sh@10 -- # set +x 00:08:08.480 00:42:20 -- nvmf/nvmf.sh@25 -- # run_test nvmf_discovery /home/vagrant/spdk_repo/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:08:08.480 00:42:20 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:08:08.480 00:42:20 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:08.480 00:42:20 -- common/autotest_common.sh@10 -- # set +x 00:08:08.480 ************************************ 00:08:08.480 START TEST nvmf_discovery 00:08:08.480 ************************************ 00:08:08.480 00:42:20 -- common/autotest_common.sh@1114 -- # 
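The closing nvmftestfini above unloads the host-side drivers and removes the test network before the next test script starts; approximately as follows, where deleting the namespace is what remove_spdk_ns is assumed to do:

    modprobe -v -r nvme-tcp
    modprobe -v -r nvme-fabrics
    ip netns delete nvmf_tgt_ns_spdk    # remove_spdk_ns, assumed behaviour
    ip -4 addr flush nvmf_init_if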
/home/vagrant/spdk_repo/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:08:08.480 * Looking for test storage... 00:08:08.480 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:08:08.480 00:42:20 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:08:08.480 00:42:20 -- common/autotest_common.sh@1690 -- # lcov --version 00:08:08.480 00:42:20 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:08:08.738 00:42:20 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:08:08.738 00:42:20 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:08:08.738 00:42:20 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:08:08.738 00:42:21 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:08:08.738 00:42:21 -- scripts/common.sh@335 -- # IFS=.-: 00:08:08.738 00:42:21 -- scripts/common.sh@335 -- # read -ra ver1 00:08:08.738 00:42:21 -- scripts/common.sh@336 -- # IFS=.-: 00:08:08.738 00:42:21 -- scripts/common.sh@336 -- # read -ra ver2 00:08:08.738 00:42:21 -- scripts/common.sh@337 -- # local 'op=<' 00:08:08.738 00:42:21 -- scripts/common.sh@339 -- # ver1_l=2 00:08:08.738 00:42:21 -- scripts/common.sh@340 -- # ver2_l=1 00:08:08.738 00:42:21 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:08:08.738 00:42:21 -- scripts/common.sh@343 -- # case "$op" in 00:08:08.738 00:42:21 -- scripts/common.sh@344 -- # : 1 00:08:08.738 00:42:21 -- scripts/common.sh@363 -- # (( v = 0 )) 00:08:08.738 00:42:21 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:08:08.738 00:42:21 -- scripts/common.sh@364 -- # decimal 1 00:08:08.738 00:42:21 -- scripts/common.sh@352 -- # local d=1 00:08:08.738 00:42:21 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:08.738 00:42:21 -- scripts/common.sh@354 -- # echo 1 00:08:08.738 00:42:21 -- scripts/common.sh@364 -- # ver1[v]=1 00:08:08.738 00:42:21 -- scripts/common.sh@365 -- # decimal 2 00:08:08.738 00:42:21 -- scripts/common.sh@352 -- # local d=2 00:08:08.738 00:42:21 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:08.738 00:42:21 -- scripts/common.sh@354 -- # echo 2 00:08:08.738 00:42:21 -- scripts/common.sh@365 -- # ver2[v]=2 00:08:08.738 00:42:21 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:08:08.738 00:42:21 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:08:08.738 00:42:21 -- scripts/common.sh@367 -- # return 0 00:08:08.738 00:42:21 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:08.738 00:42:21 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:08:08.738 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:08.738 --rc genhtml_branch_coverage=1 00:08:08.738 --rc genhtml_function_coverage=1 00:08:08.738 --rc genhtml_legend=1 00:08:08.738 --rc geninfo_all_blocks=1 00:08:08.738 --rc geninfo_unexecuted_blocks=1 00:08:08.738 00:08:08.738 ' 00:08:08.738 00:42:21 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:08:08.738 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:08.738 --rc genhtml_branch_coverage=1 00:08:08.738 --rc genhtml_function_coverage=1 00:08:08.738 --rc genhtml_legend=1 00:08:08.738 --rc geninfo_all_blocks=1 00:08:08.738 --rc geninfo_unexecuted_blocks=1 00:08:08.738 00:08:08.738 ' 00:08:08.738 00:42:21 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:08:08.738 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:08.738 --rc genhtml_branch_coverage=1 00:08:08.738 --rc genhtml_function_coverage=1 00:08:08.738 --rc genhtml_legend=1 00:08:08.738 
--rc geninfo_all_blocks=1 00:08:08.738 --rc geninfo_unexecuted_blocks=1 00:08:08.738 00:08:08.738 ' 00:08:08.738 00:42:21 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:08:08.738 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:08.738 --rc genhtml_branch_coverage=1 00:08:08.738 --rc genhtml_function_coverage=1 00:08:08.738 --rc genhtml_legend=1 00:08:08.738 --rc geninfo_all_blocks=1 00:08:08.738 --rc geninfo_unexecuted_blocks=1 00:08:08.738 00:08:08.738 ' 00:08:08.738 00:42:21 -- target/discovery.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:08:08.738 00:42:21 -- nvmf/common.sh@7 -- # uname -s 00:08:08.738 00:42:21 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:08.738 00:42:21 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:08.738 00:42:21 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:08.738 00:42:21 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:08.738 00:42:21 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:08.738 00:42:21 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:08.738 00:42:21 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:08.738 00:42:21 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:08.738 00:42:21 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:08.738 00:42:21 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:08.738 00:42:21 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:15939434-fa82-47b6-ae7c-b62ba203cee8 00:08:08.738 00:42:21 -- nvmf/common.sh@18 -- # NVME_HOSTID=15939434-fa82-47b6-ae7c-b62ba203cee8 00:08:08.738 00:42:21 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:08.738 00:42:21 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:08.738 00:42:21 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:08:08.738 00:42:21 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:08:08.738 00:42:21 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:08.738 00:42:21 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:08.738 00:42:21 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:08.738 00:42:21 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:08.738 00:42:21 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:08.738 00:42:21 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:08.738 00:42:21 -- paths/export.sh@5 -- # export PATH 00:08:08.738 00:42:21 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:08.738 00:42:21 -- nvmf/common.sh@46 -- # : 0 00:08:08.738 00:42:21 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:08:08.738 00:42:21 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:08:08.738 00:42:21 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:08:08.738 00:42:21 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:08.738 00:42:21 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:08.738 00:42:21 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:08:08.738 00:42:21 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:08:08.738 00:42:21 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:08:08.738 00:42:21 -- target/discovery.sh@11 -- # NULL_BDEV_SIZE=102400 00:08:08.738 00:42:21 -- target/discovery.sh@12 -- # NULL_BLOCK_SIZE=512 00:08:08.738 00:42:21 -- target/discovery.sh@13 -- # NVMF_PORT_REFERRAL=4430 00:08:08.738 00:42:21 -- target/discovery.sh@15 -- # hash nvme 00:08:08.738 00:42:21 -- target/discovery.sh@20 -- # nvmftestinit 00:08:08.738 00:42:21 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:08:08.738 00:42:21 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:08.738 00:42:21 -- nvmf/common.sh@436 -- # prepare_net_devs 00:08:08.738 00:42:21 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:08:08.738 00:42:21 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:08:08.738 00:42:21 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:08.738 00:42:21 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:08.738 00:42:21 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:08.738 00:42:21 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:08:08.738 00:42:21 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:08:08.738 00:42:21 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:08:08.738 00:42:21 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:08:08.738 00:42:21 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:08:08.738 00:42:21 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:08:08.738 00:42:21 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:08.738 00:42:21 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:08.738 00:42:21 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:08:08.738 00:42:21 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:08:08.738 00:42:21 -- nvmf/common.sh@144 -- # 
NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:08:08.738 00:42:21 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:08:08.738 00:42:21 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:08:08.738 00:42:21 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:08.738 00:42:21 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:08:08.738 00:42:21 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:08:08.738 00:42:21 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:08:08.738 00:42:21 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:08:08.738 00:42:21 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:08:08.738 00:42:21 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:08:08.738 Cannot find device "nvmf_tgt_br" 00:08:08.738 00:42:21 -- nvmf/common.sh@154 -- # true 00:08:08.738 00:42:21 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:08:08.738 Cannot find device "nvmf_tgt_br2" 00:08:08.738 00:42:21 -- nvmf/common.sh@155 -- # true 00:08:08.738 00:42:21 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:08:08.738 00:42:21 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:08:08.738 Cannot find device "nvmf_tgt_br" 00:08:08.738 00:42:21 -- nvmf/common.sh@157 -- # true 00:08:08.738 00:42:21 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:08:08.738 Cannot find device "nvmf_tgt_br2" 00:08:08.738 00:42:21 -- nvmf/common.sh@158 -- # true 00:08:08.738 00:42:21 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:08:08.738 00:42:21 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:08:08.738 00:42:21 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:08:08.738 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:08:08.738 00:42:21 -- nvmf/common.sh@161 -- # true 00:08:08.738 00:42:21 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:08:08.738 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:08:08.738 00:42:21 -- nvmf/common.sh@162 -- # true 00:08:08.738 00:42:21 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:08:08.738 00:42:21 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:08:08.738 00:42:21 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:08:08.738 00:42:21 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:08:08.739 00:42:21 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:08:08.739 00:42:21 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:08:08.739 00:42:21 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:08:08.739 00:42:21 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:08:08.739 00:42:21 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:08:08.996 00:42:21 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:08:08.996 00:42:21 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:08:08.996 00:42:21 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:08:08.996 00:42:21 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:08:08.996 00:42:21 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:08:08.996 00:42:21 
-- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:08:08.996 00:42:21 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:08:08.996 00:42:21 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:08:08.996 00:42:21 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:08:08.996 00:42:21 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:08:08.996 00:42:21 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:08:08.996 00:42:21 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:08:08.996 00:42:21 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:08:08.996 00:42:21 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:08:08.996 00:42:21 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:08:08.996 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:08.996 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.058 ms 00:08:08.996 00:08:08.996 --- 10.0.0.2 ping statistics --- 00:08:08.996 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:08.996 rtt min/avg/max/mdev = 0.058/0.058/0.058/0.000 ms 00:08:08.996 00:42:21 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:08:08.996 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:08:08.996 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.065 ms 00:08:08.996 00:08:08.996 --- 10.0.0.3 ping statistics --- 00:08:08.996 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:08.996 rtt min/avg/max/mdev = 0.065/0.065/0.065/0.000 ms 00:08:08.996 00:42:21 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:08:08.996 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:08:08.996 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.030 ms 00:08:08.996 00:08:08.997 --- 10.0.0.1 ping statistics --- 00:08:08.997 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:08.997 rtt min/avg/max/mdev = 0.030/0.030/0.030/0.000 ms 00:08:08.997 00:42:21 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:08.997 00:42:21 -- nvmf/common.sh@421 -- # return 0 00:08:08.997 00:42:21 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:08:08.997 00:42:21 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:08.997 00:42:21 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:08:08.997 00:42:21 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:08:08.997 00:42:21 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:08.997 00:42:21 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:08:08.997 00:42:21 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:08:08.997 00:42:21 -- target/discovery.sh@21 -- # nvmfappstart -m 0xF 00:08:08.997 00:42:21 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:08:08.997 00:42:21 -- common/autotest_common.sh@722 -- # xtrace_disable 00:08:08.997 00:42:21 -- common/autotest_common.sh@10 -- # set +x 00:08:08.997 00:42:21 -- nvmf/common.sh@469 -- # nvmfpid=73443 00:08:08.997 00:42:21 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:08:08.997 00:42:21 -- nvmf/common.sh@470 -- # waitforlisten 73443 00:08:08.997 00:42:21 -- common/autotest_common.sh@829 -- # '[' -z 73443 ']' 00:08:08.997 00:42:21 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:08.997 00:42:21 -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:08.997 00:42:21 -- 
common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:08.997 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:08.997 00:42:21 -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:08.997 00:42:21 -- common/autotest_common.sh@10 -- # set +x 00:08:08.997 [2024-12-03 00:42:21.457992] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:08:08.997 [2024-12-03 00:42:21.458091] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:09.255 [2024-12-03 00:42:21.600651] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:09.255 [2024-12-03 00:42:21.672329] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:08:09.255 [2024-12-03 00:42:21.673062] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:09.255 [2024-12-03 00:42:21.673266] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:09.255 [2024-12-03 00:42:21.673548] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:09.255 [2024-12-03 00:42:21.673881] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:08:09.255 [2024-12-03 00:42:21.674052] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:08:09.255 [2024-12-03 00:42:21.674626] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:08:09.255 [2024-12-03 00:42:21.674641] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:10.193 00:42:22 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:10.193 00:42:22 -- common/autotest_common.sh@862 -- # return 0 00:08:10.193 00:42:22 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:08:10.193 00:42:22 -- common/autotest_common.sh@728 -- # xtrace_disable 00:08:10.193 00:42:22 -- common/autotest_common.sh@10 -- # set +x 00:08:10.193 00:42:22 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:10.193 00:42:22 -- target/discovery.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:08:10.193 00:42:22 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:10.193 00:42:22 -- common/autotest_common.sh@10 -- # set +x 00:08:10.193 [2024-12-03 00:42:22.545099] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:10.193 00:42:22 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:10.193 00:42:22 -- target/discovery.sh@26 -- # seq 1 4 00:08:10.193 00:42:22 -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:08:10.193 00:42:22 -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null1 102400 512 00:08:10.193 00:42:22 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:10.193 00:42:22 -- common/autotest_common.sh@10 -- # set +x 00:08:10.193 Null1 00:08:10.193 00:42:22 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:10.193 00:42:22 -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:08:10.193 00:42:22 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:10.193 00:42:22 -- common/autotest_common.sh@10 -- # set +x 00:08:10.193 00:42:22 -- common/autotest_common.sh@589 -- # [[ 0 == 
0 ]] 00:08:10.193 00:42:22 -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Null1 00:08:10.193 00:42:22 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:10.193 00:42:22 -- common/autotest_common.sh@10 -- # set +x 00:08:10.193 00:42:22 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:10.193 00:42:22 -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:10.193 00:42:22 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:10.193 00:42:22 -- common/autotest_common.sh@10 -- # set +x 00:08:10.193 [2024-12-03 00:42:22.600475] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:10.193 00:42:22 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:10.193 00:42:22 -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:08:10.193 00:42:22 -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null2 102400 512 00:08:10.193 00:42:22 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:10.193 00:42:22 -- common/autotest_common.sh@10 -- # set +x 00:08:10.193 Null2 00:08:10.193 00:42:22 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:10.193 00:42:22 -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:08:10.193 00:42:22 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:10.193 00:42:22 -- common/autotest_common.sh@10 -- # set +x 00:08:10.193 00:42:22 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:10.193 00:42:22 -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Null2 00:08:10.193 00:42:22 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:10.193 00:42:22 -- common/autotest_common.sh@10 -- # set +x 00:08:10.193 00:42:22 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:10.193 00:42:22 -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:08:10.193 00:42:22 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:10.193 00:42:22 -- common/autotest_common.sh@10 -- # set +x 00:08:10.193 00:42:22 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:10.193 00:42:22 -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:08:10.193 00:42:22 -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null3 102400 512 00:08:10.193 00:42:22 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:10.193 00:42:22 -- common/autotest_common.sh@10 -- # set +x 00:08:10.193 Null3 00:08:10.193 00:42:22 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:10.193 00:42:22 -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000003 00:08:10.193 00:42:22 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:10.193 00:42:22 -- common/autotest_common.sh@10 -- # set +x 00:08:10.193 00:42:22 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:10.193 00:42:22 -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Null3 00:08:10.193 00:42:22 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:10.193 00:42:22 -- common/autotest_common.sh@10 -- # set +x 00:08:10.193 00:42:22 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:10.193 00:42:22 -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:08:10.193 00:42:22 -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:08:10.193 00:42:22 -- common/autotest_common.sh@10 -- # set +x 00:08:10.193 00:42:22 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:10.193 00:42:22 -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:08:10.193 00:42:22 -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null4 102400 512 00:08:10.193 00:42:22 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:10.193 00:42:22 -- common/autotest_common.sh@10 -- # set +x 00:08:10.193 Null4 00:08:10.193 00:42:22 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:10.193 00:42:22 -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK00000000000004 00:08:10.193 00:42:22 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:10.193 00:42:22 -- common/autotest_common.sh@10 -- # set +x 00:08:10.193 00:42:22 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:10.193 00:42:22 -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Null4 00:08:10.194 00:42:22 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:10.194 00:42:22 -- common/autotest_common.sh@10 -- # set +x 00:08:10.194 00:42:22 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:10.194 00:42:22 -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t tcp -a 10.0.0.2 -s 4420 00:08:10.194 00:42:22 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:10.194 00:42:22 -- common/autotest_common.sh@10 -- # set +x 00:08:10.194 00:42:22 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:10.194 00:42:22 -- target/discovery.sh@32 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:08:10.194 00:42:22 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:10.194 00:42:22 -- common/autotest_common.sh@10 -- # set +x 00:08:10.453 00:42:22 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:10.453 00:42:22 -- target/discovery.sh@35 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 10.0.0.2 -s 4430 00:08:10.453 00:42:22 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:10.453 00:42:22 -- common/autotest_common.sh@10 -- # set +x 00:08:10.453 00:42:22 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:10.453 00:42:22 -- target/discovery.sh@37 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:15939434-fa82-47b6-ae7c-b62ba203cee8 --hostid=15939434-fa82-47b6-ae7c-b62ba203cee8 -t tcp -a 10.0.0.2 -s 4420 00:08:10.453 00:08:10.453 Discovery Log Number of Records 6, Generation counter 6 00:08:10.453 =====Discovery Log Entry 0====== 00:08:10.453 trtype: tcp 00:08:10.453 adrfam: ipv4 00:08:10.453 subtype: current discovery subsystem 00:08:10.453 treq: not required 00:08:10.453 portid: 0 00:08:10.453 trsvcid: 4420 00:08:10.453 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:08:10.453 traddr: 10.0.0.2 00:08:10.453 eflags: explicit discovery connections, duplicate discovery information 00:08:10.453 sectype: none 00:08:10.453 =====Discovery Log Entry 1====== 00:08:10.453 trtype: tcp 00:08:10.453 adrfam: ipv4 00:08:10.453 subtype: nvme subsystem 00:08:10.453 treq: not required 00:08:10.453 portid: 0 00:08:10.453 trsvcid: 4420 00:08:10.453 subnqn: nqn.2016-06.io.spdk:cnode1 00:08:10.453 traddr: 10.0.0.2 00:08:10.453 eflags: none 00:08:10.453 sectype: none 00:08:10.453 =====Discovery Log Entry 2====== 00:08:10.453 trtype: tcp 00:08:10.453 adrfam: ipv4 00:08:10.453 subtype: nvme subsystem 00:08:10.453 treq: not required 00:08:10.453 portid: 0 00:08:10.453 trsvcid: 4420 
00:08:10.453 subnqn: nqn.2016-06.io.spdk:cnode2 00:08:10.453 traddr: 10.0.0.2 00:08:10.453 eflags: none 00:08:10.453 sectype: none 00:08:10.453 =====Discovery Log Entry 3====== 00:08:10.453 trtype: tcp 00:08:10.453 adrfam: ipv4 00:08:10.453 subtype: nvme subsystem 00:08:10.453 treq: not required 00:08:10.453 portid: 0 00:08:10.453 trsvcid: 4420 00:08:10.453 subnqn: nqn.2016-06.io.spdk:cnode3 00:08:10.453 traddr: 10.0.0.2 00:08:10.453 eflags: none 00:08:10.453 sectype: none 00:08:10.453 =====Discovery Log Entry 4====== 00:08:10.453 trtype: tcp 00:08:10.453 adrfam: ipv4 00:08:10.453 subtype: nvme subsystem 00:08:10.453 treq: not required 00:08:10.453 portid: 0 00:08:10.453 trsvcid: 4420 00:08:10.453 subnqn: nqn.2016-06.io.spdk:cnode4 00:08:10.453 traddr: 10.0.0.2 00:08:10.453 eflags: none 00:08:10.453 sectype: none 00:08:10.453 =====Discovery Log Entry 5====== 00:08:10.453 trtype: tcp 00:08:10.453 adrfam: ipv4 00:08:10.453 subtype: discovery subsystem referral 00:08:10.453 treq: not required 00:08:10.453 portid: 0 00:08:10.453 trsvcid: 4430 00:08:10.453 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:08:10.453 traddr: 10.0.0.2 00:08:10.453 eflags: none 00:08:10.453 sectype: none 00:08:10.453 Perform nvmf subsystem discovery via RPC 00:08:10.453 00:42:22 -- target/discovery.sh@39 -- # echo 'Perform nvmf subsystem discovery via RPC' 00:08:10.453 00:42:22 -- target/discovery.sh@40 -- # rpc_cmd nvmf_get_subsystems 00:08:10.453 00:42:22 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:10.453 00:42:22 -- common/autotest_common.sh@10 -- # set +x 00:08:10.453 [2024-12-03 00:42:22.828532] nvmf_rpc.c: 275:rpc_nvmf_get_subsystems: *WARNING*: rpc_nvmf_get_subsystems: deprecated feature listener.transport is deprecated in favor of trtype to be removed in v24.05 00:08:10.453 [ 00:08:10.453 { 00:08:10.453 "allow_any_host": true, 00:08:10.453 "hosts": [], 00:08:10.453 "listen_addresses": [ 00:08:10.453 { 00:08:10.453 "adrfam": "IPv4", 00:08:10.453 "traddr": "10.0.0.2", 00:08:10.453 "transport": "TCP", 00:08:10.453 "trsvcid": "4420", 00:08:10.453 "trtype": "TCP" 00:08:10.453 } 00:08:10.453 ], 00:08:10.453 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:08:10.453 "subtype": "Discovery" 00:08:10.453 }, 00:08:10.453 { 00:08:10.453 "allow_any_host": true, 00:08:10.453 "hosts": [], 00:08:10.453 "listen_addresses": [ 00:08:10.453 { 00:08:10.453 "adrfam": "IPv4", 00:08:10.453 "traddr": "10.0.0.2", 00:08:10.453 "transport": "TCP", 00:08:10.453 "trsvcid": "4420", 00:08:10.453 "trtype": "TCP" 00:08:10.453 } 00:08:10.453 ], 00:08:10.453 "max_cntlid": 65519, 00:08:10.453 "max_namespaces": 32, 00:08:10.453 "min_cntlid": 1, 00:08:10.453 "model_number": "SPDK bdev Controller", 00:08:10.453 "namespaces": [ 00:08:10.453 { 00:08:10.453 "bdev_name": "Null1", 00:08:10.453 "name": "Null1", 00:08:10.453 "nguid": "383DE299870040FFAB274C5449CA2BC9", 00:08:10.453 "nsid": 1, 00:08:10.453 "uuid": "383de299-8700-40ff-ab27-4c5449ca2bc9" 00:08:10.453 } 00:08:10.453 ], 00:08:10.453 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:08:10.453 "serial_number": "SPDK00000000000001", 00:08:10.454 "subtype": "NVMe" 00:08:10.454 }, 00:08:10.454 { 00:08:10.454 "allow_any_host": true, 00:08:10.454 "hosts": [], 00:08:10.454 "listen_addresses": [ 00:08:10.454 { 00:08:10.454 "adrfam": "IPv4", 00:08:10.454 "traddr": "10.0.0.2", 00:08:10.454 "transport": "TCP", 00:08:10.454 "trsvcid": "4420", 00:08:10.454 "trtype": "TCP" 00:08:10.454 } 00:08:10.454 ], 00:08:10.454 "max_cntlid": 65519, 00:08:10.454 "max_namespaces": 32, 00:08:10.454 "min_cntlid": 1, 
00:08:10.454 "model_number": "SPDK bdev Controller", 00:08:10.454 "namespaces": [ 00:08:10.454 { 00:08:10.454 "bdev_name": "Null2", 00:08:10.454 "name": "Null2", 00:08:10.454 "nguid": "148EC11703F143D591177EB1815913DF", 00:08:10.454 "nsid": 1, 00:08:10.454 "uuid": "148ec117-03f1-43d5-9117-7eb1815913df" 00:08:10.454 } 00:08:10.454 ], 00:08:10.454 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:08:10.454 "serial_number": "SPDK00000000000002", 00:08:10.454 "subtype": "NVMe" 00:08:10.454 }, 00:08:10.454 { 00:08:10.454 "allow_any_host": true, 00:08:10.454 "hosts": [], 00:08:10.454 "listen_addresses": [ 00:08:10.454 { 00:08:10.454 "adrfam": "IPv4", 00:08:10.454 "traddr": "10.0.0.2", 00:08:10.454 "transport": "TCP", 00:08:10.454 "trsvcid": "4420", 00:08:10.454 "trtype": "TCP" 00:08:10.454 } 00:08:10.454 ], 00:08:10.454 "max_cntlid": 65519, 00:08:10.454 "max_namespaces": 32, 00:08:10.454 "min_cntlid": 1, 00:08:10.454 "model_number": "SPDK bdev Controller", 00:08:10.454 "namespaces": [ 00:08:10.454 { 00:08:10.454 "bdev_name": "Null3", 00:08:10.454 "name": "Null3", 00:08:10.454 "nguid": "5173AD2C8DC14A9986C5737C02E31185", 00:08:10.454 "nsid": 1, 00:08:10.454 "uuid": "5173ad2c-8dc1-4a99-86c5-737c02e31185" 00:08:10.454 } 00:08:10.454 ], 00:08:10.454 "nqn": "nqn.2016-06.io.spdk:cnode3", 00:08:10.454 "serial_number": "SPDK00000000000003", 00:08:10.454 "subtype": "NVMe" 00:08:10.454 }, 00:08:10.454 { 00:08:10.454 "allow_any_host": true, 00:08:10.454 "hosts": [], 00:08:10.454 "listen_addresses": [ 00:08:10.454 { 00:08:10.454 "adrfam": "IPv4", 00:08:10.454 "traddr": "10.0.0.2", 00:08:10.454 "transport": "TCP", 00:08:10.454 "trsvcid": "4420", 00:08:10.454 "trtype": "TCP" 00:08:10.454 } 00:08:10.454 ], 00:08:10.454 "max_cntlid": 65519, 00:08:10.454 "max_namespaces": 32, 00:08:10.454 "min_cntlid": 1, 00:08:10.454 "model_number": "SPDK bdev Controller", 00:08:10.454 "namespaces": [ 00:08:10.454 { 00:08:10.454 "bdev_name": "Null4", 00:08:10.454 "name": "Null4", 00:08:10.454 "nguid": "4E0A7FABF69645EB864F4E3377FBF1A2", 00:08:10.454 "nsid": 1, 00:08:10.454 "uuid": "4e0a7fab-f696-45eb-864f-4e3377fbf1a2" 00:08:10.454 } 00:08:10.454 ], 00:08:10.454 "nqn": "nqn.2016-06.io.spdk:cnode4", 00:08:10.454 "serial_number": "SPDK00000000000004", 00:08:10.454 "subtype": "NVMe" 00:08:10.454 } 00:08:10.454 ] 00:08:10.454 00:42:22 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:10.454 00:42:22 -- target/discovery.sh@42 -- # seq 1 4 00:08:10.454 00:42:22 -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:08:10.454 00:42:22 -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:10.454 00:42:22 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:10.454 00:42:22 -- common/autotest_common.sh@10 -- # set +x 00:08:10.454 00:42:22 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:10.454 00:42:22 -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null1 00:08:10.454 00:42:22 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:10.454 00:42:22 -- common/autotest_common.sh@10 -- # set +x 00:08:10.454 00:42:22 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:10.454 00:42:22 -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:08:10.454 00:42:22 -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:08:10.454 00:42:22 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:10.454 00:42:22 -- common/autotest_common.sh@10 -- # set +x 00:08:10.454 00:42:22 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:10.454 00:42:22 -- 
target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null2 00:08:10.454 00:42:22 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:10.454 00:42:22 -- common/autotest_common.sh@10 -- # set +x 00:08:10.454 00:42:22 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:10.454 00:42:22 -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:08:10.454 00:42:22 -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:08:10.454 00:42:22 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:10.454 00:42:22 -- common/autotest_common.sh@10 -- # set +x 00:08:10.454 00:42:22 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:10.454 00:42:22 -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null3 00:08:10.454 00:42:22 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:10.454 00:42:22 -- common/autotest_common.sh@10 -- # set +x 00:08:10.454 00:42:22 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:10.454 00:42:22 -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:08:10.454 00:42:22 -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:08:10.454 00:42:22 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:10.454 00:42:22 -- common/autotest_common.sh@10 -- # set +x 00:08:10.454 00:42:22 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:10.454 00:42:22 -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null4 00:08:10.454 00:42:22 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:10.454 00:42:22 -- common/autotest_common.sh@10 -- # set +x 00:08:10.454 00:42:22 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:10.454 00:42:22 -- target/discovery.sh@47 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 10.0.0.2 -s 4430 00:08:10.454 00:42:22 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:10.454 00:42:22 -- common/autotest_common.sh@10 -- # set +x 00:08:10.454 00:42:22 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:10.454 00:42:22 -- target/discovery.sh@49 -- # rpc_cmd bdev_get_bdevs 00:08:10.454 00:42:22 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:10.454 00:42:22 -- target/discovery.sh@49 -- # jq -r '.[].name' 00:08:10.454 00:42:22 -- common/autotest_common.sh@10 -- # set +x 00:08:10.454 00:42:22 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:10.713 00:42:22 -- target/discovery.sh@49 -- # check_bdevs= 00:08:10.713 00:42:22 -- target/discovery.sh@50 -- # '[' -n '' ']' 00:08:10.713 00:42:22 -- target/discovery.sh@55 -- # trap - SIGINT SIGTERM EXIT 00:08:10.713 00:42:22 -- target/discovery.sh@57 -- # nvmftestfini 00:08:10.713 00:42:22 -- nvmf/common.sh@476 -- # nvmfcleanup 00:08:10.713 00:42:22 -- nvmf/common.sh@116 -- # sync 00:08:10.713 00:42:23 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:08:10.713 00:42:23 -- nvmf/common.sh@119 -- # set +e 00:08:10.713 00:42:23 -- nvmf/common.sh@120 -- # for i in {1..20} 00:08:10.713 00:42:23 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:08:10.713 rmmod nvme_tcp 00:08:10.713 rmmod nvme_fabrics 00:08:10.713 rmmod nvme_keyring 00:08:10.713 00:42:23 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:08:10.713 00:42:23 -- nvmf/common.sh@123 -- # set -e 00:08:10.713 00:42:23 -- nvmf/common.sh@124 -- # return 0 00:08:10.713 00:42:23 -- nvmf/common.sh@477 -- # '[' -n 73443 ']' 00:08:10.713 00:42:23 -- nvmf/common.sh@478 -- # killprocess 73443 00:08:10.713 00:42:23 -- common/autotest_common.sh@936 -- # '[' -z 73443 ']' 00:08:10.713 00:42:23 -- 
common/autotest_common.sh@940 -- # kill -0 73443 00:08:10.713 00:42:23 -- common/autotest_common.sh@941 -- # uname 00:08:10.713 00:42:23 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:08:10.713 00:42:23 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 73443 00:08:10.713 00:42:23 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:08:10.713 killing process with pid 73443 00:08:10.713 00:42:23 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:08:10.713 00:42:23 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 73443' 00:08:10.713 00:42:23 -- common/autotest_common.sh@955 -- # kill 73443 00:08:10.713 [2024-12-03 00:42:23.104842] app.c: 883:log_deprecation_hits: *WARNING*: rpc_nvmf_get_subsystems: deprecation 'listener.transport is deprecated in favor of trtype' scheduled for removal in v24.05 hit 1 times 00:08:10.713 00:42:23 -- common/autotest_common.sh@960 -- # wait 73443 00:08:10.972 00:42:23 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:08:10.972 00:42:23 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:08:10.972 00:42:23 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:08:10.972 00:42:23 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:08:10.972 00:42:23 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:08:10.972 00:42:23 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:10.972 00:42:23 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:10.972 00:42:23 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:10.972 00:42:23 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:08:10.972 ************************************ 00:08:10.972 END TEST nvmf_discovery 00:08:10.972 ************************************ 00:08:10.972 00:08:10.972 real 0m2.608s 00:08:10.972 user 0m7.131s 00:08:10.972 sys 0m0.690s 00:08:10.972 00:42:23 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:10.972 00:42:23 -- common/autotest_common.sh@10 -- # set +x 00:08:10.972 00:42:23 -- nvmf/nvmf.sh@26 -- # run_test nvmf_referrals /home/vagrant/spdk_repo/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:08:10.972 00:42:23 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:08:10.972 00:42:23 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:10.972 00:42:23 -- common/autotest_common.sh@10 -- # set +x 00:08:10.972 ************************************ 00:08:10.972 START TEST nvmf_referrals 00:08:10.972 ************************************ 00:08:10.972 00:42:23 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:08:11.231 * Looking for test storage... 
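The discovery test that just finished drives everything through the rpc_cmd wrapper; reproduced by hand against a running nvmf_tgt (assuming the default /var/tmp/spdk.sock RPC socket), the same sequence is roughly the sketch below, reusing the NQNs, sizes and addresses from the trace:
  scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
  scripts/rpc.py bdev_null_create Null1 102400 512
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Null1
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
  scripts/rpc.py nvmf_discovery_add_referral -t tcp -a 10.0.0.2 -s 4430
  scripts/rpc.py nvmf_get_subsystems    # returns the JSON dump shown earlier in the trace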
00:08:11.231 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:08:11.231 00:42:23 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:08:11.231 00:42:23 -- common/autotest_common.sh@1690 -- # lcov --version 00:08:11.231 00:42:23 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:08:11.231 00:42:23 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:08:11.231 00:42:23 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:08:11.231 00:42:23 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:08:11.231 00:42:23 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:08:11.231 00:42:23 -- scripts/common.sh@335 -- # IFS=.-: 00:08:11.231 00:42:23 -- scripts/common.sh@335 -- # read -ra ver1 00:08:11.231 00:42:23 -- scripts/common.sh@336 -- # IFS=.-: 00:08:11.231 00:42:23 -- scripts/common.sh@336 -- # read -ra ver2 00:08:11.231 00:42:23 -- scripts/common.sh@337 -- # local 'op=<' 00:08:11.231 00:42:23 -- scripts/common.sh@339 -- # ver1_l=2 00:08:11.231 00:42:23 -- scripts/common.sh@340 -- # ver2_l=1 00:08:11.231 00:42:23 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:08:11.231 00:42:23 -- scripts/common.sh@343 -- # case "$op" in 00:08:11.231 00:42:23 -- scripts/common.sh@344 -- # : 1 00:08:11.231 00:42:23 -- scripts/common.sh@363 -- # (( v = 0 )) 00:08:11.231 00:42:23 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:08:11.231 00:42:23 -- scripts/common.sh@364 -- # decimal 1 00:08:11.231 00:42:23 -- scripts/common.sh@352 -- # local d=1 00:08:11.231 00:42:23 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:11.231 00:42:23 -- scripts/common.sh@354 -- # echo 1 00:08:11.231 00:42:23 -- scripts/common.sh@364 -- # ver1[v]=1 00:08:11.231 00:42:23 -- scripts/common.sh@365 -- # decimal 2 00:08:11.231 00:42:23 -- scripts/common.sh@352 -- # local d=2 00:08:11.231 00:42:23 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:11.231 00:42:23 -- scripts/common.sh@354 -- # echo 2 00:08:11.231 00:42:23 -- scripts/common.sh@365 -- # ver2[v]=2 00:08:11.231 00:42:23 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:08:11.231 00:42:23 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:08:11.231 00:42:23 -- scripts/common.sh@367 -- # return 0 00:08:11.231 00:42:23 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:11.231 00:42:23 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:08:11.231 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:11.231 --rc genhtml_branch_coverage=1 00:08:11.231 --rc genhtml_function_coverage=1 00:08:11.231 --rc genhtml_legend=1 00:08:11.231 --rc geninfo_all_blocks=1 00:08:11.231 --rc geninfo_unexecuted_blocks=1 00:08:11.231 00:08:11.231 ' 00:08:11.231 00:42:23 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:08:11.231 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:11.231 --rc genhtml_branch_coverage=1 00:08:11.231 --rc genhtml_function_coverage=1 00:08:11.231 --rc genhtml_legend=1 00:08:11.231 --rc geninfo_all_blocks=1 00:08:11.231 --rc geninfo_unexecuted_blocks=1 00:08:11.231 00:08:11.232 ' 00:08:11.232 00:42:23 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:08:11.232 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:11.232 --rc genhtml_branch_coverage=1 00:08:11.232 --rc genhtml_function_coverage=1 00:08:11.232 --rc genhtml_legend=1 00:08:11.232 --rc geninfo_all_blocks=1 00:08:11.232 --rc geninfo_unexecuted_blocks=1 00:08:11.232 00:08:11.232 ' 00:08:11.232 
00:42:23 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:08:11.232 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:11.232 --rc genhtml_branch_coverage=1 00:08:11.232 --rc genhtml_function_coverage=1 00:08:11.232 --rc genhtml_legend=1 00:08:11.232 --rc geninfo_all_blocks=1 00:08:11.232 --rc geninfo_unexecuted_blocks=1 00:08:11.232 00:08:11.232 ' 00:08:11.232 00:42:23 -- target/referrals.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:08:11.232 00:42:23 -- nvmf/common.sh@7 -- # uname -s 00:08:11.232 00:42:23 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:11.232 00:42:23 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:11.232 00:42:23 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:11.232 00:42:23 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:11.232 00:42:23 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:11.232 00:42:23 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:11.232 00:42:23 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:11.232 00:42:23 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:11.232 00:42:23 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:11.232 00:42:23 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:11.232 00:42:23 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:15939434-fa82-47b6-ae7c-b62ba203cee8 00:08:11.232 00:42:23 -- nvmf/common.sh@18 -- # NVME_HOSTID=15939434-fa82-47b6-ae7c-b62ba203cee8 00:08:11.232 00:42:23 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:11.232 00:42:23 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:11.232 00:42:23 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:08:11.232 00:42:23 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:08:11.232 00:42:23 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:11.232 00:42:23 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:11.232 00:42:23 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:11.232 00:42:23 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:11.232 00:42:23 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:11.232 00:42:23 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:11.232 00:42:23 -- paths/export.sh@5 -- # export PATH 00:08:11.232 00:42:23 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:11.232 00:42:23 -- nvmf/common.sh@46 -- # : 0 00:08:11.232 00:42:23 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:08:11.232 00:42:23 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:08:11.232 00:42:23 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:08:11.232 00:42:23 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:11.232 00:42:23 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:11.232 00:42:23 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:08:11.232 00:42:23 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:08:11.232 00:42:23 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:08:11.232 00:42:23 -- target/referrals.sh@11 -- # NVMF_REFERRAL_IP_1=127.0.0.2 00:08:11.232 00:42:23 -- target/referrals.sh@12 -- # NVMF_REFERRAL_IP_2=127.0.0.3 00:08:11.232 00:42:23 -- target/referrals.sh@13 -- # NVMF_REFERRAL_IP_3=127.0.0.4 00:08:11.232 00:42:23 -- target/referrals.sh@14 -- # NVMF_PORT_REFERRAL=4430 00:08:11.232 00:42:23 -- target/referrals.sh@15 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:08:11.232 00:42:23 -- target/referrals.sh@16 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:08:11.232 00:42:23 -- target/referrals.sh@37 -- # nvmftestinit 00:08:11.232 00:42:23 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:08:11.232 00:42:23 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:11.232 00:42:23 -- nvmf/common.sh@436 -- # prepare_net_devs 00:08:11.232 00:42:23 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:08:11.232 00:42:23 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:08:11.232 00:42:23 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:11.232 00:42:23 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:11.232 00:42:23 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:11.232 00:42:23 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:08:11.232 00:42:23 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:08:11.232 00:42:23 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:08:11.232 00:42:23 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:08:11.232 00:42:23 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:08:11.232 00:42:23 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:08:11.232 00:42:23 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:11.232 00:42:23 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 
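The host identity picked up above comes from nvme-cli; a minimal sketch of producing an equivalent pair by hand follows (the prefix-stripping step is an assumption about how the ID is derived, the trace only shows the resulting values):
  NVME_HOSTNQN=$(nvme gen-hostnqn)    # e.g. nqn.2014-08.org.nvmexpress:uuid:<uuid>
  NVME_HOSTID=${NVME_HOSTNQN##*:}     # keep only the trailing UUID
  NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")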
00:08:11.232 00:42:23 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:08:11.232 00:42:23 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:08:11.232 00:42:23 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:08:11.232 00:42:23 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:08:11.232 00:42:23 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:08:11.232 00:42:23 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:11.232 00:42:23 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:08:11.232 00:42:23 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:08:11.232 00:42:23 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:08:11.232 00:42:23 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:08:11.232 00:42:23 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:08:11.232 00:42:23 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:08:11.232 Cannot find device "nvmf_tgt_br" 00:08:11.232 00:42:23 -- nvmf/common.sh@154 -- # true 00:08:11.232 00:42:23 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:08:11.232 Cannot find device "nvmf_tgt_br2" 00:08:11.232 00:42:23 -- nvmf/common.sh@155 -- # true 00:08:11.232 00:42:23 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:08:11.232 00:42:23 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:08:11.232 Cannot find device "nvmf_tgt_br" 00:08:11.232 00:42:23 -- nvmf/common.sh@157 -- # true 00:08:11.232 00:42:23 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:08:11.491 Cannot find device "nvmf_tgt_br2" 00:08:11.491 00:42:23 -- nvmf/common.sh@158 -- # true 00:08:11.491 00:42:23 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:08:11.491 00:42:23 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:08:11.491 00:42:23 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:08:11.491 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:08:11.491 00:42:23 -- nvmf/common.sh@161 -- # true 00:08:11.491 00:42:23 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:08:11.491 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:08:11.491 00:42:23 -- nvmf/common.sh@162 -- # true 00:08:11.491 00:42:23 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:08:11.491 00:42:23 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:08:11.491 00:42:23 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:08:11.491 00:42:23 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:08:11.491 00:42:23 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:08:11.491 00:42:23 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:08:11.491 00:42:23 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:08:11.491 00:42:23 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:08:11.491 00:42:23 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:08:11.491 00:42:23 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:08:11.491 00:42:23 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:08:11.492 00:42:23 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 
00:08:11.492 00:42:23 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:08:11.492 00:42:23 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:08:11.492 00:42:23 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:08:11.492 00:42:23 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:08:11.492 00:42:23 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:08:11.492 00:42:23 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:08:11.492 00:42:23 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:08:11.492 00:42:23 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:08:11.492 00:42:23 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:08:11.492 00:42:23 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:08:11.492 00:42:23 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:08:11.492 00:42:23 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:08:11.492 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:11.492 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.060 ms 00:08:11.492 00:08:11.492 --- 10.0.0.2 ping statistics --- 00:08:11.492 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:11.492 rtt min/avg/max/mdev = 0.060/0.060/0.060/0.000 ms 00:08:11.492 00:42:23 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:08:11.492 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:08:11.492 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.058 ms 00:08:11.492 00:08:11.492 --- 10.0.0.3 ping statistics --- 00:08:11.492 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:11.492 rtt min/avg/max/mdev = 0.058/0.058/0.058/0.000 ms 00:08:11.492 00:42:23 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:08:11.492 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:08:11.492 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.033 ms 00:08:11.492 00:08:11.492 --- 10.0.0.1 ping statistics --- 00:08:11.492 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:11.492 rtt min/avg/max/mdev = 0.033/0.033/0.033/0.000 ms 00:08:11.492 00:42:23 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:11.492 00:42:23 -- nvmf/common.sh@421 -- # return 0 00:08:11.492 00:42:23 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:08:11.492 00:42:23 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:11.492 00:42:23 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:08:11.492 00:42:23 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:08:11.492 00:42:23 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:11.492 00:42:23 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:08:11.492 00:42:23 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:08:11.492 00:42:23 -- target/referrals.sh@38 -- # nvmfappstart -m 0xF 00:08:11.492 00:42:23 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:08:11.492 00:42:24 -- common/autotest_common.sh@722 -- # xtrace_disable 00:08:11.492 00:42:24 -- common/autotest_common.sh@10 -- # set +x 00:08:11.751 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
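The veth/namespace plumbing that nvmf_veth_init just finished (the first test set it up the same way earlier) condenses to roughly the sketch below; the second target interface, its bridge and the 10.0.0.3 address are handled identically and are left out for brevity:
  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br
  ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
  ip link set nvmf_init_if up; ip link set nvmf_init_br up; ip link set nvmf_tgt_br up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip netns exec nvmf_tgt_ns_spdk ip link set lo up
  ip link add nvmf_br type bridge; ip link set nvmf_br up
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br master nvmf_br
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
  iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
  ping -c 1 10.0.0.2    # initiator (host) -> target namespace reachability check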
00:08:11.751 00:42:24 -- nvmf/common.sh@469 -- # nvmfpid=73676 00:08:11.751 00:42:24 -- nvmf/common.sh@470 -- # waitforlisten 73676 00:08:11.751 00:42:24 -- common/autotest_common.sh@829 -- # '[' -z 73676 ']' 00:08:11.751 00:42:24 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:08:11.751 00:42:24 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:11.751 00:42:24 -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:11.751 00:42:24 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:11.751 00:42:24 -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:11.751 00:42:24 -- common/autotest_common.sh@10 -- # set +x 00:08:11.751 [2024-12-03 00:42:24.066625] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:08:11.751 [2024-12-03 00:42:24.066913] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:11.751 [2024-12-03 00:42:24.206609] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:12.009 [2024-12-03 00:42:24.279618] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:08:12.009 [2024-12-03 00:42:24.280094] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:12.009 [2024-12-03 00:42:24.280116] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:12.009 [2024-12-03 00:42:24.280126] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
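nvmfappstart above amounts to launching the target inside the namespace and blocking until its RPC socket answers; a simplified stand-in for the waitforlisten helper (polling rpc_get_methods is an assumption, the real helper also tracks the pid) looks like:
  ip netns exec nvmf_tgt_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
  nvmfpid=$!
  # poll the default RPC socket until the target responds
  until scripts/rpc.py -s /var/tmp/spdk.sock -t 1 rpc_get_methods >/dev/null 2>&1; do
      sleep 0.2
  done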
00:08:12.009 [2024-12-03 00:42:24.280296] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:08:12.009 [2024-12-03 00:42:24.280491] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:08:12.009 [2024-12-03 00:42:24.280578] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:08:12.009 [2024-12-03 00:42:24.280665] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:12.577 00:42:24 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:12.577 00:42:24 -- common/autotest_common.sh@862 -- # return 0 00:08:12.577 00:42:24 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:08:12.577 00:42:24 -- common/autotest_common.sh@728 -- # xtrace_disable 00:08:12.577 00:42:24 -- common/autotest_common.sh@10 -- # set +x 00:08:12.577 00:42:25 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:12.577 00:42:25 -- target/referrals.sh@40 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:08:12.577 00:42:25 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:12.577 00:42:25 -- common/autotest_common.sh@10 -- # set +x 00:08:12.577 [2024-12-03 00:42:25.050340] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:12.577 00:42:25 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:12.577 00:42:25 -- target/referrals.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 10.0.0.2 -s 8009 discovery 00:08:12.577 00:42:25 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:12.577 00:42:25 -- common/autotest_common.sh@10 -- # set +x 00:08:12.577 [2024-12-03 00:42:25.081934] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:08:12.577 00:42:25 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:12.577 00:42:25 -- target/referrals.sh@44 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 00:08:12.577 00:42:25 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:12.577 00:42:25 -- common/autotest_common.sh@10 -- # set +x 00:08:12.835 00:42:25 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:12.835 00:42:25 -- target/referrals.sh@45 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.3 -s 4430 00:08:12.835 00:42:25 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:12.835 00:42:25 -- common/autotest_common.sh@10 -- # set +x 00:08:12.835 00:42:25 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:12.835 00:42:25 -- target/referrals.sh@46 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.4 -s 4430 00:08:12.835 00:42:25 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:12.835 00:42:25 -- common/autotest_common.sh@10 -- # set +x 00:08:12.835 00:42:25 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:12.835 00:42:25 -- target/referrals.sh@48 -- # rpc_cmd nvmf_discovery_get_referrals 00:08:12.835 00:42:25 -- target/referrals.sh@48 -- # jq length 00:08:12.835 00:42:25 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:12.835 00:42:25 -- common/autotest_common.sh@10 -- # set +x 00:08:12.835 00:42:25 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:12.835 00:42:25 -- target/referrals.sh@48 -- # (( 3 == 3 )) 00:08:12.835 00:42:25 -- target/referrals.sh@49 -- # get_referral_ips rpc 00:08:12.836 00:42:25 -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:08:12.836 00:42:25 -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:08:12.836 00:42:25 -- common/autotest_common.sh@561 -- # xtrace_disable 
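The referral bookkeeping exercised here is a handful of RPCs plus a discovery from the initiator side; by hand it is roughly the following sketch (NVME_HOST as generated earlier, addresses and ports as in the trace):
  scripts/rpc.py nvmf_subsystem_add_listener -t tcp -a 10.0.0.2 -s 8009 discovery
  for ip in 127.0.0.2 127.0.0.3 127.0.0.4; do
      scripts/rpc.py nvmf_discovery_add_referral -t tcp -a "$ip" -s 4430
  done
  scripts/rpc.py nvmf_discovery_get_referrals | jq length    # expect 3
  nvme discover "${NVME_HOST[@]}" -t tcp -a 10.0.0.2 -s 8009 -o json |
      jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr'
  # removing a referral mirrors the add:
  scripts/rpc.py nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430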
00:08:12.836 00:42:25 -- common/autotest_common.sh@10 -- # set +x 00:08:12.836 00:42:25 -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:08:12.836 00:42:25 -- target/referrals.sh@21 -- # sort 00:08:12.836 00:42:25 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:12.836 00:42:25 -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:08:12.836 00:42:25 -- target/referrals.sh@49 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:08:12.836 00:42:25 -- target/referrals.sh@50 -- # get_referral_ips nvme 00:08:12.836 00:42:25 -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:08:12.836 00:42:25 -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:08:12.836 00:42:25 -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:15939434-fa82-47b6-ae7c-b62ba203cee8 --hostid=15939434-fa82-47b6-ae7c-b62ba203cee8 -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:12.836 00:42:25 -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:08:12.836 00:42:25 -- target/referrals.sh@26 -- # sort 00:08:12.836 00:42:25 -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:08:12.836 00:42:25 -- target/referrals.sh@50 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:08:12.836 00:42:25 -- target/referrals.sh@52 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 00:08:12.836 00:42:25 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:12.836 00:42:25 -- common/autotest_common.sh@10 -- # set +x 00:08:13.095 00:42:25 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:13.095 00:42:25 -- target/referrals.sh@53 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.3 -s 4430 00:08:13.095 00:42:25 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:13.095 00:42:25 -- common/autotest_common.sh@10 -- # set +x 00:08:13.095 00:42:25 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:13.095 00:42:25 -- target/referrals.sh@54 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.4 -s 4430 00:08:13.095 00:42:25 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:13.095 00:42:25 -- common/autotest_common.sh@10 -- # set +x 00:08:13.095 00:42:25 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:13.095 00:42:25 -- target/referrals.sh@56 -- # rpc_cmd nvmf_discovery_get_referrals 00:08:13.095 00:42:25 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:13.095 00:42:25 -- target/referrals.sh@56 -- # jq length 00:08:13.095 00:42:25 -- common/autotest_common.sh@10 -- # set +x 00:08:13.095 00:42:25 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:13.095 00:42:25 -- target/referrals.sh@56 -- # (( 0 == 0 )) 00:08:13.095 00:42:25 -- target/referrals.sh@57 -- # get_referral_ips nvme 00:08:13.095 00:42:25 -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:08:13.095 00:42:25 -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:08:13.095 00:42:25 -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:08:13.095 00:42:25 -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:15939434-fa82-47b6-ae7c-b62ba203cee8 --hostid=15939434-fa82-47b6-ae7c-b62ba203cee8 -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:13.095 00:42:25 -- target/referrals.sh@26 -- # sort 00:08:13.095 00:42:25 -- target/referrals.sh@26 -- # echo 00:08:13.095 00:42:25 -- 
target/referrals.sh@57 -- # [[ '' == '' ]] 00:08:13.095 00:42:25 -- target/referrals.sh@60 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n discovery 00:08:13.095 00:42:25 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:13.095 00:42:25 -- common/autotest_common.sh@10 -- # set +x 00:08:13.095 00:42:25 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:13.095 00:42:25 -- target/referrals.sh@62 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:08:13.095 00:42:25 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:13.095 00:42:25 -- common/autotest_common.sh@10 -- # set +x 00:08:13.095 00:42:25 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:13.095 00:42:25 -- target/referrals.sh@65 -- # get_referral_ips rpc 00:08:13.095 00:42:25 -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:08:13.095 00:42:25 -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:08:13.095 00:42:25 -- target/referrals.sh@21 -- # sort 00:08:13.095 00:42:25 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:13.095 00:42:25 -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:08:13.095 00:42:25 -- common/autotest_common.sh@10 -- # set +x 00:08:13.354 00:42:25 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:13.354 00:42:25 -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.2 00:08:13.354 00:42:25 -- target/referrals.sh@65 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:08:13.354 00:42:25 -- target/referrals.sh@66 -- # get_referral_ips nvme 00:08:13.354 00:42:25 -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:08:13.354 00:42:25 -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:08:13.354 00:42:25 -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:15939434-fa82-47b6-ae7c-b62ba203cee8 --hostid=15939434-fa82-47b6-ae7c-b62ba203cee8 -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:13.354 00:42:25 -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:08:13.354 00:42:25 -- target/referrals.sh@26 -- # sort 00:08:13.354 00:42:25 -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.2 00:08:13.354 00:42:25 -- target/referrals.sh@66 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:08:13.354 00:42:25 -- target/referrals.sh@67 -- # get_discovery_entries 'nvme subsystem' 00:08:13.354 00:42:25 -- target/referrals.sh@67 -- # jq -r .subnqn 00:08:13.354 00:42:25 -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:08:13.354 00:42:25 -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:15939434-fa82-47b6-ae7c-b62ba203cee8 --hostid=15939434-fa82-47b6-ae7c-b62ba203cee8 -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:13.354 00:42:25 -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:08:13.613 00:42:25 -- target/referrals.sh@67 -- # [[ nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 00:08:13.613 00:42:25 -- target/referrals.sh@68 -- # get_discovery_entries 'discovery subsystem referral' 00:08:13.613 00:42:25 -- target/referrals.sh@68 -- # jq -r .subnqn 00:08:13.613 00:42:25 -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:08:13.613 00:42:25 -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:15939434-fa82-47b6-ae7c-b62ba203cee8 
--hostid=15939434-fa82-47b6-ae7c-b62ba203cee8 -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:13.613 00:42:25 -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:08:13.613 00:42:25 -- target/referrals.sh@68 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:08:13.613 00:42:25 -- target/referrals.sh@71 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:08:13.613 00:42:25 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:13.613 00:42:25 -- common/autotest_common.sh@10 -- # set +x 00:08:13.613 00:42:25 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:13.613 00:42:25 -- target/referrals.sh@73 -- # get_referral_ips rpc 00:08:13.613 00:42:25 -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:08:13.613 00:42:25 -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:08:13.613 00:42:25 -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:08:13.613 00:42:25 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:13.613 00:42:25 -- common/autotest_common.sh@10 -- # set +x 00:08:13.613 00:42:25 -- target/referrals.sh@21 -- # sort 00:08:13.613 00:42:26 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:13.613 00:42:26 -- target/referrals.sh@21 -- # echo 127.0.0.2 00:08:13.613 00:42:26 -- target/referrals.sh@73 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:08:13.613 00:42:26 -- target/referrals.sh@74 -- # get_referral_ips nvme 00:08:13.613 00:42:26 -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:08:13.613 00:42:26 -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:08:13.613 00:42:26 -- target/referrals.sh@26 -- # sort 00:08:13.613 00:42:26 -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:08:13.613 00:42:26 -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:15939434-fa82-47b6-ae7c-b62ba203cee8 --hostid=15939434-fa82-47b6-ae7c-b62ba203cee8 -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:13.873 00:42:26 -- target/referrals.sh@26 -- # echo 127.0.0.2 00:08:13.873 00:42:26 -- target/referrals.sh@74 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:08:13.873 00:42:26 -- target/referrals.sh@75 -- # get_discovery_entries 'nvme subsystem' 00:08:13.873 00:42:26 -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:08:13.873 00:42:26 -- target/referrals.sh@75 -- # jq -r .subnqn 00:08:13.873 00:42:26 -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:15939434-fa82-47b6-ae7c-b62ba203cee8 --hostid=15939434-fa82-47b6-ae7c-b62ba203cee8 -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:13.873 00:42:26 -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:08:13.873 00:42:26 -- target/referrals.sh@75 -- # [[ '' == '' ]] 00:08:13.873 00:42:26 -- target/referrals.sh@76 -- # get_discovery_entries 'discovery subsystem referral' 00:08:13.873 00:42:26 -- target/referrals.sh@76 -- # jq -r .subnqn 00:08:13.873 00:42:26 -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:08:13.873 00:42:26 -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:15939434-fa82-47b6-ae7c-b62ba203cee8 --hostid=15939434-fa82-47b6-ae7c-b62ba203cee8 -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:13.873 00:42:26 -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 
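Each check in this test is made twice: once against the target's own view (nvmf_discovery_get_referrals piped through jq) and once against what an initiator actually sees in the discovery log page. A sketch of the host-side query used above, with the hostnqn/hostid values generated earlier in this run; the jq filters are the ones from get_referral_ips and get_discovery_entries, and the temporary discovery.json file is only for illustration:

    # discovery log page as JSON, queried from the initiator side
    nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:15939434-fa82-47b6-ae7c-b62ba203cee8 \
                  --hostid=15939434-fa82-47b6-ae7c-b62ba203cee8 \
                  -t tcp -a 10.0.0.2 -s 8009 -o json > discovery.json

    # referral addresses (everything except the local discovery subsystem itself)
    jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' discovery.json | sort

    # subsystem NQNs, e.g. the referral added with -n nqn.2016-06.io.spdk:cnode1
    jq -r '.records[] | select(.subtype == "nvme subsystem").subnqn' discovery.json

The string comparisons in the log ([[ 127.0.0.2 ... ]] and the escaped NQN patterns) are just these outputs checked against the expected literals.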
00:08:14.132 00:42:26 -- target/referrals.sh@76 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:08:14.132 00:42:26 -- target/referrals.sh@79 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2014-08.org.nvmexpress.discovery 00:08:14.132 00:42:26 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:14.132 00:42:26 -- common/autotest_common.sh@10 -- # set +x 00:08:14.132 00:42:26 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:14.132 00:42:26 -- target/referrals.sh@82 -- # jq length 00:08:14.132 00:42:26 -- target/referrals.sh@82 -- # rpc_cmd nvmf_discovery_get_referrals 00:08:14.132 00:42:26 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:14.132 00:42:26 -- common/autotest_common.sh@10 -- # set +x 00:08:14.132 00:42:26 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:14.132 00:42:26 -- target/referrals.sh@82 -- # (( 0 == 0 )) 00:08:14.132 00:42:26 -- target/referrals.sh@83 -- # get_referral_ips nvme 00:08:14.132 00:42:26 -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:08:14.132 00:42:26 -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:08:14.132 00:42:26 -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:15939434-fa82-47b6-ae7c-b62ba203cee8 --hostid=15939434-fa82-47b6-ae7c-b62ba203cee8 -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:14.132 00:42:26 -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:08:14.132 00:42:26 -- target/referrals.sh@26 -- # sort 00:08:14.132 00:42:26 -- target/referrals.sh@26 -- # echo 00:08:14.132 00:42:26 -- target/referrals.sh@83 -- # [[ '' == '' ]] 00:08:14.132 00:42:26 -- target/referrals.sh@85 -- # trap - SIGINT SIGTERM EXIT 00:08:14.132 00:42:26 -- target/referrals.sh@86 -- # nvmftestfini 00:08:14.132 00:42:26 -- nvmf/common.sh@476 -- # nvmfcleanup 00:08:14.132 00:42:26 -- nvmf/common.sh@116 -- # sync 00:08:14.392 00:42:26 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:08:14.392 00:42:26 -- nvmf/common.sh@119 -- # set +e 00:08:14.392 00:42:26 -- nvmf/common.sh@120 -- # for i in {1..20} 00:08:14.392 00:42:26 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:08:14.392 rmmod nvme_tcp 00:08:14.392 rmmod nvme_fabrics 00:08:14.392 rmmod nvme_keyring 00:08:14.392 00:42:26 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:08:14.392 00:42:26 -- nvmf/common.sh@123 -- # set -e 00:08:14.392 00:42:26 -- nvmf/common.sh@124 -- # return 0 00:08:14.392 00:42:26 -- nvmf/common.sh@477 -- # '[' -n 73676 ']' 00:08:14.392 00:42:26 -- nvmf/common.sh@478 -- # killprocess 73676 00:08:14.392 00:42:26 -- common/autotest_common.sh@936 -- # '[' -z 73676 ']' 00:08:14.392 00:42:26 -- common/autotest_common.sh@940 -- # kill -0 73676 00:08:14.392 00:42:26 -- common/autotest_common.sh@941 -- # uname 00:08:14.392 00:42:26 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:08:14.392 00:42:26 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 73676 00:08:14.392 killing process with pid 73676 00:08:14.392 00:42:26 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:08:14.392 00:42:26 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:08:14.392 00:42:26 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 73676' 00:08:14.392 00:42:26 -- common/autotest_common.sh@955 -- # kill 73676 00:08:14.392 00:42:26 -- common/autotest_common.sh@960 -- # wait 73676 00:08:14.651 00:42:26 -- 
nvmf/common.sh@480 -- # '[' '' == iso ']' 00:08:14.651 00:42:26 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:08:14.651 00:42:26 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:08:14.651 00:42:26 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:08:14.651 00:42:26 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:08:14.651 00:42:26 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:14.651 00:42:26 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:14.651 00:42:26 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:14.651 00:42:27 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:08:14.651 00:08:14.651 real 0m3.557s 00:08:14.651 user 0m11.722s 00:08:14.651 sys 0m0.961s 00:08:14.651 00:42:27 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:14.651 00:42:27 -- common/autotest_common.sh@10 -- # set +x 00:08:14.651 ************************************ 00:08:14.651 END TEST nvmf_referrals 00:08:14.651 ************************************ 00:08:14.651 00:42:27 -- nvmf/nvmf.sh@27 -- # run_test nvmf_connect_disconnect /home/vagrant/spdk_repo/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:08:14.651 00:42:27 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:08:14.651 00:42:27 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:14.651 00:42:27 -- common/autotest_common.sh@10 -- # set +x 00:08:14.651 ************************************ 00:08:14.651 START TEST nvmf_connect_disconnect 00:08:14.651 ************************************ 00:08:14.651 00:42:27 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:08:14.651 * Looking for test storage... 00:08:14.651 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:08:14.651 00:42:27 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:08:14.651 00:42:27 -- common/autotest_common.sh@1690 -- # lcov --version 00:08:14.651 00:42:27 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:08:14.911 00:42:27 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:08:14.911 00:42:27 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:08:14.911 00:42:27 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:08:14.911 00:42:27 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:08:14.911 00:42:27 -- scripts/common.sh@335 -- # IFS=.-: 00:08:14.911 00:42:27 -- scripts/common.sh@335 -- # read -ra ver1 00:08:14.911 00:42:27 -- scripts/common.sh@336 -- # IFS=.-: 00:08:14.911 00:42:27 -- scripts/common.sh@336 -- # read -ra ver2 00:08:14.911 00:42:27 -- scripts/common.sh@337 -- # local 'op=<' 00:08:14.911 00:42:27 -- scripts/common.sh@339 -- # ver1_l=2 00:08:14.911 00:42:27 -- scripts/common.sh@340 -- # ver2_l=1 00:08:14.911 00:42:27 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:08:14.911 00:42:27 -- scripts/common.sh@343 -- # case "$op" in 00:08:14.911 00:42:27 -- scripts/common.sh@344 -- # : 1 00:08:14.911 00:42:27 -- scripts/common.sh@363 -- # (( v = 0 )) 00:08:14.911 00:42:27 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:14.911 00:42:27 -- scripts/common.sh@364 -- # decimal 1 00:08:14.911 00:42:27 -- scripts/common.sh@352 -- # local d=1 00:08:14.911 00:42:27 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:14.911 00:42:27 -- scripts/common.sh@354 -- # echo 1 00:08:14.911 00:42:27 -- scripts/common.sh@364 -- # ver1[v]=1 00:08:14.911 00:42:27 -- scripts/common.sh@365 -- # decimal 2 00:08:14.911 00:42:27 -- scripts/common.sh@352 -- # local d=2 00:08:14.911 00:42:27 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:14.911 00:42:27 -- scripts/common.sh@354 -- # echo 2 00:08:14.911 00:42:27 -- scripts/common.sh@365 -- # ver2[v]=2 00:08:14.911 00:42:27 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:08:14.911 00:42:27 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:08:14.911 00:42:27 -- scripts/common.sh@367 -- # return 0 00:08:14.911 00:42:27 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:14.911 00:42:27 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:08:14.911 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:14.911 --rc genhtml_branch_coverage=1 00:08:14.911 --rc genhtml_function_coverage=1 00:08:14.911 --rc genhtml_legend=1 00:08:14.911 --rc geninfo_all_blocks=1 00:08:14.911 --rc geninfo_unexecuted_blocks=1 00:08:14.911 00:08:14.911 ' 00:08:14.911 00:42:27 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:08:14.911 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:14.911 --rc genhtml_branch_coverage=1 00:08:14.911 --rc genhtml_function_coverage=1 00:08:14.911 --rc genhtml_legend=1 00:08:14.911 --rc geninfo_all_blocks=1 00:08:14.911 --rc geninfo_unexecuted_blocks=1 00:08:14.911 00:08:14.911 ' 00:08:14.911 00:42:27 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:08:14.911 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:14.911 --rc genhtml_branch_coverage=1 00:08:14.911 --rc genhtml_function_coverage=1 00:08:14.911 --rc genhtml_legend=1 00:08:14.911 --rc geninfo_all_blocks=1 00:08:14.911 --rc geninfo_unexecuted_blocks=1 00:08:14.911 00:08:14.911 ' 00:08:14.911 00:42:27 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:08:14.911 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:14.911 --rc genhtml_branch_coverage=1 00:08:14.911 --rc genhtml_function_coverage=1 00:08:14.911 --rc genhtml_legend=1 00:08:14.911 --rc geninfo_all_blocks=1 00:08:14.911 --rc geninfo_unexecuted_blocks=1 00:08:14.911 00:08:14.911 ' 00:08:14.911 00:42:27 -- target/connect_disconnect.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:08:14.911 00:42:27 -- nvmf/common.sh@7 -- # uname -s 00:08:14.911 00:42:27 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:14.911 00:42:27 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:14.911 00:42:27 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:14.911 00:42:27 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:14.911 00:42:27 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:14.911 00:42:27 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:14.911 00:42:27 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:14.911 00:42:27 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:14.911 00:42:27 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:14.911 00:42:27 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:14.911 00:42:27 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:15939434-fa82-47b6-ae7c-b62ba203cee8 
00:08:14.911 00:42:27 -- nvmf/common.sh@18 -- # NVME_HOSTID=15939434-fa82-47b6-ae7c-b62ba203cee8 00:08:14.911 00:42:27 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:14.911 00:42:27 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:14.911 00:42:27 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:08:14.911 00:42:27 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:08:14.911 00:42:27 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:14.911 00:42:27 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:14.911 00:42:27 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:14.911 00:42:27 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:14.911 00:42:27 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:14.911 00:42:27 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:14.911 00:42:27 -- paths/export.sh@5 -- # export PATH 00:08:14.911 00:42:27 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:14.911 00:42:27 -- nvmf/common.sh@46 -- # : 0 00:08:14.911 00:42:27 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:08:14.911 00:42:27 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:08:14.911 00:42:27 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:08:14.911 00:42:27 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:14.912 00:42:27 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:14.912 00:42:27 -- nvmf/common.sh@32 -- # 
'[' -n '' ']' 00:08:14.912 00:42:27 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:08:14.912 00:42:27 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:08:14.912 00:42:27 -- target/connect_disconnect.sh@11 -- # MALLOC_BDEV_SIZE=64 00:08:14.912 00:42:27 -- target/connect_disconnect.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:08:14.912 00:42:27 -- target/connect_disconnect.sh@15 -- # nvmftestinit 00:08:14.912 00:42:27 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:08:14.912 00:42:27 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:14.912 00:42:27 -- nvmf/common.sh@436 -- # prepare_net_devs 00:08:14.912 00:42:27 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:08:14.912 00:42:27 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:08:14.912 00:42:27 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:14.912 00:42:27 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:14.912 00:42:27 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:14.912 00:42:27 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:08:14.912 00:42:27 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:08:14.912 00:42:27 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:08:14.912 00:42:27 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:08:14.912 00:42:27 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:08:14.912 00:42:27 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:08:14.912 00:42:27 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:14.912 00:42:27 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:14.912 00:42:27 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:08:14.912 00:42:27 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:08:14.912 00:42:27 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:08:14.912 00:42:27 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:08:14.912 00:42:27 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:08:14.912 00:42:27 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:14.912 00:42:27 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:08:14.912 00:42:27 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:08:14.912 00:42:27 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:08:14.912 00:42:27 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:08:14.912 00:42:27 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:08:14.912 00:42:27 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:08:14.912 Cannot find device "nvmf_tgt_br" 00:08:14.912 00:42:27 -- nvmf/common.sh@154 -- # true 00:08:14.912 00:42:27 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:08:14.912 Cannot find device "nvmf_tgt_br2" 00:08:14.912 00:42:27 -- nvmf/common.sh@155 -- # true 00:08:14.912 00:42:27 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:08:14.912 00:42:27 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:08:14.912 Cannot find device "nvmf_tgt_br" 00:08:14.912 00:42:27 -- nvmf/common.sh@157 -- # true 00:08:14.912 00:42:27 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:08:14.912 Cannot find device "nvmf_tgt_br2" 00:08:14.912 00:42:27 -- nvmf/common.sh@158 -- # true 00:08:14.912 00:42:27 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:08:14.912 00:42:27 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:08:14.912 00:42:27 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 
00:08:14.912 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:08:15.170 00:42:27 -- nvmf/common.sh@161 -- # true 00:08:15.170 00:42:27 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:08:15.170 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:08:15.170 00:42:27 -- nvmf/common.sh@162 -- # true 00:08:15.170 00:42:27 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:08:15.170 00:42:27 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:08:15.170 00:42:27 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:08:15.170 00:42:27 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:08:15.170 00:42:27 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:08:15.170 00:42:27 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:08:15.170 00:42:27 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:08:15.170 00:42:27 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:08:15.170 00:42:27 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:08:15.170 00:42:27 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:08:15.170 00:42:27 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:08:15.170 00:42:27 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:08:15.170 00:42:27 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:08:15.170 00:42:27 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:08:15.170 00:42:27 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:08:15.170 00:42:27 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:08:15.170 00:42:27 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:08:15.170 00:42:27 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:08:15.170 00:42:27 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:08:15.170 00:42:27 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:08:15.170 00:42:27 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:08:15.170 00:42:27 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:08:15.170 00:42:27 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:08:15.170 00:42:27 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:08:15.170 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:15.170 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.058 ms 00:08:15.170 00:08:15.170 --- 10.0.0.2 ping statistics --- 00:08:15.170 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:15.170 rtt min/avg/max/mdev = 0.058/0.058/0.058/0.000 ms 00:08:15.170 00:42:27 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:08:15.170 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:08:15.170 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.051 ms 00:08:15.170 00:08:15.170 --- 10.0.0.3 ping statistics --- 00:08:15.170 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:15.170 rtt min/avg/max/mdev = 0.051/0.051/0.051/0.000 ms 00:08:15.170 00:42:27 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:08:15.170 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:08:15.170 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.021 ms 00:08:15.170 00:08:15.170 --- 10.0.0.1 ping statistics --- 00:08:15.170 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:15.170 rtt min/avg/max/mdev = 0.021/0.021/0.021/0.000 ms 00:08:15.170 00:42:27 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:15.170 00:42:27 -- nvmf/common.sh@421 -- # return 0 00:08:15.171 00:42:27 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:08:15.171 00:42:27 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:15.171 00:42:27 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:08:15.171 00:42:27 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:08:15.171 00:42:27 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:15.171 00:42:27 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:08:15.171 00:42:27 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:08:15.171 00:42:27 -- target/connect_disconnect.sh@16 -- # nvmfappstart -m 0xF 00:08:15.171 00:42:27 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:08:15.171 00:42:27 -- common/autotest_common.sh@722 -- # xtrace_disable 00:08:15.171 00:42:27 -- common/autotest_common.sh@10 -- # set +x 00:08:15.171 00:42:27 -- nvmf/common.sh@469 -- # nvmfpid=74000 00:08:15.171 00:42:27 -- nvmf/common.sh@470 -- # waitforlisten 74000 00:08:15.171 00:42:27 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:08:15.171 00:42:27 -- common/autotest_common.sh@829 -- # '[' -z 74000 ']' 00:08:15.171 00:42:27 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:15.171 00:42:27 -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:15.171 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:15.171 00:42:27 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:15.171 00:42:27 -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:15.171 00:42:27 -- common/autotest_common.sh@10 -- # set +x 00:08:15.428 [2024-12-03 00:42:27.721207] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:08:15.428 [2024-12-03 00:42:27.721388] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:15.428 [2024-12-03 00:42:27.854598] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:15.428 [2024-12-03 00:42:27.922493] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:08:15.428 [2024-12-03 00:42:27.922969] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:15.428 [2024-12-03 00:42:27.923017] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:15.428 [2024-12-03 00:42:27.923227] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
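Both target launches print the same tracing hints right before the reactors come up. Nothing in this run needed them, but if a failure had to be analysed post-mortem, the snapshot could be taken as the notices suggest (a sketch; the copy destination is arbitrary):

    # live view of the nvmf target's tracepoints (shared-memory id 0, matching -i 0)
    spdk_trace -s nvmf -i 0

    # or keep the raw trace buffer for offline analysis
    cp /dev/shm/nvmf_trace.0 /tmp/nvmf_trace.0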
00:08:15.428 [2024-12-03 00:42:27.923437] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:08:15.428 [2024-12-03 00:42:27.923514] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:08:15.428 [2024-12-03 00:42:27.923609] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:08:15.428 [2024-12-03 00:42:27.923615] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:16.362 00:42:28 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:16.362 00:42:28 -- common/autotest_common.sh@862 -- # return 0 00:08:16.362 00:42:28 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:08:16.362 00:42:28 -- common/autotest_common.sh@728 -- # xtrace_disable 00:08:16.362 00:42:28 -- common/autotest_common.sh@10 -- # set +x 00:08:16.362 00:42:28 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:16.362 00:42:28 -- target/connect_disconnect.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:08:16.362 00:42:28 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:16.362 00:42:28 -- common/autotest_common.sh@10 -- # set +x 00:08:16.362 [2024-12-03 00:42:28.731033] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:16.362 00:42:28 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:16.362 00:42:28 -- target/connect_disconnect.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 00:08:16.362 00:42:28 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:16.362 00:42:28 -- common/autotest_common.sh@10 -- # set +x 00:08:16.362 00:42:28 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:16.362 00:42:28 -- target/connect_disconnect.sh@20 -- # bdev=Malloc0 00:08:16.362 00:42:28 -- target/connect_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:08:16.362 00:42:28 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:16.362 00:42:28 -- common/autotest_common.sh@10 -- # set +x 00:08:16.362 00:42:28 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:16.362 00:42:28 -- target/connect_disconnect.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:08:16.362 00:42:28 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:16.362 00:42:28 -- common/autotest_common.sh@10 -- # set +x 00:08:16.362 00:42:28 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:16.362 00:42:28 -- target/connect_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:16.362 00:42:28 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:16.362 00:42:28 -- common/autotest_common.sh@10 -- # set +x 00:08:16.362 [2024-12-03 00:42:28.807559] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:16.362 00:42:28 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:16.362 00:42:28 -- target/connect_disconnect.sh@26 -- # '[' 1 -eq 1 ']' 00:08:16.362 00:42:28 -- target/connect_disconnect.sh@27 -- # num_iterations=100 00:08:16.362 00:42:28 -- target/connect_disconnect.sh@29 -- # NVME_CONNECT='nvme connect -i 8' 00:08:16.362 00:42:28 -- target/connect_disconnect.sh@34 -- # set +x 00:08:18.944 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:20.847 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:23.378 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:25.915 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 
00:08:27.820 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:30.349 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:32.253 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:34.826 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:36.739 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:39.274 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:41.177 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:43.711 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:45.617 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:48.150 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:50.055 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:52.590 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:55.123 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:57.042 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:59.617 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:01.521 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:04.049 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:05.953 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:08.483 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:10.385 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:13.015 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:14.918 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:17.456 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:19.361 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:21.896 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:23.799 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:26.333 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:28.236 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:30.771 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:32.677 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:35.209 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:37.742 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:39.641 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:42.192 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:44.093 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:46.626 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:48.563 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:51.103 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:53.633 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:55.536 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:58.070 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:59.974 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:02.507 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:05.038 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:06.938 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:09.469 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:11.373 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:13.904 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:15.807 NQN:nqn.2016-06.io.spdk:cnode1 
disconnected 1 controller(s) 00:10:18.344 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:20.244 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:22.770 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:25.330 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:27.239 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:29.770 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:31.673 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:34.203 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:36.109 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:38.644 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:40.547 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:43.080 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:44.981 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:47.512 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:49.415 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:51.949 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:54.479 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:56.382 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:58.913 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:00.816 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:03.379 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:05.274 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:07.805 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:10.339 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:12.243 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:14.791 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:16.694 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:19.225 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:21.126 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:23.657 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:26.188 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:28.721 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:30.624 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:33.160 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:35.062 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:37.595 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:39.548 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:42.095 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:43.994 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:46.525 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:48.427 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:50.960 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:52.864 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:55.396 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:57.931 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:59.837 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:02.365 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:02.365 00:46:14 -- target/connect_disconnect.sh@43 -- # trap - SIGINT SIGTERM EXIT 
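The long run of "disconnected 1 controller(s)" lines above is connect_disconnect.sh attaching to and detaching from a single malloc-backed subsystem one hundred times (num_iterations=100, NVME_CONNECT='nvme connect -i 8'). A condensed sketch of the provisioning and the loop, assuming scripts/rpc.py in place of the rpc_cmd wrapper and leaving out the script's own synchronisation and error handling:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    nqn=nqn.2016-06.io.spdk:cnode1

    # transport plus one 64 MB / 512-byte-block malloc bdev behind cnode1 on 10.0.0.2:4420
    $rpc nvmf_create_transport -t tcp -o -u 8192 -c 0
    $rpc bdev_malloc_create 64 512                      # returns bdev name Malloc0
    $rpc nvmf_create_subsystem "$nqn" -a -s SPDKISFASTANDAWESOME
    $rpc nvmf_subsystem_add_ns "$nqn" Malloc0
    $rpc nvmf_subsystem_add_listener "$nqn" -t tcp -a 10.0.0.2 -s 4420

    # 100 connect/disconnect cycles; each disconnect prints one of the NQN lines above
    for i in $(seq 1 100); do
        nvme connect -i 8 -t tcp -n "$nqn" -a 10.0.0.2 -s 4420 \
             --hostnqn=nqn.2014-08.org.nvmexpress:uuid:15939434-fa82-47b6-ae7c-b62ba203cee8 \
             --hostid=15939434-fa82-47b6-ae7c-b62ba203cee8
        nvme disconnect -n "$nqn"
    done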
00:12:02.365 00:46:14 -- target/connect_disconnect.sh@45 -- # nvmftestfini 00:12:02.365 00:46:14 -- nvmf/common.sh@476 -- # nvmfcleanup 00:12:02.365 00:46:14 -- nvmf/common.sh@116 -- # sync 00:12:02.365 00:46:14 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:12:02.365 00:46:14 -- nvmf/common.sh@119 -- # set +e 00:12:02.365 00:46:14 -- nvmf/common.sh@120 -- # for i in {1..20} 00:12:02.365 00:46:14 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:12:02.365 rmmod nvme_tcp 00:12:02.365 rmmod nvme_fabrics 00:12:02.365 rmmod nvme_keyring 00:12:02.365 00:46:14 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:12:02.365 00:46:14 -- nvmf/common.sh@123 -- # set -e 00:12:02.365 00:46:14 -- nvmf/common.sh@124 -- # return 0 00:12:02.365 00:46:14 -- nvmf/common.sh@477 -- # '[' -n 74000 ']' 00:12:02.365 00:46:14 -- nvmf/common.sh@478 -- # killprocess 74000 00:12:02.365 00:46:14 -- common/autotest_common.sh@936 -- # '[' -z 74000 ']' 00:12:02.365 00:46:14 -- common/autotest_common.sh@940 -- # kill -0 74000 00:12:02.365 00:46:14 -- common/autotest_common.sh@941 -- # uname 00:12:02.365 00:46:14 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:12:02.365 00:46:14 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 74000 00:12:02.365 killing process with pid 74000 00:12:02.365 00:46:14 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:12:02.365 00:46:14 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:12:02.365 00:46:14 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 74000' 00:12:02.365 00:46:14 -- common/autotest_common.sh@955 -- # kill 74000 00:12:02.365 00:46:14 -- common/autotest_common.sh@960 -- # wait 74000 00:12:02.365 00:46:14 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:12:02.365 00:46:14 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:12:02.365 00:46:14 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:12:02.365 00:46:14 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:12:02.365 00:46:14 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:12:02.365 00:46:14 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:02.365 00:46:14 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:02.365 00:46:14 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:02.365 00:46:14 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:12:02.365 00:12:02.365 real 3m47.748s 00:12:02.365 user 14m51.716s 00:12:02.365 sys 0m18.222s 00:12:02.365 00:46:14 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:12:02.365 00:46:14 -- common/autotest_common.sh@10 -- # set +x 00:12:02.365 ************************************ 00:12:02.365 END TEST nvmf_connect_disconnect 00:12:02.365 ************************************ 00:12:02.625 00:46:14 -- nvmf/nvmf.sh@28 -- # run_test nvmf_multitarget /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:12:02.625 00:46:14 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:12:02.625 00:46:14 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:12:02.625 00:46:14 -- common/autotest_common.sh@10 -- # set +x 00:12:02.625 ************************************ 00:12:02.625 START TEST nvmf_multitarget 00:12:02.625 ************************************ 00:12:02.625 00:46:14 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:12:02.625 * Looking for test storage... 
00:12:02.625 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:12:02.625 00:46:14 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:12:02.625 00:46:14 -- common/autotest_common.sh@1690 -- # lcov --version 00:12:02.625 00:46:14 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:12:02.625 00:46:15 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:12:02.625 00:46:15 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:12:02.625 00:46:15 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:12:02.625 00:46:15 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:12:02.625 00:46:15 -- scripts/common.sh@335 -- # IFS=.-: 00:12:02.625 00:46:15 -- scripts/common.sh@335 -- # read -ra ver1 00:12:02.625 00:46:15 -- scripts/common.sh@336 -- # IFS=.-: 00:12:02.625 00:46:15 -- scripts/common.sh@336 -- # read -ra ver2 00:12:02.625 00:46:15 -- scripts/common.sh@337 -- # local 'op=<' 00:12:02.625 00:46:15 -- scripts/common.sh@339 -- # ver1_l=2 00:12:02.625 00:46:15 -- scripts/common.sh@340 -- # ver2_l=1 00:12:02.625 00:46:15 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:12:02.625 00:46:15 -- scripts/common.sh@343 -- # case "$op" in 00:12:02.625 00:46:15 -- scripts/common.sh@344 -- # : 1 00:12:02.625 00:46:15 -- scripts/common.sh@363 -- # (( v = 0 )) 00:12:02.625 00:46:15 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:12:02.625 00:46:15 -- scripts/common.sh@364 -- # decimal 1 00:12:02.625 00:46:15 -- scripts/common.sh@352 -- # local d=1 00:12:02.625 00:46:15 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:02.625 00:46:15 -- scripts/common.sh@354 -- # echo 1 00:12:02.625 00:46:15 -- scripts/common.sh@364 -- # ver1[v]=1 00:12:02.625 00:46:15 -- scripts/common.sh@365 -- # decimal 2 00:12:02.625 00:46:15 -- scripts/common.sh@352 -- # local d=2 00:12:02.625 00:46:15 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:02.625 00:46:15 -- scripts/common.sh@354 -- # echo 2 00:12:02.625 00:46:15 -- scripts/common.sh@365 -- # ver2[v]=2 00:12:02.625 00:46:15 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:12:02.625 00:46:15 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:12:02.625 00:46:15 -- scripts/common.sh@367 -- # return 0 00:12:02.625 00:46:15 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:02.625 00:46:15 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:12:02.625 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:02.625 --rc genhtml_branch_coverage=1 00:12:02.625 --rc genhtml_function_coverage=1 00:12:02.625 --rc genhtml_legend=1 00:12:02.625 --rc geninfo_all_blocks=1 00:12:02.625 --rc geninfo_unexecuted_blocks=1 00:12:02.625 00:12:02.625 ' 00:12:02.625 00:46:15 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:12:02.625 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:02.625 --rc genhtml_branch_coverage=1 00:12:02.625 --rc genhtml_function_coverage=1 00:12:02.625 --rc genhtml_legend=1 00:12:02.625 --rc geninfo_all_blocks=1 00:12:02.625 --rc geninfo_unexecuted_blocks=1 00:12:02.625 00:12:02.625 ' 00:12:02.625 00:46:15 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:12:02.625 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:02.625 --rc genhtml_branch_coverage=1 00:12:02.625 --rc genhtml_function_coverage=1 00:12:02.625 --rc genhtml_legend=1 00:12:02.625 --rc geninfo_all_blocks=1 00:12:02.625 --rc geninfo_unexecuted_blocks=1 00:12:02.625 00:12:02.625 ' 00:12:02.625 
00:46:15 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:12:02.625 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:02.625 --rc genhtml_branch_coverage=1 00:12:02.625 --rc genhtml_function_coverage=1 00:12:02.625 --rc genhtml_legend=1 00:12:02.625 --rc geninfo_all_blocks=1 00:12:02.625 --rc geninfo_unexecuted_blocks=1 00:12:02.625 00:12:02.625 ' 00:12:02.625 00:46:15 -- target/multitarget.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:12:02.625 00:46:15 -- nvmf/common.sh@7 -- # uname -s 00:12:02.625 00:46:15 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:02.625 00:46:15 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:02.625 00:46:15 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:02.625 00:46:15 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:02.625 00:46:15 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:02.625 00:46:15 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:02.625 00:46:15 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:02.625 00:46:15 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:02.626 00:46:15 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:02.626 00:46:15 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:02.626 00:46:15 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:15939434-fa82-47b6-ae7c-b62ba203cee8 00:12:02.626 00:46:15 -- nvmf/common.sh@18 -- # NVME_HOSTID=15939434-fa82-47b6-ae7c-b62ba203cee8 00:12:02.626 00:46:15 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:02.626 00:46:15 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:02.626 00:46:15 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:12:02.626 00:46:15 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:12:02.626 00:46:15 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:02.626 00:46:15 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:02.626 00:46:15 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:02.626 00:46:15 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:02.626 00:46:15 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:02.626 00:46:15 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:02.626 00:46:15 -- paths/export.sh@5 -- # export PATH 00:12:02.626 00:46:15 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:02.626 00:46:15 -- nvmf/common.sh@46 -- # : 0 00:12:02.626 00:46:15 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:12:02.626 00:46:15 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:12:02.626 00:46:15 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:12:02.626 00:46:15 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:02.626 00:46:15 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:02.626 00:46:15 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:12:02.626 00:46:15 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:12:02.626 00:46:15 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:12:02.626 00:46:15 -- target/multitarget.sh@13 -- # rpc_py=/home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py 00:12:02.626 00:46:15 -- target/multitarget.sh@15 -- # nvmftestinit 00:12:02.626 00:46:15 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:12:02.626 00:46:15 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:02.626 00:46:15 -- nvmf/common.sh@436 -- # prepare_net_devs 00:12:02.626 00:46:15 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:12:02.626 00:46:15 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:12:02.626 00:46:15 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:02.626 00:46:15 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:02.626 00:46:15 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:02.626 00:46:15 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:12:02.626 00:46:15 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:12:02.626 00:46:15 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:12:02.626 00:46:15 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:12:02.626 00:46:15 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:12:02.626 00:46:15 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:12:02.626 00:46:15 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:02.626 00:46:15 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:02.626 00:46:15 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:12:02.626 00:46:15 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:12:02.626 00:46:15 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:12:02.626 00:46:15 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:12:02.626 00:46:15 -- nvmf/common.sh@146 -- # 
NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:12:02.626 00:46:15 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:02.626 00:46:15 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:12:02.626 00:46:15 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:12:02.626 00:46:15 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:12:02.626 00:46:15 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:12:02.626 00:46:15 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:12:02.626 00:46:15 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:12:02.626 Cannot find device "nvmf_tgt_br" 00:12:02.626 00:46:15 -- nvmf/common.sh@154 -- # true 00:12:02.626 00:46:15 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:12:02.626 Cannot find device "nvmf_tgt_br2" 00:12:02.626 00:46:15 -- nvmf/common.sh@155 -- # true 00:12:02.626 00:46:15 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:12:02.894 00:46:15 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:12:02.894 Cannot find device "nvmf_tgt_br" 00:12:02.894 00:46:15 -- nvmf/common.sh@157 -- # true 00:12:02.894 00:46:15 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:12:02.894 Cannot find device "nvmf_tgt_br2" 00:12:02.894 00:46:15 -- nvmf/common.sh@158 -- # true 00:12:02.894 00:46:15 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:12:02.894 00:46:15 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:12:02.894 00:46:15 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:12:02.894 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:12:02.894 00:46:15 -- nvmf/common.sh@161 -- # true 00:12:02.894 00:46:15 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:12:02.894 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:12:02.894 00:46:15 -- nvmf/common.sh@162 -- # true 00:12:02.894 00:46:15 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:12:02.894 00:46:15 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:12:02.894 00:46:15 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:12:02.894 00:46:15 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:12:02.894 00:46:15 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:12:02.894 00:46:15 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:12:02.894 00:46:15 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:12:02.894 00:46:15 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:12:02.894 00:46:15 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:12:02.894 00:46:15 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:12:02.894 00:46:15 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:12:02.894 00:46:15 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:12:02.894 00:46:15 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:12:02.894 00:46:15 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:12:02.894 00:46:15 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:12:02.894 00:46:15 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip 
link set lo up 00:12:02.895 00:46:15 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:12:02.895 00:46:15 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:12:02.895 00:46:15 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:12:02.895 00:46:15 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:12:02.895 00:46:15 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:12:02.895 00:46:15 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:12:02.895 00:46:15 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:12:02.895 00:46:15 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:12:02.895 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:02.895 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.063 ms 00:12:02.895 00:12:02.895 --- 10.0.0.2 ping statistics --- 00:12:02.895 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:02.895 rtt min/avg/max/mdev = 0.063/0.063/0.063/0.000 ms 00:12:02.895 00:46:15 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:12:02.895 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:12:02.895 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.038 ms 00:12:02.895 00:12:02.895 --- 10.0.0.3 ping statistics --- 00:12:02.895 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:02.895 rtt min/avg/max/mdev = 0.038/0.038/0.038/0.000 ms 00:12:02.895 00:46:15 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:12:02.895 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:12:02.895 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.022 ms 00:12:02.895 00:12:02.895 --- 10.0.0.1 ping statistics --- 00:12:02.895 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:02.895 rtt min/avg/max/mdev = 0.022/0.022/0.022/0.000 ms 00:12:02.895 00:46:15 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:02.895 00:46:15 -- nvmf/common.sh@421 -- # return 0 00:12:02.895 00:46:15 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:12:02.895 00:46:15 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:02.895 00:46:15 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:12:02.895 00:46:15 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:12:02.895 00:46:15 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:02.895 00:46:15 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:12:02.895 00:46:15 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:12:03.157 00:46:15 -- target/multitarget.sh@16 -- # nvmfappstart -m 0xF 00:12:03.157 00:46:15 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:12:03.157 00:46:15 -- common/autotest_common.sh@722 -- # xtrace_disable 00:12:03.157 00:46:15 -- common/autotest_common.sh@10 -- # set +x 00:12:03.157 00:46:15 -- nvmf/common.sh@469 -- # nvmfpid=77806 00:12:03.157 00:46:15 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:03.157 00:46:15 -- nvmf/common.sh@470 -- # waitforlisten 77806 00:12:03.158 00:46:15 -- common/autotest_common.sh@829 -- # '[' -z 77806 ']' 00:12:03.158 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
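Note: the nvmf_veth_init sequence traced above builds a small veth/bridge topology so the initiator (10.0.0.1) can reach the target namespace over TCP port 4420. A minimal standalone sketch of the same wiring (illustrative only, not part of the captured trace; it covers a single target interface, while the test also adds nvmf_tgt_if2 at 10.0.0.3, and it assumes root privileges):

    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
    ip link add nvmf_br type bridge
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br master nvmf_br
    for dev in nvmf_init_if nvmf_init_br nvmf_tgt_br nvmf_br; do ip link set "$dev" up; done
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT   # allow NVMe/TCP toward the initiator side
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
    ping -c 1 10.0.0.2                                                  # initiator -> target namespace, as in the trace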
00:12:03.158 00:46:15 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:03.158 00:46:15 -- common/autotest_common.sh@834 -- # local max_retries=100 00:12:03.158 00:46:15 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:03.158 00:46:15 -- common/autotest_common.sh@838 -- # xtrace_disable 00:12:03.158 00:46:15 -- common/autotest_common.sh@10 -- # set +x 00:12:03.158 [2024-12-03 00:46:15.464055] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:12:03.158 [2024-12-03 00:46:15.464138] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:03.158 [2024-12-03 00:46:15.607221] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:03.417 [2024-12-03 00:46:15.701375] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:12:03.417 [2024-12-03 00:46:15.701546] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:03.417 [2024-12-03 00:46:15.701565] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:03.417 [2024-12-03 00:46:15.701574] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:03.417 [2024-12-03 00:46:15.701690] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:12:03.417 [2024-12-03 00:46:15.701995] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:12:03.417 [2024-12-03 00:46:15.702663] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:03.417 [2024-12-03 00:46:15.702792] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:12:03.984 00:46:16 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:12:03.985 00:46:16 -- common/autotest_common.sh@862 -- # return 0 00:12:03.985 00:46:16 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:12:03.985 00:46:16 -- common/autotest_common.sh@728 -- # xtrace_disable 00:12:03.985 00:46:16 -- common/autotest_common.sh@10 -- # set +x 00:12:03.985 00:46:16 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:03.985 00:46:16 -- target/multitarget.sh@18 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:12:03.985 00:46:16 -- target/multitarget.sh@21 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:12:03.985 00:46:16 -- target/multitarget.sh@21 -- # jq length 00:12:04.243 00:46:16 -- target/multitarget.sh@21 -- # '[' 1 '!=' 1 ']' 00:12:04.243 00:46:16 -- target/multitarget.sh@25 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_1 -s 32 00:12:04.243 "nvmf_tgt_1" 00:12:04.243 00:46:16 -- target/multitarget.sh@26 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_2 -s 32 00:12:04.501 "nvmf_tgt_2" 00:12:04.501 00:46:16 -- target/multitarget.sh@28 -- # jq length 00:12:04.501 00:46:16 -- target/multitarget.sh@28 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:12:04.501 00:46:16 -- target/multitarget.sh@28 -- # '[' 3 '!=' 3 ']' 00:12:04.501 00:46:16 -- target/multitarget.sh@32 -- # 
/home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_1 00:12:04.760 true 00:12:04.760 00:46:17 -- target/multitarget.sh@33 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_2 00:12:04.760 true 00:12:04.760 00:46:17 -- target/multitarget.sh@35 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:12:04.760 00:46:17 -- target/multitarget.sh@35 -- # jq length 00:12:05.019 00:46:17 -- target/multitarget.sh@35 -- # '[' 1 '!=' 1 ']' 00:12:05.019 00:46:17 -- target/multitarget.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:12:05.019 00:46:17 -- target/multitarget.sh@41 -- # nvmftestfini 00:12:05.019 00:46:17 -- nvmf/common.sh@476 -- # nvmfcleanup 00:12:05.019 00:46:17 -- nvmf/common.sh@116 -- # sync 00:12:05.019 00:46:17 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:12:05.019 00:46:17 -- nvmf/common.sh@119 -- # set +e 00:12:05.019 00:46:17 -- nvmf/common.sh@120 -- # for i in {1..20} 00:12:05.019 00:46:17 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:12:05.019 rmmod nvme_tcp 00:12:05.019 rmmod nvme_fabrics 00:12:05.019 rmmod nvme_keyring 00:12:05.019 00:46:17 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:12:05.019 00:46:17 -- nvmf/common.sh@123 -- # set -e 00:12:05.019 00:46:17 -- nvmf/common.sh@124 -- # return 0 00:12:05.019 00:46:17 -- nvmf/common.sh@477 -- # '[' -n 77806 ']' 00:12:05.019 00:46:17 -- nvmf/common.sh@478 -- # killprocess 77806 00:12:05.019 00:46:17 -- common/autotest_common.sh@936 -- # '[' -z 77806 ']' 00:12:05.019 00:46:17 -- common/autotest_common.sh@940 -- # kill -0 77806 00:12:05.019 00:46:17 -- common/autotest_common.sh@941 -- # uname 00:12:05.019 00:46:17 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:12:05.019 00:46:17 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 77806 00:12:05.019 killing process with pid 77806 00:12:05.019 00:46:17 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:12:05.019 00:46:17 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:12:05.019 00:46:17 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 77806' 00:12:05.019 00:46:17 -- common/autotest_common.sh@955 -- # kill 77806 00:12:05.019 00:46:17 -- common/autotest_common.sh@960 -- # wait 77806 00:12:05.276 00:46:17 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:12:05.277 00:46:17 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:12:05.277 00:46:17 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:12:05.277 00:46:17 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:12:05.277 00:46:17 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:12:05.277 00:46:17 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:05.277 00:46:17 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:05.277 00:46:17 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:05.535 00:46:17 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:12:05.535 ************************************ 00:12:05.535 END TEST nvmf_multitarget 00:12:05.535 ************************************ 00:12:05.535 00:12:05.535 real 0m2.920s 00:12:05.535 user 0m9.369s 00:12:05.535 sys 0m0.724s 00:12:05.535 00:46:17 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:12:05.535 00:46:17 -- common/autotest_common.sh@10 -- # set +x 00:12:05.535 00:46:17 -- nvmf/nvmf.sh@29 -- # run_test nvmf_rpc /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpc.sh --transport=tcp 
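Note: the nvmf_multitarget run that concluded just above drives the multitarget_rpc.py helper; the whole flow reduces to creating extra targets, counting them, and deleting them again. An illustrative recap (not captured output), using the helper path and target names seen in the trace:

    rpc_py=/home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py
    $rpc_py nvmf_get_targets | jq length           # 1: only the default target exists
    $rpc_py nvmf_create_target -n nvmf_tgt_1 -s 32
    $rpc_py nvmf_create_target -n nvmf_tgt_2 -s 32
    $rpc_py nvmf_get_targets | jq length           # 3 after the two creates
    $rpc_py nvmf_delete_target -n nvmf_tgt_1
    $rpc_py nvmf_delete_target -n nvmf_tgt_2
    $rpc_py nvmf_get_targets | jq length           # back to 1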
00:12:05.535 00:46:17 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:12:05.535 00:46:17 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:12:05.535 00:46:17 -- common/autotest_common.sh@10 -- # set +x 00:12:05.535 ************************************ 00:12:05.535 START TEST nvmf_rpc 00:12:05.535 ************************************ 00:12:05.535 00:46:17 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:12:05.535 * Looking for test storage... 00:12:05.535 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:12:05.535 00:46:17 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:12:05.535 00:46:17 -- common/autotest_common.sh@1690 -- # lcov --version 00:12:05.535 00:46:17 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:12:05.535 00:46:18 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:12:05.535 00:46:18 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:12:05.535 00:46:18 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:12:05.535 00:46:18 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:12:05.535 00:46:18 -- scripts/common.sh@335 -- # IFS=.-: 00:12:05.535 00:46:18 -- scripts/common.sh@335 -- # read -ra ver1 00:12:05.535 00:46:18 -- scripts/common.sh@336 -- # IFS=.-: 00:12:05.535 00:46:18 -- scripts/common.sh@336 -- # read -ra ver2 00:12:05.535 00:46:18 -- scripts/common.sh@337 -- # local 'op=<' 00:12:05.535 00:46:18 -- scripts/common.sh@339 -- # ver1_l=2 00:12:05.535 00:46:18 -- scripts/common.sh@340 -- # ver2_l=1 00:12:05.535 00:46:18 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:12:05.535 00:46:18 -- scripts/common.sh@343 -- # case "$op" in 00:12:05.535 00:46:18 -- scripts/common.sh@344 -- # : 1 00:12:05.535 00:46:18 -- scripts/common.sh@363 -- # (( v = 0 )) 00:12:05.535 00:46:18 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:12:05.535 00:46:18 -- scripts/common.sh@364 -- # decimal 1 00:12:05.535 00:46:18 -- scripts/common.sh@352 -- # local d=1 00:12:05.535 00:46:18 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:05.535 00:46:18 -- scripts/common.sh@354 -- # echo 1 00:12:05.535 00:46:18 -- scripts/common.sh@364 -- # ver1[v]=1 00:12:05.535 00:46:18 -- scripts/common.sh@365 -- # decimal 2 00:12:05.535 00:46:18 -- scripts/common.sh@352 -- # local d=2 00:12:05.535 00:46:18 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:05.535 00:46:18 -- scripts/common.sh@354 -- # echo 2 00:12:05.535 00:46:18 -- scripts/common.sh@365 -- # ver2[v]=2 00:12:05.535 00:46:18 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:12:05.535 00:46:18 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:12:05.535 00:46:18 -- scripts/common.sh@367 -- # return 0 00:12:05.535 00:46:18 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:05.535 00:46:18 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:12:05.535 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:05.535 --rc genhtml_branch_coverage=1 00:12:05.535 --rc genhtml_function_coverage=1 00:12:05.535 --rc genhtml_legend=1 00:12:05.535 --rc geninfo_all_blocks=1 00:12:05.535 --rc geninfo_unexecuted_blocks=1 00:12:05.535 00:12:05.535 ' 00:12:05.535 00:46:18 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:12:05.535 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:05.535 --rc genhtml_branch_coverage=1 00:12:05.535 --rc genhtml_function_coverage=1 00:12:05.535 --rc genhtml_legend=1 00:12:05.535 --rc geninfo_all_blocks=1 00:12:05.535 --rc geninfo_unexecuted_blocks=1 00:12:05.535 00:12:05.535 ' 00:12:05.535 00:46:18 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:12:05.535 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:05.535 --rc genhtml_branch_coverage=1 00:12:05.535 --rc genhtml_function_coverage=1 00:12:05.535 --rc genhtml_legend=1 00:12:05.535 --rc geninfo_all_blocks=1 00:12:05.535 --rc geninfo_unexecuted_blocks=1 00:12:05.535 00:12:05.535 ' 00:12:05.535 00:46:18 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:12:05.535 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:05.535 --rc genhtml_branch_coverage=1 00:12:05.535 --rc genhtml_function_coverage=1 00:12:05.535 --rc genhtml_legend=1 00:12:05.535 --rc geninfo_all_blocks=1 00:12:05.535 --rc geninfo_unexecuted_blocks=1 00:12:05.535 00:12:05.535 ' 00:12:05.535 00:46:18 -- target/rpc.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:12:05.535 00:46:18 -- nvmf/common.sh@7 -- # uname -s 00:12:05.535 00:46:18 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:05.535 00:46:18 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:05.535 00:46:18 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:05.535 00:46:18 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:05.535 00:46:18 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:05.535 00:46:18 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:05.535 00:46:18 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:05.535 00:46:18 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:05.535 00:46:18 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:05.535 00:46:18 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:05.794 00:46:18 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:15939434-fa82-47b6-ae7c-b62ba203cee8 00:12:05.794 
00:46:18 -- nvmf/common.sh@18 -- # NVME_HOSTID=15939434-fa82-47b6-ae7c-b62ba203cee8 00:12:05.794 00:46:18 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:05.794 00:46:18 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:05.794 00:46:18 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:12:05.794 00:46:18 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:12:05.794 00:46:18 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:05.794 00:46:18 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:05.794 00:46:18 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:05.794 00:46:18 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:05.794 00:46:18 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:05.794 00:46:18 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:05.794 00:46:18 -- paths/export.sh@5 -- # export PATH 00:12:05.794 00:46:18 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:05.794 00:46:18 -- nvmf/common.sh@46 -- # : 0 00:12:05.794 00:46:18 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:12:05.794 00:46:18 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:12:05.794 00:46:18 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:12:05.794 00:46:18 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:05.794 00:46:18 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:05.795 00:46:18 -- nvmf/common.sh@32 -- # '[' -n '' ']' 
00:12:05.795 00:46:18 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:12:05.795 00:46:18 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:12:05.795 00:46:18 -- target/rpc.sh@11 -- # loops=5 00:12:05.795 00:46:18 -- target/rpc.sh@23 -- # nvmftestinit 00:12:05.795 00:46:18 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:12:05.795 00:46:18 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:05.795 00:46:18 -- nvmf/common.sh@436 -- # prepare_net_devs 00:12:05.795 00:46:18 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:12:05.795 00:46:18 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:12:05.795 00:46:18 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:05.795 00:46:18 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:05.795 00:46:18 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:05.795 00:46:18 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:12:05.795 00:46:18 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:12:05.795 00:46:18 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:12:05.795 00:46:18 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:12:05.795 00:46:18 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:12:05.795 00:46:18 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:12:05.795 00:46:18 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:05.795 00:46:18 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:05.795 00:46:18 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:12:05.795 00:46:18 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:12:05.795 00:46:18 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:12:05.795 00:46:18 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:12:05.795 00:46:18 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:12:05.795 00:46:18 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:05.795 00:46:18 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:12:05.795 00:46:18 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:12:05.795 00:46:18 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:12:05.795 00:46:18 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:12:05.795 00:46:18 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:12:05.795 00:46:18 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:12:05.795 Cannot find device "nvmf_tgt_br" 00:12:05.795 00:46:18 -- nvmf/common.sh@154 -- # true 00:12:05.795 00:46:18 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:12:05.795 Cannot find device "nvmf_tgt_br2" 00:12:05.795 00:46:18 -- nvmf/common.sh@155 -- # true 00:12:05.795 00:46:18 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:12:05.795 00:46:18 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:12:05.795 Cannot find device "nvmf_tgt_br" 00:12:05.795 00:46:18 -- nvmf/common.sh@157 -- # true 00:12:05.795 00:46:18 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:12:05.795 Cannot find device "nvmf_tgt_br2" 00:12:05.795 00:46:18 -- nvmf/common.sh@158 -- # true 00:12:05.795 00:46:18 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:12:05.795 00:46:18 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:12:05.795 00:46:18 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:12:05.795 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:12:05.795 00:46:18 -- nvmf/common.sh@161 -- # true 
00:12:05.795 00:46:18 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:12:05.795 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:12:05.795 00:46:18 -- nvmf/common.sh@162 -- # true 00:12:05.795 00:46:18 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:12:05.795 00:46:18 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:12:05.795 00:46:18 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:12:05.795 00:46:18 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:12:05.795 00:46:18 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:12:05.795 00:46:18 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:12:05.795 00:46:18 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:12:05.795 00:46:18 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:12:05.795 00:46:18 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:12:05.795 00:46:18 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:12:05.795 00:46:18 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:12:05.795 00:46:18 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:12:05.795 00:46:18 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:12:05.795 00:46:18 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:12:05.795 00:46:18 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:12:05.795 00:46:18 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:12:06.054 00:46:18 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:12:06.054 00:46:18 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:12:06.054 00:46:18 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:12:06.054 00:46:18 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:12:06.054 00:46:18 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:12:06.054 00:46:18 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:12:06.054 00:46:18 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:12:06.054 00:46:18 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:12:06.054 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:06.054 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.091 ms 00:12:06.054 00:12:06.054 --- 10.0.0.2 ping statistics --- 00:12:06.054 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:06.054 rtt min/avg/max/mdev = 0.091/0.091/0.091/0.000 ms 00:12:06.054 00:46:18 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:12:06.054 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:12:06.054 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.051 ms 00:12:06.054 00:12:06.054 --- 10.0.0.3 ping statistics --- 00:12:06.054 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:06.054 rtt min/avg/max/mdev = 0.051/0.051/0.051/0.000 ms 00:12:06.054 00:46:18 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:12:06.054 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:12:06.054 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.029 ms 00:12:06.054 00:12:06.054 --- 10.0.0.1 ping statistics --- 00:12:06.054 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:06.054 rtt min/avg/max/mdev = 0.029/0.029/0.029/0.000 ms 00:12:06.054 00:46:18 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:06.054 00:46:18 -- nvmf/common.sh@421 -- # return 0 00:12:06.054 00:46:18 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:12:06.054 00:46:18 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:06.054 00:46:18 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:12:06.054 00:46:18 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:12:06.054 00:46:18 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:06.054 00:46:18 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:12:06.054 00:46:18 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:12:06.054 00:46:18 -- target/rpc.sh@24 -- # nvmfappstart -m 0xF 00:12:06.054 00:46:18 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:12:06.054 00:46:18 -- common/autotest_common.sh@722 -- # xtrace_disable 00:12:06.054 00:46:18 -- common/autotest_common.sh@10 -- # set +x 00:12:06.054 00:46:18 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:06.054 00:46:18 -- nvmf/common.sh@469 -- # nvmfpid=78041 00:12:06.054 00:46:18 -- nvmf/common.sh@470 -- # waitforlisten 78041 00:12:06.054 00:46:18 -- common/autotest_common.sh@829 -- # '[' -z 78041 ']' 00:12:06.054 00:46:18 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:06.054 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:06.054 00:46:18 -- common/autotest_common.sh@834 -- # local max_retries=100 00:12:06.054 00:46:18 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:06.054 00:46:18 -- common/autotest_common.sh@838 -- # xtrace_disable 00:12:06.054 00:46:18 -- common/autotest_common.sh@10 -- # set +x 00:12:06.054 [2024-12-03 00:46:18.457589] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:12:06.054 [2024-12-03 00:46:18.457653] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:06.312 [2024-12-03 00:46:18.592260] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:06.312 [2024-12-03 00:46:18.664924] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:12:06.312 [2024-12-03 00:46:18.665439] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:06.312 [2024-12-03 00:46:18.665577] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:06.312 [2024-12-03 00:46:18.665812] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
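Note: the start-up notices above come from nvmf_tgt being launched inside the test namespace (nvmf/common.sh@468) and from the harness waiting for its RPC socket. A hedged sketch of that launch plus a readiness poll (illustrative; the rpc.py polling loop is an assumption standing in for the harness's waitforlisten helper):

    ip netns exec nvmf_tgt_ns_spdk \
        /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
    nvmfpid=$!
    # block until the SPDK RPC socket answers before issuing any RPCs
    until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
        sleep 0.5
    done
    echo "nvmf_tgt (pid $nvmfpid) is ready"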
00:12:06.312 [2024-12-03 00:46:18.666098] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:12:06.312 [2024-12-03 00:46:18.666214] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:12:06.312 [2024-12-03 00:46:18.666291] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:12:06.312 [2024-12-03 00:46:18.666294] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:07.248 00:46:19 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:12:07.248 00:46:19 -- common/autotest_common.sh@862 -- # return 0 00:12:07.248 00:46:19 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:12:07.248 00:46:19 -- common/autotest_common.sh@728 -- # xtrace_disable 00:12:07.248 00:46:19 -- common/autotest_common.sh@10 -- # set +x 00:12:07.248 00:46:19 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:07.248 00:46:19 -- target/rpc.sh@26 -- # rpc_cmd nvmf_get_stats 00:12:07.248 00:46:19 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:07.248 00:46:19 -- common/autotest_common.sh@10 -- # set +x 00:12:07.248 00:46:19 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:07.248 00:46:19 -- target/rpc.sh@26 -- # stats='{ 00:12:07.248 "poll_groups": [ 00:12:07.248 { 00:12:07.248 "admin_qpairs": 0, 00:12:07.248 "completed_nvme_io": 0, 00:12:07.248 "current_admin_qpairs": 0, 00:12:07.248 "current_io_qpairs": 0, 00:12:07.248 "io_qpairs": 0, 00:12:07.248 "name": "nvmf_tgt_poll_group_0", 00:12:07.248 "pending_bdev_io": 0, 00:12:07.248 "transports": [] 00:12:07.248 }, 00:12:07.248 { 00:12:07.248 "admin_qpairs": 0, 00:12:07.248 "completed_nvme_io": 0, 00:12:07.248 "current_admin_qpairs": 0, 00:12:07.248 "current_io_qpairs": 0, 00:12:07.248 "io_qpairs": 0, 00:12:07.248 "name": "nvmf_tgt_poll_group_1", 00:12:07.248 "pending_bdev_io": 0, 00:12:07.248 "transports": [] 00:12:07.248 }, 00:12:07.248 { 00:12:07.248 "admin_qpairs": 0, 00:12:07.248 "completed_nvme_io": 0, 00:12:07.248 "current_admin_qpairs": 0, 00:12:07.248 "current_io_qpairs": 0, 00:12:07.248 "io_qpairs": 0, 00:12:07.248 "name": "nvmf_tgt_poll_group_2", 00:12:07.248 "pending_bdev_io": 0, 00:12:07.248 "transports": [] 00:12:07.248 }, 00:12:07.248 { 00:12:07.248 "admin_qpairs": 0, 00:12:07.248 "completed_nvme_io": 0, 00:12:07.248 "current_admin_qpairs": 0, 00:12:07.248 "current_io_qpairs": 0, 00:12:07.248 "io_qpairs": 0, 00:12:07.248 "name": "nvmf_tgt_poll_group_3", 00:12:07.248 "pending_bdev_io": 0, 00:12:07.248 "transports": [] 00:12:07.248 } 00:12:07.248 ], 00:12:07.248 "tick_rate": 2200000000 00:12:07.248 }' 00:12:07.248 00:46:19 -- target/rpc.sh@28 -- # jcount '.poll_groups[].name' 00:12:07.248 00:46:19 -- target/rpc.sh@14 -- # local 'filter=.poll_groups[].name' 00:12:07.248 00:46:19 -- target/rpc.sh@15 -- # jq '.poll_groups[].name' 00:12:07.248 00:46:19 -- target/rpc.sh@15 -- # wc -l 00:12:07.248 00:46:19 -- target/rpc.sh@28 -- # (( 4 == 4 )) 00:12:07.248 00:46:19 -- target/rpc.sh@29 -- # jq '.poll_groups[0].transports[0]' 00:12:07.248 00:46:19 -- target/rpc.sh@29 -- # [[ null == null ]] 00:12:07.248 00:46:19 -- target/rpc.sh@31 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:12:07.248 00:46:19 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:07.248 00:46:19 -- common/autotest_common.sh@10 -- # set +x 00:12:07.248 [2024-12-03 00:46:19.661571] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:07.248 00:46:19 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:07.248 00:46:19 -- 
target/rpc.sh@33 -- # rpc_cmd nvmf_get_stats 00:12:07.248 00:46:19 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:07.248 00:46:19 -- common/autotest_common.sh@10 -- # set +x 00:12:07.248 00:46:19 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:07.248 00:46:19 -- target/rpc.sh@33 -- # stats='{ 00:12:07.248 "poll_groups": [ 00:12:07.248 { 00:12:07.248 "admin_qpairs": 0, 00:12:07.248 "completed_nvme_io": 0, 00:12:07.248 "current_admin_qpairs": 0, 00:12:07.248 "current_io_qpairs": 0, 00:12:07.248 "io_qpairs": 0, 00:12:07.248 "name": "nvmf_tgt_poll_group_0", 00:12:07.248 "pending_bdev_io": 0, 00:12:07.248 "transports": [ 00:12:07.248 { 00:12:07.248 "trtype": "TCP" 00:12:07.248 } 00:12:07.248 ] 00:12:07.248 }, 00:12:07.248 { 00:12:07.248 "admin_qpairs": 0, 00:12:07.248 "completed_nvme_io": 0, 00:12:07.248 "current_admin_qpairs": 0, 00:12:07.248 "current_io_qpairs": 0, 00:12:07.248 "io_qpairs": 0, 00:12:07.248 "name": "nvmf_tgt_poll_group_1", 00:12:07.248 "pending_bdev_io": 0, 00:12:07.248 "transports": [ 00:12:07.248 { 00:12:07.248 "trtype": "TCP" 00:12:07.248 } 00:12:07.248 ] 00:12:07.248 }, 00:12:07.248 { 00:12:07.248 "admin_qpairs": 0, 00:12:07.248 "completed_nvme_io": 0, 00:12:07.248 "current_admin_qpairs": 0, 00:12:07.248 "current_io_qpairs": 0, 00:12:07.248 "io_qpairs": 0, 00:12:07.248 "name": "nvmf_tgt_poll_group_2", 00:12:07.248 "pending_bdev_io": 0, 00:12:07.248 "transports": [ 00:12:07.248 { 00:12:07.248 "trtype": "TCP" 00:12:07.248 } 00:12:07.248 ] 00:12:07.248 }, 00:12:07.248 { 00:12:07.248 "admin_qpairs": 0, 00:12:07.248 "completed_nvme_io": 0, 00:12:07.248 "current_admin_qpairs": 0, 00:12:07.248 "current_io_qpairs": 0, 00:12:07.248 "io_qpairs": 0, 00:12:07.248 "name": "nvmf_tgt_poll_group_3", 00:12:07.248 "pending_bdev_io": 0, 00:12:07.248 "transports": [ 00:12:07.248 { 00:12:07.248 "trtype": "TCP" 00:12:07.248 } 00:12:07.248 ] 00:12:07.248 } 00:12:07.248 ], 00:12:07.248 "tick_rate": 2200000000 00:12:07.248 }' 00:12:07.248 00:46:19 -- target/rpc.sh@35 -- # jsum '.poll_groups[].admin_qpairs' 00:12:07.248 00:46:19 -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:12:07.248 00:46:19 -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:12:07.248 00:46:19 -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:12:07.248 00:46:19 -- target/rpc.sh@35 -- # (( 0 == 0 )) 00:12:07.248 00:46:19 -- target/rpc.sh@36 -- # jsum '.poll_groups[].io_qpairs' 00:12:07.248 00:46:19 -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:12:07.248 00:46:19 -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:12:07.248 00:46:19 -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:12:07.508 00:46:19 -- target/rpc.sh@36 -- # (( 0 == 0 )) 00:12:07.508 00:46:19 -- target/rpc.sh@38 -- # '[' rdma == tcp ']' 00:12:07.508 00:46:19 -- target/rpc.sh@46 -- # MALLOC_BDEV_SIZE=64 00:12:07.508 00:46:19 -- target/rpc.sh@47 -- # MALLOC_BLOCK_SIZE=512 00:12:07.508 00:46:19 -- target/rpc.sh@49 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:12:07.508 00:46:19 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:07.508 00:46:19 -- common/autotest_common.sh@10 -- # set +x 00:12:07.508 Malloc1 00:12:07.508 00:46:19 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:07.508 00:46:19 -- target/rpc.sh@52 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:12:07.508 00:46:19 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:07.508 00:46:19 -- common/autotest_common.sh@10 -- # set +x 00:12:07.508 
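Note: the nvmf_get_stats JSON above is what rpc.sh asserts against: four poll groups for core mask 0xF, zero qpairs before any host connects, and a TCP transport entry once nvmf_create_transport has run. The same checks can be repeated by hand; a sketch assuming the standard scripts/rpc.py helper and jq (not captured output):

    rpc_py="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock"
    $rpc_py nvmf_create_transport -t tcp -o -u 8192
    $rpc_py nvmf_get_stats | jq '.poll_groups | length'                 # 4 poll groups with -m 0xF
    $rpc_py nvmf_get_stats | jq '[.poll_groups[].io_qpairs] | add'      # 0 before any connection
    $rpc_py nvmf_get_stats | jq '.poll_groups[0].transports[0].trtype'  # "TCP" once the transport exists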
00:46:19 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:07.508 00:46:19 -- target/rpc.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:12:07.508 00:46:19 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:07.508 00:46:19 -- common/autotest_common.sh@10 -- # set +x 00:12:07.508 00:46:19 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:07.508 00:46:19 -- target/rpc.sh@54 -- # rpc_cmd nvmf_subsystem_allow_any_host -d nqn.2016-06.io.spdk:cnode1 00:12:07.508 00:46:19 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:07.508 00:46:19 -- common/autotest_common.sh@10 -- # set +x 00:12:07.508 00:46:19 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:07.508 00:46:19 -- target/rpc.sh@55 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:07.508 00:46:19 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:07.508 00:46:19 -- common/autotest_common.sh@10 -- # set +x 00:12:07.508 [2024-12-03 00:46:19.854003] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:07.508 00:46:19 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:07.508 00:46:19 -- target/rpc.sh@58 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:15939434-fa82-47b6-ae7c-b62ba203cee8 --hostid=15939434-fa82-47b6-ae7c-b62ba203cee8 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:15939434-fa82-47b6-ae7c-b62ba203cee8 -a 10.0.0.2 -s 4420 00:12:07.508 00:46:19 -- common/autotest_common.sh@650 -- # local es=0 00:12:07.508 00:46:19 -- common/autotest_common.sh@652 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:15939434-fa82-47b6-ae7c-b62ba203cee8 --hostid=15939434-fa82-47b6-ae7c-b62ba203cee8 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:15939434-fa82-47b6-ae7c-b62ba203cee8 -a 10.0.0.2 -s 4420 00:12:07.508 00:46:19 -- common/autotest_common.sh@638 -- # local arg=nvme 00:12:07.508 00:46:19 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:07.508 00:46:19 -- common/autotest_common.sh@642 -- # type -t nvme 00:12:07.508 00:46:19 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:07.508 00:46:19 -- common/autotest_common.sh@644 -- # type -P nvme 00:12:07.508 00:46:19 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:07.508 00:46:19 -- common/autotest_common.sh@644 -- # arg=/usr/sbin/nvme 00:12:07.508 00:46:19 -- common/autotest_common.sh@644 -- # [[ -x /usr/sbin/nvme ]] 00:12:07.508 00:46:19 -- common/autotest_common.sh@653 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:15939434-fa82-47b6-ae7c-b62ba203cee8 --hostid=15939434-fa82-47b6-ae7c-b62ba203cee8 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:15939434-fa82-47b6-ae7c-b62ba203cee8 -a 10.0.0.2 -s 4420 00:12:07.508 [2024-12-03 00:46:19.882214] ctrlr.c: 715:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:15939434-fa82-47b6-ae7c-b62ba203cee8' 00:12:07.508 Failed to write to /dev/nvme-fabrics: Input/output error 00:12:07.508 could not add new controller: failed to write to nvme-fabrics device 00:12:07.508 00:46:19 -- common/autotest_common.sh@653 -- # es=1 00:12:07.508 00:46:19 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:12:07.508 00:46:19 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:12:07.509 00:46:19 -- common/autotest_common.sh@677 -- # 
(( !es == 0 )) 00:12:07.509 00:46:19 -- target/rpc.sh@61 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:15939434-fa82-47b6-ae7c-b62ba203cee8 00:12:07.509 00:46:19 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:07.509 00:46:19 -- common/autotest_common.sh@10 -- # set +x 00:12:07.509 00:46:19 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:07.509 00:46:19 -- target/rpc.sh@62 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:15939434-fa82-47b6-ae7c-b62ba203cee8 --hostid=15939434-fa82-47b6-ae7c-b62ba203cee8 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:07.767 00:46:20 -- target/rpc.sh@63 -- # waitforserial SPDKISFASTANDAWESOME 00:12:07.767 00:46:20 -- common/autotest_common.sh@1187 -- # local i=0 00:12:07.767 00:46:20 -- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 00:12:07.767 00:46:20 -- common/autotest_common.sh@1189 -- # [[ -n '' ]] 00:12:07.767 00:46:20 -- common/autotest_common.sh@1194 -- # sleep 2 00:12:09.671 00:46:22 -- common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 00:12:09.671 00:46:22 -- common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL 00:12:09.671 00:46:22 -- common/autotest_common.sh@1196 -- # grep -c SPDKISFASTANDAWESOME 00:12:09.671 00:46:22 -- common/autotest_common.sh@1196 -- # nvme_devices=1 00:12:09.671 00:46:22 -- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:12:09.671 00:46:22 -- common/autotest_common.sh@1197 -- # return 0 00:12:09.671 00:46:22 -- target/rpc.sh@64 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:09.930 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:09.930 00:46:22 -- target/rpc.sh@65 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:09.930 00:46:22 -- common/autotest_common.sh@1208 -- # local i=0 00:12:09.930 00:46:22 -- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:12:09.930 00:46:22 -- common/autotest_common.sh@1209 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:09.930 00:46:22 -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:09.930 00:46:22 -- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:12:09.930 00:46:22 -- common/autotest_common.sh@1220 -- # return 0 00:12:09.930 00:46:22 -- target/rpc.sh@68 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:15939434-fa82-47b6-ae7c-b62ba203cee8 00:12:09.930 00:46:22 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:09.930 00:46:22 -- common/autotest_common.sh@10 -- # set +x 00:12:09.930 00:46:22 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:09.931 00:46:22 -- target/rpc.sh@69 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:15939434-fa82-47b6-ae7c-b62ba203cee8 --hostid=15939434-fa82-47b6-ae7c-b62ba203cee8 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:09.931 00:46:22 -- common/autotest_common.sh@650 -- # local es=0 00:12:09.931 00:46:22 -- common/autotest_common.sh@652 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:15939434-fa82-47b6-ae7c-b62ba203cee8 --hostid=15939434-fa82-47b6-ae7c-b62ba203cee8 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:09.931 00:46:22 -- common/autotest_common.sh@638 -- # local arg=nvme 00:12:09.931 00:46:22 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:09.931 00:46:22 -- common/autotest_common.sh@642 -- # type -t nvme 00:12:09.931 00:46:22 -- 
common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:09.931 00:46:22 -- common/autotest_common.sh@644 -- # type -P nvme 00:12:09.931 00:46:22 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:09.931 00:46:22 -- common/autotest_common.sh@644 -- # arg=/usr/sbin/nvme 00:12:09.931 00:46:22 -- common/autotest_common.sh@644 -- # [[ -x /usr/sbin/nvme ]] 00:12:09.931 00:46:22 -- common/autotest_common.sh@653 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:15939434-fa82-47b6-ae7c-b62ba203cee8 --hostid=15939434-fa82-47b6-ae7c-b62ba203cee8 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:09.931 [2024-12-03 00:46:22.283888] ctrlr.c: 715:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:15939434-fa82-47b6-ae7c-b62ba203cee8' 00:12:09.931 Failed to write to /dev/nvme-fabrics: Input/output error 00:12:09.931 could not add new controller: failed to write to nvme-fabrics device 00:12:09.931 00:46:22 -- common/autotest_common.sh@653 -- # es=1 00:12:09.931 00:46:22 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:12:09.931 00:46:22 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:12:09.931 00:46:22 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:12:09.931 00:46:22 -- target/rpc.sh@72 -- # rpc_cmd nvmf_subsystem_allow_any_host -e nqn.2016-06.io.spdk:cnode1 00:12:09.931 00:46:22 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:09.931 00:46:22 -- common/autotest_common.sh@10 -- # set +x 00:12:09.931 00:46:22 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:09.931 00:46:22 -- target/rpc.sh@73 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:15939434-fa82-47b6-ae7c-b62ba203cee8 --hostid=15939434-fa82-47b6-ae7c-b62ba203cee8 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:10.190 00:46:22 -- target/rpc.sh@74 -- # waitforserial SPDKISFASTANDAWESOME 00:12:10.190 00:46:22 -- common/autotest_common.sh@1187 -- # local i=0 00:12:10.190 00:46:22 -- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 00:12:10.190 00:46:22 -- common/autotest_common.sh@1189 -- # [[ -n '' ]] 00:12:10.190 00:46:22 -- common/autotest_common.sh@1194 -- # sleep 2 00:12:12.114 00:46:24 -- common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 00:12:12.114 00:46:24 -- common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL 00:12:12.114 00:46:24 -- common/autotest_common.sh@1196 -- # grep -c SPDKISFASTANDAWESOME 00:12:12.114 00:46:24 -- common/autotest_common.sh@1196 -- # nvme_devices=1 00:12:12.114 00:46:24 -- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:12:12.114 00:46:24 -- common/autotest_common.sh@1197 -- # return 0 00:12:12.114 00:46:24 -- target/rpc.sh@75 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:12.114 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:12.114 00:46:24 -- target/rpc.sh@76 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:12.114 00:46:24 -- common/autotest_common.sh@1208 -- # local i=0 00:12:12.114 00:46:24 -- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:12:12.114 00:46:24 -- common/autotest_common.sh@1209 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:12.114 00:46:24 -- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:12:12.114 00:46:24 -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:12.114 00:46:24 -- common/autotest_common.sh@1220 -- # return 0 00:12:12.114 00:46:24 -- 
target/rpc.sh@78 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:12.114 00:46:24 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:12.114 00:46:24 -- common/autotest_common.sh@10 -- # set +x 00:12:12.114 00:46:24 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:12.114 00:46:24 -- target/rpc.sh@81 -- # seq 1 5 00:12:12.114 00:46:24 -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:12:12.114 00:46:24 -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:12.114 00:46:24 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:12.114 00:46:24 -- common/autotest_common.sh@10 -- # set +x 00:12:12.114 00:46:24 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:12.114 00:46:24 -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:12.114 00:46:24 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:12.114 00:46:24 -- common/autotest_common.sh@10 -- # set +x 00:12:12.114 [2024-12-03 00:46:24.583819] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:12.114 00:46:24 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:12.114 00:46:24 -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:12:12.114 00:46:24 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:12.114 00:46:24 -- common/autotest_common.sh@10 -- # set +x 00:12:12.114 00:46:24 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:12.114 00:46:24 -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:12.114 00:46:24 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:12.114 00:46:24 -- common/autotest_common.sh@10 -- # set +x 00:12:12.114 00:46:24 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:12.114 00:46:24 -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:15939434-fa82-47b6-ae7c-b62ba203cee8 --hostid=15939434-fa82-47b6-ae7c-b62ba203cee8 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:12.389 00:46:24 -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:12:12.389 00:46:24 -- common/autotest_common.sh@1187 -- # local i=0 00:12:12.389 00:46:24 -- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 00:12:12.389 00:46:24 -- common/autotest_common.sh@1189 -- # [[ -n '' ]] 00:12:12.389 00:46:24 -- common/autotest_common.sh@1194 -- # sleep 2 00:12:14.290 00:46:26 -- common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 00:12:14.290 00:46:26 -- common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL 00:12:14.290 00:46:26 -- common/autotest_common.sh@1196 -- # grep -c SPDKISFASTANDAWESOME 00:12:14.290 00:46:26 -- common/autotest_common.sh@1196 -- # nvme_devices=1 00:12:14.290 00:46:26 -- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:12:14.290 00:46:26 -- common/autotest_common.sh@1197 -- # return 0 00:12:14.290 00:46:26 -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:14.549 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:14.549 00:46:26 -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:14.549 00:46:26 -- common/autotest_common.sh@1208 -- # local i=0 00:12:14.549 00:46:26 -- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:12:14.549 00:46:26 -- common/autotest_common.sh@1209 -- # grep -q -w SPDKISFASTANDAWESOME 
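Note: the trace above is one pass of a loop that rpc.sh repeats $loops (5) times: create a subsystem, expose Malloc1 on TCP 10.0.0.2:4420, connect from the initiator, verify the serial, then tear everything down. One iteration written out as plain commands (illustrative, not captured output; $NVME_HOSTNQN and $NVME_HOSTID are the values produced by nvme gen-hostnqn earlier in the trace):

    rpc_py="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock"
    $rpc_py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME
    $rpc_py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    $rpc_py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5
    $rpc_py nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1
    nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 \
        --hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID"
    lsblk -l -o NAME,SERIAL | grep -c SPDKISFASTANDAWESOME   # expect 1 once the namespace attaches
    nvme disconnect -n nqn.2016-06.io.spdk:cnode1
    $rpc_py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
    $rpc_py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1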
00:12:14.549 00:46:26 -- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:12:14.549 00:46:26 -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:14.549 00:46:26 -- common/autotest_common.sh@1220 -- # return 0 00:12:14.549 00:46:26 -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:12:14.549 00:46:26 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:14.549 00:46:26 -- common/autotest_common.sh@10 -- # set +x 00:12:14.549 00:46:26 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:14.549 00:46:26 -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:14.549 00:46:26 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:14.549 00:46:26 -- common/autotest_common.sh@10 -- # set +x 00:12:14.549 00:46:26 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:14.549 00:46:26 -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:12:14.549 00:46:26 -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:14.549 00:46:26 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:14.549 00:46:26 -- common/autotest_common.sh@10 -- # set +x 00:12:14.549 00:46:26 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:14.549 00:46:26 -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:14.549 00:46:26 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:14.549 00:46:26 -- common/autotest_common.sh@10 -- # set +x 00:12:14.549 [2024-12-03 00:46:26.891975] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:14.549 00:46:26 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:14.549 00:46:26 -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:12:14.549 00:46:26 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:14.549 00:46:26 -- common/autotest_common.sh@10 -- # set +x 00:12:14.549 00:46:26 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:14.549 00:46:26 -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:14.549 00:46:26 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:14.549 00:46:26 -- common/autotest_common.sh@10 -- # set +x 00:12:14.549 00:46:26 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:14.549 00:46:26 -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:15939434-fa82-47b6-ae7c-b62ba203cee8 --hostid=15939434-fa82-47b6-ae7c-b62ba203cee8 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:14.809 00:46:27 -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:12:14.809 00:46:27 -- common/autotest_common.sh@1187 -- # local i=0 00:12:14.809 00:46:27 -- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 00:12:14.809 00:46:27 -- common/autotest_common.sh@1189 -- # [[ -n '' ]] 00:12:14.809 00:46:27 -- common/autotest_common.sh@1194 -- # sleep 2 00:12:16.713 00:46:29 -- common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 00:12:16.713 00:46:29 -- common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL 00:12:16.713 00:46:29 -- common/autotest_common.sh@1196 -- # grep -c SPDKISFASTANDAWESOME 00:12:16.713 00:46:29 -- common/autotest_common.sh@1196 -- # nvme_devices=1 00:12:16.713 00:46:29 -- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:12:16.713 00:46:29 -- 
common/autotest_common.sh@1197 -- # return 0 00:12:16.713 00:46:29 -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:16.972 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:16.972 00:46:29 -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:16.972 00:46:29 -- common/autotest_common.sh@1208 -- # local i=0 00:12:16.972 00:46:29 -- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:12:16.972 00:46:29 -- common/autotest_common.sh@1209 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:16.972 00:46:29 -- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:12:16.972 00:46:29 -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:16.972 00:46:29 -- common/autotest_common.sh@1220 -- # return 0 00:12:16.972 00:46:29 -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:12:16.972 00:46:29 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:16.972 00:46:29 -- common/autotest_common.sh@10 -- # set +x 00:12:16.972 00:46:29 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:16.972 00:46:29 -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:16.972 00:46:29 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:16.972 00:46:29 -- common/autotest_common.sh@10 -- # set +x 00:12:16.972 00:46:29 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:16.972 00:46:29 -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:12:16.972 00:46:29 -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:16.972 00:46:29 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:16.972 00:46:29 -- common/autotest_common.sh@10 -- # set +x 00:12:16.972 00:46:29 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:16.972 00:46:29 -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:16.972 00:46:29 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:16.972 00:46:29 -- common/autotest_common.sh@10 -- # set +x 00:12:16.973 [2024-12-03 00:46:29.304746] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:16.973 00:46:29 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:16.973 00:46:29 -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:12:16.973 00:46:29 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:16.973 00:46:29 -- common/autotest_common.sh@10 -- # set +x 00:12:16.973 00:46:29 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:16.973 00:46:29 -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:16.973 00:46:29 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:16.973 00:46:29 -- common/autotest_common.sh@10 -- # set +x 00:12:16.973 00:46:29 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:16.973 00:46:29 -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:15939434-fa82-47b6-ae7c-b62ba203cee8 --hostid=15939434-fa82-47b6-ae7c-b62ba203cee8 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:17.232 00:46:29 -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:12:17.232 00:46:29 -- common/autotest_common.sh@1187 -- # local i=0 00:12:17.232 00:46:29 -- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 00:12:17.232 00:46:29 -- common/autotest_common.sh@1189 -- 
# [[ -n '' ]] 00:12:17.232 00:46:29 -- common/autotest_common.sh@1194 -- # sleep 2 00:12:19.135 00:46:31 -- common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 00:12:19.136 00:46:31 -- common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL 00:12:19.136 00:46:31 -- common/autotest_common.sh@1196 -- # grep -c SPDKISFASTANDAWESOME 00:12:19.136 00:46:31 -- common/autotest_common.sh@1196 -- # nvme_devices=1 00:12:19.136 00:46:31 -- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:12:19.136 00:46:31 -- common/autotest_common.sh@1197 -- # return 0 00:12:19.136 00:46:31 -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:19.136 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:19.136 00:46:31 -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:19.136 00:46:31 -- common/autotest_common.sh@1208 -- # local i=0 00:12:19.136 00:46:31 -- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:12:19.136 00:46:31 -- common/autotest_common.sh@1209 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:19.136 00:46:31 -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:19.136 00:46:31 -- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:12:19.136 00:46:31 -- common/autotest_common.sh@1220 -- # return 0 00:12:19.136 00:46:31 -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:12:19.136 00:46:31 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:19.136 00:46:31 -- common/autotest_common.sh@10 -- # set +x 00:12:19.136 00:46:31 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:19.136 00:46:31 -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:19.136 00:46:31 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:19.136 00:46:31 -- common/autotest_common.sh@10 -- # set +x 00:12:19.136 00:46:31 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:19.136 00:46:31 -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:12:19.136 00:46:31 -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:19.136 00:46:31 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:19.136 00:46:31 -- common/autotest_common.sh@10 -- # set +x 00:12:19.136 00:46:31 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:19.136 00:46:31 -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:19.136 00:46:31 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:19.136 00:46:31 -- common/autotest_common.sh@10 -- # set +x 00:12:19.136 [2024-12-03 00:46:31.609637] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:19.136 00:46:31 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:19.136 00:46:31 -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:12:19.136 00:46:31 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:19.136 00:46:31 -- common/autotest_common.sh@10 -- # set +x 00:12:19.136 00:46:31 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:19.136 00:46:31 -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:19.136 00:46:31 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:19.136 00:46:31 -- common/autotest_common.sh@10 -- # set +x 00:12:19.136 00:46:31 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:19.136 
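Note: each pass of the loop traced above drives the same subsystem lifecycle over the TCP listener. Condensed into a sketch (RPC names and arguments copied from the trace; the waitforserial/waitforserial_disconnect internals are paraphrased from the xtrace output, not taken from the rpc.sh source):

  rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME
  rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5
  rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1
  nvme connect --hostnqn=$NVME_HOSTNQN --hostid=$NVME_HOSTID -t tcp \
      -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
  # waitforserial: retry (sleep 2 between tries, up to ~15 tries) until lsblk
  # reports a block device whose serial matches SPDKISFASTANDAWESOME
  until (( $(lsblk -l -o NAME,SERIAL | grep -c SPDKISFASTANDAWESOME) >= 1 )); do sleep 2; done
  nvme disconnect -n nqn.2016-06.io.spdk:cnode1
  # waitforserial_disconnect: the inverse check, grep -q -w until the serial is gone
  rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
  rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1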
00:46:31 -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:15939434-fa82-47b6-ae7c-b62ba203cee8 --hostid=15939434-fa82-47b6-ae7c-b62ba203cee8 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:19.395 00:46:31 -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:12:19.395 00:46:31 -- common/autotest_common.sh@1187 -- # local i=0 00:12:19.395 00:46:31 -- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 00:12:19.395 00:46:31 -- common/autotest_common.sh@1189 -- # [[ -n '' ]] 00:12:19.395 00:46:31 -- common/autotest_common.sh@1194 -- # sleep 2 00:12:21.928 00:46:33 -- common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 00:12:21.928 00:46:33 -- common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL 00:12:21.928 00:46:33 -- common/autotest_common.sh@1196 -- # grep -c SPDKISFASTANDAWESOME 00:12:21.928 00:46:33 -- common/autotest_common.sh@1196 -- # nvme_devices=1 00:12:21.928 00:46:33 -- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:12:21.928 00:46:33 -- common/autotest_common.sh@1197 -- # return 0 00:12:21.928 00:46:33 -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:21.928 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:21.928 00:46:33 -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:21.928 00:46:33 -- common/autotest_common.sh@1208 -- # local i=0 00:12:21.928 00:46:33 -- common/autotest_common.sh@1209 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:21.928 00:46:33 -- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:12:21.928 00:46:33 -- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:12:21.928 00:46:33 -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:21.928 00:46:33 -- common/autotest_common.sh@1220 -- # return 0 00:12:21.928 00:46:33 -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:12:21.928 00:46:33 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:21.928 00:46:33 -- common/autotest_common.sh@10 -- # set +x 00:12:21.928 00:46:33 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:21.928 00:46:33 -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:21.928 00:46:33 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:21.928 00:46:33 -- common/autotest_common.sh@10 -- # set +x 00:12:21.928 00:46:33 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:21.928 00:46:33 -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:12:21.928 00:46:33 -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:21.928 00:46:33 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:21.928 00:46:33 -- common/autotest_common.sh@10 -- # set +x 00:12:21.928 00:46:33 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:21.928 00:46:33 -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:21.928 00:46:33 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:21.928 00:46:33 -- common/autotest_common.sh@10 -- # set +x 00:12:21.928 [2024-12-03 00:46:33.930676] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:21.928 00:46:33 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:21.928 00:46:33 -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:12:21.928 
00:46:33 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:21.928 00:46:33 -- common/autotest_common.sh@10 -- # set +x 00:12:21.928 00:46:33 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:21.928 00:46:33 -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:21.928 00:46:33 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:21.928 00:46:33 -- common/autotest_common.sh@10 -- # set +x 00:12:21.928 00:46:33 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:21.928 00:46:33 -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:15939434-fa82-47b6-ae7c-b62ba203cee8 --hostid=15939434-fa82-47b6-ae7c-b62ba203cee8 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:21.928 00:46:34 -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:12:21.928 00:46:34 -- common/autotest_common.sh@1187 -- # local i=0 00:12:21.928 00:46:34 -- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 00:12:21.928 00:46:34 -- common/autotest_common.sh@1189 -- # [[ -n '' ]] 00:12:21.929 00:46:34 -- common/autotest_common.sh@1194 -- # sleep 2 00:12:23.830 00:46:36 -- common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 00:12:23.830 00:46:36 -- common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL 00:12:23.830 00:46:36 -- common/autotest_common.sh@1196 -- # grep -c SPDKISFASTANDAWESOME 00:12:23.830 00:46:36 -- common/autotest_common.sh@1196 -- # nvme_devices=1 00:12:23.830 00:46:36 -- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:12:23.830 00:46:36 -- common/autotest_common.sh@1197 -- # return 0 00:12:23.830 00:46:36 -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:23.830 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:23.830 00:46:36 -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:23.830 00:46:36 -- common/autotest_common.sh@1208 -- # local i=0 00:12:23.830 00:46:36 -- common/autotest_common.sh@1209 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:23.830 00:46:36 -- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:12:23.830 00:46:36 -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:23.830 00:46:36 -- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:12:23.830 00:46:36 -- common/autotest_common.sh@1220 -- # return 0 00:12:23.830 00:46:36 -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:12:23.830 00:46:36 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:23.830 00:46:36 -- common/autotest_common.sh@10 -- # set +x 00:12:23.830 00:46:36 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:23.830 00:46:36 -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:23.830 00:46:36 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:23.830 00:46:36 -- common/autotest_common.sh@10 -- # set +x 00:12:23.830 00:46:36 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:23.830 00:46:36 -- target/rpc.sh@99 -- # seq 1 5 00:12:23.830 00:46:36 -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:12:23.830 00:46:36 -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:23.830 00:46:36 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:23.830 00:46:36 -- common/autotest_common.sh@10 -- # set +x 00:12:23.830 00:46:36 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:23.830 00:46:36 
-- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:23.830 00:46:36 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:23.830 00:46:36 -- common/autotest_common.sh@10 -- # set +x 00:12:23.830 [2024-12-03 00:46:36.235111] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:23.830 00:46:36 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:23.830 00:46:36 -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:12:23.830 00:46:36 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:23.830 00:46:36 -- common/autotest_common.sh@10 -- # set +x 00:12:23.830 00:46:36 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:23.830 00:46:36 -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:23.830 00:46:36 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:23.830 00:46:36 -- common/autotest_common.sh@10 -- # set +x 00:12:23.830 00:46:36 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:23.830 00:46:36 -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:23.830 00:46:36 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:23.830 00:46:36 -- common/autotest_common.sh@10 -- # set +x 00:12:23.830 00:46:36 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:23.830 00:46:36 -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:23.830 00:46:36 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:23.830 00:46:36 -- common/autotest_common.sh@10 -- # set +x 00:12:23.830 00:46:36 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:23.830 00:46:36 -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:12:23.830 00:46:36 -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:23.830 00:46:36 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:23.830 00:46:36 -- common/autotest_common.sh@10 -- # set +x 00:12:23.830 00:46:36 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:23.830 00:46:36 -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:23.830 00:46:36 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:23.830 00:46:36 -- common/autotest_common.sh@10 -- # set +x 00:12:23.830 [2024-12-03 00:46:36.283177] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:23.830 00:46:36 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:23.830 00:46:36 -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:12:23.830 00:46:36 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:23.830 00:46:36 -- common/autotest_common.sh@10 -- # set +x 00:12:23.830 00:46:36 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:23.830 00:46:36 -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:23.830 00:46:36 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:23.830 00:46:36 -- common/autotest_common.sh@10 -- # set +x 00:12:23.830 00:46:36 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:23.830 00:46:36 -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:23.830 00:46:36 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:23.830 00:46:36 -- 
common/autotest_common.sh@10 -- # set +x 00:12:23.830 00:46:36 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:23.830 00:46:36 -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:23.830 00:46:36 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:23.830 00:46:36 -- common/autotest_common.sh@10 -- # set +x 00:12:23.830 00:46:36 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:23.830 00:46:36 -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:12:23.830 00:46:36 -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:23.830 00:46:36 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:23.830 00:46:36 -- common/autotest_common.sh@10 -- # set +x 00:12:23.831 00:46:36 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:23.831 00:46:36 -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:23.831 00:46:36 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:23.831 00:46:36 -- common/autotest_common.sh@10 -- # set +x 00:12:23.831 [2024-12-03 00:46:36.335274] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:23.831 00:46:36 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:23.831 00:46:36 -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:12:23.831 00:46:36 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:23.831 00:46:36 -- common/autotest_common.sh@10 -- # set +x 00:12:24.089 00:46:36 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:24.089 00:46:36 -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:24.089 00:46:36 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:24.089 00:46:36 -- common/autotest_common.sh@10 -- # set +x 00:12:24.089 00:46:36 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:24.089 00:46:36 -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:24.089 00:46:36 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:24.089 00:46:36 -- common/autotest_common.sh@10 -- # set +x 00:12:24.089 00:46:36 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:24.089 00:46:36 -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:24.089 00:46:36 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:24.089 00:46:36 -- common/autotest_common.sh@10 -- # set +x 00:12:24.089 00:46:36 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:24.089 00:46:36 -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:12:24.089 00:46:36 -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:24.089 00:46:36 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:24.089 00:46:36 -- common/autotest_common.sh@10 -- # set +x 00:12:24.089 00:46:36 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:24.089 00:46:36 -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:24.089 00:46:36 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:24.089 00:46:36 -- common/autotest_common.sh@10 -- # set +x 00:12:24.089 [2024-12-03 00:46:36.383326] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:24.089 00:46:36 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:24.089 
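The second sequence (target/rpc.sh@99-107 above) repeats the create/teardown cycle five times without connecting a host; the namespace is added with no -n flag and removed as NSID 1. Roughly, as reconstructed from the trace:

  for i in $(seq 1 5); do
      rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME
      rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
      rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1   # no -n, removed below as NSID 1
      rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1
      rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
      rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
  done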
00:46:36 -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:12:24.089 00:46:36 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:24.089 00:46:36 -- common/autotest_common.sh@10 -- # set +x 00:12:24.089 00:46:36 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:24.089 00:46:36 -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:24.089 00:46:36 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:24.089 00:46:36 -- common/autotest_common.sh@10 -- # set +x 00:12:24.089 00:46:36 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:24.089 00:46:36 -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:24.089 00:46:36 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:24.089 00:46:36 -- common/autotest_common.sh@10 -- # set +x 00:12:24.089 00:46:36 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:24.089 00:46:36 -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:24.089 00:46:36 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:24.089 00:46:36 -- common/autotest_common.sh@10 -- # set +x 00:12:24.089 00:46:36 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:24.089 00:46:36 -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:12:24.089 00:46:36 -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:24.089 00:46:36 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:24.089 00:46:36 -- common/autotest_common.sh@10 -- # set +x 00:12:24.089 00:46:36 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:24.089 00:46:36 -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:24.089 00:46:36 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:24.089 00:46:36 -- common/autotest_common.sh@10 -- # set +x 00:12:24.089 [2024-12-03 00:46:36.431398] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:24.089 00:46:36 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:24.089 00:46:36 -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:12:24.089 00:46:36 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:24.089 00:46:36 -- common/autotest_common.sh@10 -- # set +x 00:12:24.089 00:46:36 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:24.089 00:46:36 -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:24.089 00:46:36 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:24.089 00:46:36 -- common/autotest_common.sh@10 -- # set +x 00:12:24.089 00:46:36 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:24.089 00:46:36 -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:24.089 00:46:36 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:24.089 00:46:36 -- common/autotest_common.sh@10 -- # set +x 00:12:24.089 00:46:36 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:24.089 00:46:36 -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:24.089 00:46:36 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:24.089 00:46:36 -- common/autotest_common.sh@10 -- # set +x 00:12:24.089 00:46:36 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:24.089 00:46:36 -- target/rpc.sh@110 -- # rpc_cmd nvmf_get_stats 
00:12:24.089 00:46:36 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:24.089 00:46:36 -- common/autotest_common.sh@10 -- # set +x 00:12:24.090 00:46:36 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:24.090 00:46:36 -- target/rpc.sh@110 -- # stats='{ 00:12:24.090 "poll_groups": [ 00:12:24.090 { 00:12:24.090 "admin_qpairs": 2, 00:12:24.090 "completed_nvme_io": 115, 00:12:24.090 "current_admin_qpairs": 0, 00:12:24.090 "current_io_qpairs": 0, 00:12:24.090 "io_qpairs": 16, 00:12:24.090 "name": "nvmf_tgt_poll_group_0", 00:12:24.090 "pending_bdev_io": 0, 00:12:24.090 "transports": [ 00:12:24.090 { 00:12:24.090 "trtype": "TCP" 00:12:24.090 } 00:12:24.090 ] 00:12:24.090 }, 00:12:24.090 { 00:12:24.090 "admin_qpairs": 3, 00:12:24.090 "completed_nvme_io": 67, 00:12:24.090 "current_admin_qpairs": 0, 00:12:24.090 "current_io_qpairs": 0, 00:12:24.090 "io_qpairs": 17, 00:12:24.090 "name": "nvmf_tgt_poll_group_1", 00:12:24.090 "pending_bdev_io": 0, 00:12:24.090 "transports": [ 00:12:24.090 { 00:12:24.090 "trtype": "TCP" 00:12:24.090 } 00:12:24.090 ] 00:12:24.090 }, 00:12:24.090 { 00:12:24.090 "admin_qpairs": 1, 00:12:24.090 "completed_nvme_io": 170, 00:12:24.090 "current_admin_qpairs": 0, 00:12:24.090 "current_io_qpairs": 0, 00:12:24.090 "io_qpairs": 19, 00:12:24.090 "name": "nvmf_tgt_poll_group_2", 00:12:24.090 "pending_bdev_io": 0, 00:12:24.090 "transports": [ 00:12:24.090 { 00:12:24.090 "trtype": "TCP" 00:12:24.090 } 00:12:24.090 ] 00:12:24.090 }, 00:12:24.090 { 00:12:24.090 "admin_qpairs": 1, 00:12:24.090 "completed_nvme_io": 68, 00:12:24.090 "current_admin_qpairs": 0, 00:12:24.090 "current_io_qpairs": 0, 00:12:24.090 "io_qpairs": 18, 00:12:24.090 "name": "nvmf_tgt_poll_group_3", 00:12:24.090 "pending_bdev_io": 0, 00:12:24.090 "transports": [ 00:12:24.090 { 00:12:24.090 "trtype": "TCP" 00:12:24.090 } 00:12:24.090 ] 00:12:24.090 } 00:12:24.090 ], 00:12:24.090 "tick_rate": 2200000000 00:12:24.090 }' 00:12:24.090 00:46:36 -- target/rpc.sh@112 -- # jsum '.poll_groups[].admin_qpairs' 00:12:24.090 00:46:36 -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:12:24.090 00:46:36 -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:12:24.090 00:46:36 -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:12:24.090 00:46:36 -- target/rpc.sh@112 -- # (( 7 > 0 )) 00:12:24.090 00:46:36 -- target/rpc.sh@113 -- # jsum '.poll_groups[].io_qpairs' 00:12:24.090 00:46:36 -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:12:24.090 00:46:36 -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:12:24.090 00:46:36 -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:12:24.348 00:46:36 -- target/rpc.sh@113 -- # (( 70 > 0 )) 00:12:24.348 00:46:36 -- target/rpc.sh@115 -- # '[' rdma == tcp ']' 00:12:24.348 00:46:36 -- target/rpc.sh@121 -- # trap - SIGINT SIGTERM EXIT 00:12:24.348 00:46:36 -- target/rpc.sh@123 -- # nvmftestfini 00:12:24.348 00:46:36 -- nvmf/common.sh@476 -- # nvmfcleanup 00:12:24.348 00:46:36 -- nvmf/common.sh@116 -- # sync 00:12:24.348 00:46:36 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:12:24.348 00:46:36 -- nvmf/common.sh@119 -- # set +e 00:12:24.348 00:46:36 -- nvmf/common.sh@120 -- # for i in {1..20} 00:12:24.348 00:46:36 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:12:24.348 rmmod nvme_tcp 00:12:24.348 rmmod nvme_fabrics 00:12:24.348 rmmod nvme_keyring 00:12:24.348 00:46:36 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:12:24.348 00:46:36 -- nvmf/common.sh@123 -- # set -e 00:12:24.348 00:46:36 -- nvmf/common.sh@124 
-- # return 0 00:12:24.348 00:46:36 -- nvmf/common.sh@477 -- # '[' -n 78041 ']' 00:12:24.348 00:46:36 -- nvmf/common.sh@478 -- # killprocess 78041 00:12:24.348 00:46:36 -- common/autotest_common.sh@936 -- # '[' -z 78041 ']' 00:12:24.348 00:46:36 -- common/autotest_common.sh@940 -- # kill -0 78041 00:12:24.348 00:46:36 -- common/autotest_common.sh@941 -- # uname 00:12:24.348 00:46:36 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:12:24.348 00:46:36 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 78041 00:12:24.348 00:46:36 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:12:24.348 00:46:36 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:12:24.348 killing process with pid 78041 00:12:24.348 00:46:36 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 78041' 00:12:24.348 00:46:36 -- common/autotest_common.sh@955 -- # kill 78041 00:12:24.348 00:46:36 -- common/autotest_common.sh@960 -- # wait 78041 00:12:24.606 00:46:37 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:12:24.606 00:46:37 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:12:24.606 00:46:37 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:12:24.606 00:46:37 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:12:24.606 00:46:37 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:12:24.606 00:46:37 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:24.606 00:46:37 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:24.606 00:46:37 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:24.606 00:46:37 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:12:24.606 00:12:24.606 real 0m19.182s 00:12:24.606 user 1m12.758s 00:12:24.606 sys 0m2.060s 00:12:24.607 ************************************ 00:12:24.607 END TEST nvmf_rpc 00:12:24.607 ************************************ 00:12:24.607 00:46:37 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:12:24.607 00:46:37 -- common/autotest_common.sh@10 -- # set +x 00:12:24.607 00:46:37 -- nvmf/nvmf.sh@30 -- # run_test nvmf_invalid /home/vagrant/spdk_repo/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:12:24.607 00:46:37 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:12:24.607 00:46:37 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:12:24.607 00:46:37 -- common/autotest_common.sh@10 -- # set +x 00:12:24.607 ************************************ 00:12:24.607 START TEST nvmf_invalid 00:12:24.607 ************************************ 00:12:24.607 00:46:37 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:12:24.866 * Looking for test storage... 
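For reference, the (( 7 > 0 )) and (( 70 > 0 )) checks in the nvmf_rpc stats section above come from summing the per-poll-group counters of the nvmf_get_stats JSON; the jsum helper seen in that trace boils down to a jq-plus-awk pipeline. A sketch, paraphrased from the xtrace output rather than quoted from rpc.sh:

  jsum() {
      local filter=$1
      # $stats holds the nvmf_get_stats output captured above
      echo "$stats" | jq "$filter" | awk '{s+=$1} END {print s}'
  }
  (( $(jsum '.poll_groups[].admin_qpairs') > 0 ))   # 2+3+1+1 = 7 in this run
  (( $(jsum '.poll_groups[].io_qpairs') > 0 ))      # 16+17+19+18 = 70 in this run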
00:12:24.866 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:12:24.866 00:46:37 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:12:24.866 00:46:37 -- common/autotest_common.sh@1690 -- # lcov --version 00:12:24.866 00:46:37 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:12:24.866 00:46:37 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:12:24.866 00:46:37 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:12:24.866 00:46:37 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:12:24.866 00:46:37 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:12:24.866 00:46:37 -- scripts/common.sh@335 -- # IFS=.-: 00:12:24.866 00:46:37 -- scripts/common.sh@335 -- # read -ra ver1 00:12:24.866 00:46:37 -- scripts/common.sh@336 -- # IFS=.-: 00:12:24.866 00:46:37 -- scripts/common.sh@336 -- # read -ra ver2 00:12:24.866 00:46:37 -- scripts/common.sh@337 -- # local 'op=<' 00:12:24.866 00:46:37 -- scripts/common.sh@339 -- # ver1_l=2 00:12:24.866 00:46:37 -- scripts/common.sh@340 -- # ver2_l=1 00:12:24.866 00:46:37 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:12:24.866 00:46:37 -- scripts/common.sh@343 -- # case "$op" in 00:12:24.866 00:46:37 -- scripts/common.sh@344 -- # : 1 00:12:24.866 00:46:37 -- scripts/common.sh@363 -- # (( v = 0 )) 00:12:24.866 00:46:37 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:12:24.866 00:46:37 -- scripts/common.sh@364 -- # decimal 1 00:12:24.866 00:46:37 -- scripts/common.sh@352 -- # local d=1 00:12:24.866 00:46:37 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:24.866 00:46:37 -- scripts/common.sh@354 -- # echo 1 00:12:24.866 00:46:37 -- scripts/common.sh@364 -- # ver1[v]=1 00:12:24.866 00:46:37 -- scripts/common.sh@365 -- # decimal 2 00:12:24.866 00:46:37 -- scripts/common.sh@352 -- # local d=2 00:12:24.866 00:46:37 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:24.866 00:46:37 -- scripts/common.sh@354 -- # echo 2 00:12:24.866 00:46:37 -- scripts/common.sh@365 -- # ver2[v]=2 00:12:24.866 00:46:37 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:12:24.866 00:46:37 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:12:24.866 00:46:37 -- scripts/common.sh@367 -- # return 0 00:12:24.866 00:46:37 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:24.866 00:46:37 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:12:24.866 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:24.866 --rc genhtml_branch_coverage=1 00:12:24.866 --rc genhtml_function_coverage=1 00:12:24.866 --rc genhtml_legend=1 00:12:24.866 --rc geninfo_all_blocks=1 00:12:24.866 --rc geninfo_unexecuted_blocks=1 00:12:24.866 00:12:24.866 ' 00:12:24.866 00:46:37 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:12:24.866 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:24.866 --rc genhtml_branch_coverage=1 00:12:24.866 --rc genhtml_function_coverage=1 00:12:24.866 --rc genhtml_legend=1 00:12:24.866 --rc geninfo_all_blocks=1 00:12:24.866 --rc geninfo_unexecuted_blocks=1 00:12:24.866 00:12:24.866 ' 00:12:24.866 00:46:37 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:12:24.866 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:24.866 --rc genhtml_branch_coverage=1 00:12:24.866 --rc genhtml_function_coverage=1 00:12:24.866 --rc genhtml_legend=1 00:12:24.866 --rc geninfo_all_blocks=1 00:12:24.866 --rc geninfo_unexecuted_blocks=1 00:12:24.866 00:12:24.866 ' 00:12:24.866 
00:46:37 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:12:24.866 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:24.866 --rc genhtml_branch_coverage=1 00:12:24.866 --rc genhtml_function_coverage=1 00:12:24.866 --rc genhtml_legend=1 00:12:24.866 --rc geninfo_all_blocks=1 00:12:24.866 --rc geninfo_unexecuted_blocks=1 00:12:24.866 00:12:24.866 ' 00:12:24.866 00:46:37 -- target/invalid.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:12:24.866 00:46:37 -- nvmf/common.sh@7 -- # uname -s 00:12:24.866 00:46:37 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:24.866 00:46:37 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:24.866 00:46:37 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:24.866 00:46:37 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:24.866 00:46:37 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:24.866 00:46:37 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:24.866 00:46:37 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:24.866 00:46:37 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:24.866 00:46:37 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:24.866 00:46:37 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:24.866 00:46:37 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:15939434-fa82-47b6-ae7c-b62ba203cee8 00:12:24.866 00:46:37 -- nvmf/common.sh@18 -- # NVME_HOSTID=15939434-fa82-47b6-ae7c-b62ba203cee8 00:12:24.866 00:46:37 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:24.866 00:46:37 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:24.866 00:46:37 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:12:24.866 00:46:37 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:12:24.866 00:46:37 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:24.866 00:46:37 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:24.866 00:46:37 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:24.866 00:46:37 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:24.866 00:46:37 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:24.866 00:46:37 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:24.866 00:46:37 -- paths/export.sh@5 -- # export PATH 00:12:24.866 00:46:37 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:24.866 00:46:37 -- nvmf/common.sh@46 -- # : 0 00:12:24.866 00:46:37 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:12:24.866 00:46:37 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:12:24.866 00:46:37 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:12:24.866 00:46:37 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:24.866 00:46:37 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:24.866 00:46:37 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:12:24.866 00:46:37 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:12:24.866 00:46:37 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:12:24.866 00:46:37 -- target/invalid.sh@11 -- # multi_target_rpc=/home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py 00:12:24.866 00:46:37 -- target/invalid.sh@12 -- # rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:12:24.866 00:46:37 -- target/invalid.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode 00:12:24.866 00:46:37 -- target/invalid.sh@14 -- # target=foobar 00:12:24.866 00:46:37 -- target/invalid.sh@16 -- # RANDOM=0 00:12:24.866 00:46:37 -- target/invalid.sh@34 -- # nvmftestinit 00:12:24.866 00:46:37 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:12:24.866 00:46:37 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:24.866 00:46:37 -- nvmf/common.sh@436 -- # prepare_net_devs 00:12:24.866 00:46:37 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:12:24.866 00:46:37 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:12:24.866 00:46:37 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:24.866 00:46:37 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:24.866 00:46:37 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:24.867 00:46:37 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:12:24.867 00:46:37 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:12:24.867 00:46:37 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:12:24.867 00:46:37 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:12:24.867 00:46:37 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:12:24.867 00:46:37 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:12:24.867 00:46:37 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:24.867 00:46:37 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:24.867 00:46:37 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 
00:12:24.867 00:46:37 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:12:24.867 00:46:37 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:12:24.867 00:46:37 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:12:24.867 00:46:37 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:12:24.867 00:46:37 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:24.867 00:46:37 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:12:24.867 00:46:37 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:12:24.867 00:46:37 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:12:24.867 00:46:37 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:12:24.867 00:46:37 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:12:24.867 00:46:37 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:12:24.867 Cannot find device "nvmf_tgt_br" 00:12:24.867 00:46:37 -- nvmf/common.sh@154 -- # true 00:12:24.867 00:46:37 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:12:24.867 Cannot find device "nvmf_tgt_br2" 00:12:24.867 00:46:37 -- nvmf/common.sh@155 -- # true 00:12:24.867 00:46:37 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:12:24.867 00:46:37 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:12:24.867 Cannot find device "nvmf_tgt_br" 00:12:24.867 00:46:37 -- nvmf/common.sh@157 -- # true 00:12:24.867 00:46:37 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:12:24.867 Cannot find device "nvmf_tgt_br2" 00:12:24.867 00:46:37 -- nvmf/common.sh@158 -- # true 00:12:24.867 00:46:37 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:12:25.125 00:46:37 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:12:25.125 00:46:37 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:12:25.125 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:12:25.125 00:46:37 -- nvmf/common.sh@161 -- # true 00:12:25.125 00:46:37 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:12:25.125 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:12:25.125 00:46:37 -- nvmf/common.sh@162 -- # true 00:12:25.125 00:46:37 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:12:25.125 00:46:37 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:12:25.125 00:46:37 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:12:25.125 00:46:37 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:12:25.125 00:46:37 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:12:25.125 00:46:37 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:12:25.125 00:46:37 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:12:25.125 00:46:37 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:12:25.125 00:46:37 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:12:25.125 00:46:37 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:12:25.125 00:46:37 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:12:25.125 00:46:37 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:12:25.125 00:46:37 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 
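At this point nvmf_veth_init has built the test topology: a network namespace nvmf_tgt_ns_spdk holding the target-side interfaces (10.0.0.2 and 10.0.0.3) and an initiator-side interface at 10.0.0.1; the bridge membership, iptables ACCEPT rules and ping checks follow below. Condensed from the commands as logged:

  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br
  ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
  ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
  ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
  ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
  # links are then brought up; nvmf_br bridges the *_br peers and TCP port 4420 is allowed in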
00:12:25.125 00:46:37 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:12:25.125 00:46:37 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:12:25.125 00:46:37 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:12:25.125 00:46:37 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:12:25.125 00:46:37 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:12:25.125 00:46:37 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:12:25.125 00:46:37 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:12:25.125 00:46:37 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:12:25.125 00:46:37 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:12:25.125 00:46:37 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:12:25.125 00:46:37 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:12:25.125 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:25.125 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.055 ms 00:12:25.125 00:12:25.125 --- 10.0.0.2 ping statistics --- 00:12:25.125 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:25.125 rtt min/avg/max/mdev = 0.055/0.055/0.055/0.000 ms 00:12:25.125 00:46:37 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:12:25.125 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:12:25.125 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.095 ms 00:12:25.125 00:12:25.125 --- 10.0.0.3 ping statistics --- 00:12:25.125 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:25.125 rtt min/avg/max/mdev = 0.095/0.095/0.095/0.000 ms 00:12:25.125 00:46:37 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:12:25.125 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:12:25.125 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.027 ms 00:12:25.125 00:12:25.125 --- 10.0.0.1 ping statistics --- 00:12:25.125 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:25.125 rtt min/avg/max/mdev = 0.027/0.027/0.027/0.000 ms 00:12:25.125 00:46:37 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:25.125 00:46:37 -- nvmf/common.sh@421 -- # return 0 00:12:25.125 00:46:37 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:12:25.125 00:46:37 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:25.125 00:46:37 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:12:25.125 00:46:37 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:12:25.125 00:46:37 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:25.125 00:46:37 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:12:25.125 00:46:37 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:12:25.125 00:46:37 -- target/invalid.sh@35 -- # nvmfappstart -m 0xF 00:12:25.125 00:46:37 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:12:25.125 00:46:37 -- common/autotest_common.sh@722 -- # xtrace_disable 00:12:25.125 00:46:37 -- common/autotest_common.sh@10 -- # set +x 00:12:25.125 00:46:37 -- nvmf/common.sh@469 -- # nvmfpid=78566 00:12:25.125 00:46:37 -- nvmf/common.sh@470 -- # waitforlisten 78566 00:12:25.126 00:46:37 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:25.126 00:46:37 -- common/autotest_common.sh@829 -- # '[' -z 78566 ']' 00:12:25.126 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:25.126 00:46:37 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:25.126 00:46:37 -- common/autotest_common.sh@834 -- # local max_retries=100 00:12:25.126 00:46:37 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:25.126 00:46:37 -- common/autotest_common.sh@838 -- # xtrace_disable 00:12:25.126 00:46:37 -- common/autotest_common.sh@10 -- # set +x 00:12:25.384 [2024-12-03 00:46:37.685152] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:12:25.384 [2024-12-03 00:46:37.685396] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:25.384 [2024-12-03 00:46:37.819979] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:25.384 [2024-12-03 00:46:37.891853] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:12:25.384 [2024-12-03 00:46:37.892338] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:25.384 [2024-12-03 00:46:37.892470] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:25.384 [2024-12-03 00:46:37.892610] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
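The target itself is launched inside the namespace with a four-core mask, which is why four reactors start below and why the earlier nvmf_rpc stats listed poll groups nvmf_tgt_poll_group_0 through _3. Roughly equivalent to the following (waitforlisten paraphrased: it blocks until the app answers on /var/tmp/spdk.sock):

  ip netns exec nvmf_tgt_ns_spdk \
      /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
  nvmfpid=$!
  waitforlisten "$nvmfpid"   # waits for the RPC socket /var/tmp/spdk.sock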
00:12:25.384 [2024-12-03 00:46:37.892849] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:12:25.384 [2024-12-03 00:46:37.892944] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:12:25.384 [2024-12-03 00:46:37.893038] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:12:25.384 [2024-12-03 00:46:37.893043] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:26.321 00:46:38 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:12:26.321 00:46:38 -- common/autotest_common.sh@862 -- # return 0 00:12:26.321 00:46:38 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:12:26.321 00:46:38 -- common/autotest_common.sh@728 -- # xtrace_disable 00:12:26.321 00:46:38 -- common/autotest_common.sh@10 -- # set +x 00:12:26.321 00:46:38 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:26.321 00:46:38 -- target/invalid.sh@37 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:12:26.321 00:46:38 -- target/invalid.sh@40 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem -t foobar nqn.2016-06.io.spdk:cnode17708 00:12:26.580 [2024-12-03 00:46:39.028808] nvmf_rpc.c: 401:rpc_nvmf_create_subsystem: *ERROR*: Unable to find target foobar 00:12:26.580 00:46:39 -- target/invalid.sh@40 -- # out='2024/12/03 00:46:39 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[nqn:nqn.2016-06.io.spdk:cnode17708 tgt_name:foobar], err: error received for nvmf_create_subsystem method, err: Code=-32603 Msg=Unable to find target foobar 00:12:26.580 request: 00:12:26.580 { 00:12:26.580 "method": "nvmf_create_subsystem", 00:12:26.580 "params": { 00:12:26.580 "nqn": "nqn.2016-06.io.spdk:cnode17708", 00:12:26.580 "tgt_name": "foobar" 00:12:26.580 } 00:12:26.580 } 00:12:26.580 Got JSON-RPC error response 00:12:26.580 GoRPCClient: error on JSON-RPC call' 00:12:26.580 00:46:39 -- target/invalid.sh@41 -- # [[ 2024/12/03 00:46:39 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[nqn:nqn.2016-06.io.spdk:cnode17708 tgt_name:foobar], err: error received for nvmf_create_subsystem method, err: Code=-32603 Msg=Unable to find target foobar 00:12:26.580 request: 00:12:26.580 { 00:12:26.580 "method": "nvmf_create_subsystem", 00:12:26.580 "params": { 00:12:26.580 "nqn": "nqn.2016-06.io.spdk:cnode17708", 00:12:26.580 "tgt_name": "foobar" 00:12:26.580 } 00:12:26.580 } 00:12:26.580 Got JSON-RPC error response 00:12:26.580 GoRPCClient: error on JSON-RPC call == *\U\n\a\b\l\e\ \t\o\ \f\i\n\d\ \t\a\r\g\e\t* ]] 00:12:26.580 00:46:39 -- target/invalid.sh@45 -- # echo -e '\x1f' 00:12:26.580 00:46:39 -- target/invalid.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem -s $'SPDKISFASTANDAWESOME\037' nqn.2016-06.io.spdk:cnode22999 00:12:26.839 [2024-12-03 00:46:39.293156] nvmf_rpc.c: 418:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode22999: invalid serial number 'SPDKISFASTANDAWESOME' 00:12:26.839 00:46:39 -- target/invalid.sh@45 -- # out='2024/12/03 00:46:39 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[nqn:nqn.2016-06.io.spdk:cnode22999 serial_number:SPDKISFASTANDAWESOME], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid SN SPDKISFASTANDAWESOME 00:12:26.839 request: 00:12:26.839 { 00:12:26.839 "method": "nvmf_create_subsystem", 00:12:26.839 "params": { 00:12:26.839 "nqn": "nqn.2016-06.io.spdk:cnode22999", 00:12:26.839 
"serial_number": "SPDKISFASTANDAWESOME\u001f" 00:12:26.839 } 00:12:26.839 } 00:12:26.839 Got JSON-RPC error response 00:12:26.839 GoRPCClient: error on JSON-RPC call' 00:12:26.839 00:46:39 -- target/invalid.sh@46 -- # [[ 2024/12/03 00:46:39 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[nqn:nqn.2016-06.io.spdk:cnode22999 serial_number:SPDKISFASTANDAWESOME], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid SN SPDKISFASTANDAWESOME 00:12:26.839 request: 00:12:26.839 { 00:12:26.839 "method": "nvmf_create_subsystem", 00:12:26.839 "params": { 00:12:26.839 "nqn": "nqn.2016-06.io.spdk:cnode22999", 00:12:26.839 "serial_number": "SPDKISFASTANDAWESOME\u001f" 00:12:26.839 } 00:12:26.839 } 00:12:26.839 Got JSON-RPC error response 00:12:26.839 GoRPCClient: error on JSON-RPC call == *\I\n\v\a\l\i\d\ \S\N* ]] 00:12:26.839 00:46:39 -- target/invalid.sh@50 -- # echo -e '\x1f' 00:12:26.839 00:46:39 -- target/invalid.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem -d $'SPDK_Controller\037' nqn.2016-06.io.spdk:cnode716 00:12:27.097 [2024-12-03 00:46:39.601539] nvmf_rpc.c: 427:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode716: invalid model number 'SPDK_Controller' 00:12:27.357 00:46:39 -- target/invalid.sh@50 -- # out='2024/12/03 00:46:39 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[model_number:SPDK_Controller nqn:nqn.2016-06.io.spdk:cnode716], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid MN SPDK_Controller 00:12:27.357 request: 00:12:27.357 { 00:12:27.357 "method": "nvmf_create_subsystem", 00:12:27.357 "params": { 00:12:27.357 "nqn": "nqn.2016-06.io.spdk:cnode716", 00:12:27.357 "model_number": "SPDK_Controller\u001f" 00:12:27.357 } 00:12:27.357 } 00:12:27.357 Got JSON-RPC error response 00:12:27.357 GoRPCClient: error on JSON-RPC call' 00:12:27.357 00:46:39 -- target/invalid.sh@51 -- # [[ 2024/12/03 00:46:39 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[model_number:SPDK_Controller nqn:nqn.2016-06.io.spdk:cnode716], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid MN SPDK_Controller 00:12:27.357 request: 00:12:27.357 { 00:12:27.357 "method": "nvmf_create_subsystem", 00:12:27.357 "params": { 00:12:27.357 "nqn": "nqn.2016-06.io.spdk:cnode716", 00:12:27.357 "model_number": "SPDK_Controller\u001f" 00:12:27.357 } 00:12:27.357 } 00:12:27.357 Got JSON-RPC error response 00:12:27.357 GoRPCClient: error on JSON-RPC call == *\I\n\v\a\l\i\d\ \M\N* ]] 00:12:27.357 00:46:39 -- target/invalid.sh@54 -- # gen_random_s 21 00:12:27.357 00:46:39 -- target/invalid.sh@19 -- # local length=21 ll 00:12:27.357 00:46:39 -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:12:27.357 00:46:39 -- target/invalid.sh@21 -- # local chars 00:12:27.357 00:46:39 -- target/invalid.sh@22 -- # local string 00:12:27.357 00:46:39 -- target/invalid.sh@24 -- # (( ll = 0 )) 00:12:27.357 00:46:39 -- target/invalid.sh@24 -- # (( ll < length )) 
00:12:27.357 00:46:39 -- target/invalid.sh@25 -- # printf %x 32 00:12:27.357 00:46:39 -- target/invalid.sh@25 -- # echo -e '\x20' 00:12:27.357 00:46:39 -- target/invalid.sh@25 -- # string+=' ' 00:12:27.357 00:46:39 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:27.357 00:46:39 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:27.357 00:46:39 -- target/invalid.sh@25 -- # printf %x 91 00:12:27.357 00:46:39 -- target/invalid.sh@25 -- # echo -e '\x5b' 00:12:27.357 00:46:39 -- target/invalid.sh@25 -- # string+='[' 00:12:27.357 00:46:39 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:27.357 00:46:39 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:27.357 00:46:39 -- target/invalid.sh@25 -- # printf %x 103 00:12:27.357 00:46:39 -- target/invalid.sh@25 -- # echo -e '\x67' 00:12:27.357 00:46:39 -- target/invalid.sh@25 -- # string+=g 00:12:27.357 00:46:39 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:27.357 00:46:39 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:27.357 00:46:39 -- target/invalid.sh@25 -- # printf %x 68 00:12:27.357 00:46:39 -- target/invalid.sh@25 -- # echo -e '\x44' 00:12:27.357 00:46:39 -- target/invalid.sh@25 -- # string+=D 00:12:27.357 00:46:39 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:27.357 00:46:39 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:27.357 00:46:39 -- target/invalid.sh@25 -- # printf %x 64 00:12:27.357 00:46:39 -- target/invalid.sh@25 -- # echo -e '\x40' 00:12:27.357 00:46:39 -- target/invalid.sh@25 -- # string+=@ 00:12:27.357 00:46:39 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:27.357 00:46:39 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:27.357 00:46:39 -- target/invalid.sh@25 -- # printf %x 113 00:12:27.357 00:46:39 -- target/invalid.sh@25 -- # echo -e '\x71' 00:12:27.357 00:46:39 -- target/invalid.sh@25 -- # string+=q 00:12:27.357 00:46:39 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:27.357 00:46:39 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:27.357 00:46:39 -- target/invalid.sh@25 -- # printf %x 72 00:12:27.357 00:46:39 -- target/invalid.sh@25 -- # echo -e '\x48' 00:12:27.357 00:46:39 -- target/invalid.sh@25 -- # string+=H 00:12:27.357 00:46:39 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:27.357 00:46:39 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:27.357 00:46:39 -- target/invalid.sh@25 -- # printf %x 107 00:12:27.357 00:46:39 -- target/invalid.sh@25 -- # echo -e '\x6b' 00:12:27.357 00:46:39 -- target/invalid.sh@25 -- # string+=k 00:12:27.357 00:46:39 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:27.357 00:46:39 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:27.357 00:46:39 -- target/invalid.sh@25 -- # printf %x 112 00:12:27.357 00:46:39 -- target/invalid.sh@25 -- # echo -e '\x70' 00:12:27.357 00:46:39 -- target/invalid.sh@25 -- # string+=p 00:12:27.357 00:46:39 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:27.357 00:46:39 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:27.357 00:46:39 -- target/invalid.sh@25 -- # printf %x 41 00:12:27.357 00:46:39 -- target/invalid.sh@25 -- # echo -e '\x29' 00:12:27.357 00:46:39 -- target/invalid.sh@25 -- # string+=')' 00:12:27.357 00:46:39 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:27.357 00:46:39 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:27.357 00:46:39 -- target/invalid.sh@25 -- # printf %x 125 00:12:27.357 00:46:39 -- target/invalid.sh@25 -- # echo -e '\x7d' 00:12:27.357 00:46:39 -- target/invalid.sh@25 -- # string+='}' 00:12:27.357 00:46:39 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:27.357 00:46:39 -- target/invalid.sh@24 -- # (( ll < length )) 
00:12:27.357 00:46:39 -- target/invalid.sh@25 -- # printf %x 108 00:12:27.357 00:46:39 -- target/invalid.sh@25 -- # echo -e '\x6c' 00:12:27.357 00:46:39 -- target/invalid.sh@25 -- # string+=l 00:12:27.357 00:46:39 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:27.357 00:46:39 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:27.357 00:46:39 -- target/invalid.sh@25 -- # printf %x 78 00:12:27.357 00:46:39 -- target/invalid.sh@25 -- # echo -e '\x4e' 00:12:27.357 00:46:39 -- target/invalid.sh@25 -- # string+=N 00:12:27.357 00:46:39 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:27.357 00:46:39 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:27.357 00:46:39 -- target/invalid.sh@25 -- # printf %x 76 00:12:27.357 00:46:39 -- target/invalid.sh@25 -- # echo -e '\x4c' 00:12:27.357 00:46:39 -- target/invalid.sh@25 -- # string+=L 00:12:27.357 00:46:39 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:27.357 00:46:39 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:27.357 00:46:39 -- target/invalid.sh@25 -- # printf %x 122 00:12:27.357 00:46:39 -- target/invalid.sh@25 -- # echo -e '\x7a' 00:12:27.357 00:46:39 -- target/invalid.sh@25 -- # string+=z 00:12:27.357 00:46:39 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:27.357 00:46:39 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:27.357 00:46:39 -- target/invalid.sh@25 -- # printf %x 86 00:12:27.357 00:46:39 -- target/invalid.sh@25 -- # echo -e '\x56' 00:12:27.357 00:46:39 -- target/invalid.sh@25 -- # string+=V 00:12:27.357 00:46:39 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:27.357 00:46:39 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:27.357 00:46:39 -- target/invalid.sh@25 -- # printf %x 41 00:12:27.357 00:46:39 -- target/invalid.sh@25 -- # echo -e '\x29' 00:12:27.357 00:46:39 -- target/invalid.sh@25 -- # string+=')' 00:12:27.357 00:46:39 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:27.357 00:46:39 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:27.357 00:46:39 -- target/invalid.sh@25 -- # printf %x 49 00:12:27.357 00:46:39 -- target/invalid.sh@25 -- # echo -e '\x31' 00:12:27.357 00:46:39 -- target/invalid.sh@25 -- # string+=1 00:12:27.357 00:46:39 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:27.357 00:46:39 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:27.357 00:46:39 -- target/invalid.sh@25 -- # printf %x 51 00:12:27.357 00:46:39 -- target/invalid.sh@25 -- # echo -e '\x33' 00:12:27.357 00:46:39 -- target/invalid.sh@25 -- # string+=3 00:12:27.357 00:46:39 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:27.357 00:46:39 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:27.357 00:46:39 -- target/invalid.sh@25 -- # printf %x 71 00:12:27.357 00:46:39 -- target/invalid.sh@25 -- # echo -e '\x47' 00:12:27.357 00:46:39 -- target/invalid.sh@25 -- # string+=G 00:12:27.357 00:46:39 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:27.357 00:46:39 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:27.357 00:46:39 -- target/invalid.sh@25 -- # printf %x 67 00:12:27.357 00:46:39 -- target/invalid.sh@25 -- # echo -e '\x43' 00:12:27.357 00:46:39 -- target/invalid.sh@25 -- # string+=C 00:12:27.357 00:46:39 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:27.357 00:46:39 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:27.357 00:46:39 -- target/invalid.sh@28 -- # [[ == \- ]] 00:12:27.357 00:46:39 -- target/invalid.sh@31 -- # echo ' [gD@qHkp)}lNLzV)13GC' 00:12:27.357 00:46:39 -- target/invalid.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem -s ' [gD@qHkp)}lNLzV)13GC' nqn.2016-06.io.spdk:cnode7050 00:12:27.617 
[2024-12-03 00:46:40.006024] nvmf_rpc.c: 418:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode7050: invalid serial number ' [gD@qHkp)}lNLzV)13GC' 00:12:27.617 00:46:40 -- target/invalid.sh@54 -- # out='2024/12/03 00:46:40 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[nqn:nqn.2016-06.io.spdk:cnode7050 serial_number: [gD@qHkp)}lNLzV)13GC], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid SN [gD@qHkp)}lNLzV)13GC 00:12:27.617 request: 00:12:27.617 { 00:12:27.617 "method": "nvmf_create_subsystem", 00:12:27.617 "params": { 00:12:27.617 "nqn": "nqn.2016-06.io.spdk:cnode7050", 00:12:27.617 "serial_number": " [gD@qHkp)}lNLzV)13GC" 00:12:27.617 } 00:12:27.617 } 00:12:27.617 Got JSON-RPC error response 00:12:27.617 GoRPCClient: error on JSON-RPC call' 00:12:27.617 00:46:40 -- target/invalid.sh@55 -- # [[ 2024/12/03 00:46:40 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[nqn:nqn.2016-06.io.spdk:cnode7050 serial_number: [gD@qHkp)}lNLzV)13GC], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid SN [gD@qHkp)}lNLzV)13GC 00:12:27.617 request: 00:12:27.617 { 00:12:27.617 "method": "nvmf_create_subsystem", 00:12:27.617 "params": { 00:12:27.617 "nqn": "nqn.2016-06.io.spdk:cnode7050", 00:12:27.617 "serial_number": " [gD@qHkp)}lNLzV)13GC" 00:12:27.617 } 00:12:27.617 } 00:12:27.617 Got JSON-RPC error response 00:12:27.617 GoRPCClient: error on JSON-RPC call == *\I\n\v\a\l\i\d\ \S\N* ]] 00:12:27.617 00:46:40 -- target/invalid.sh@58 -- # gen_random_s 41 00:12:27.617 00:46:40 -- target/invalid.sh@19 -- # local length=41 ll 00:12:27.617 00:46:40 -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:12:27.617 00:46:40 -- target/invalid.sh@21 -- # local chars 00:12:27.617 00:46:40 -- target/invalid.sh@22 -- # local string 00:12:27.617 00:46:40 -- target/invalid.sh@24 -- # (( ll = 0 )) 00:12:27.617 00:46:40 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:27.617 00:46:40 -- target/invalid.sh@25 -- # printf %x 102 00:12:27.617 00:46:40 -- target/invalid.sh@25 -- # echo -e '\x66' 00:12:27.617 00:46:40 -- target/invalid.sh@25 -- # string+=f 00:12:27.617 00:46:40 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:27.617 00:46:40 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:27.617 00:46:40 -- target/invalid.sh@25 -- # printf %x 90 00:12:27.617 00:46:40 -- target/invalid.sh@25 -- # echo -e '\x5a' 00:12:27.617 00:46:40 -- target/invalid.sh@25 -- # string+=Z 00:12:27.617 00:46:40 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:27.617 00:46:40 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:27.617 00:46:40 -- target/invalid.sh@25 -- # printf %x 104 00:12:27.617 00:46:40 -- target/invalid.sh@25 -- # echo -e '\x68' 00:12:27.617 00:46:40 -- target/invalid.sh@25 -- # string+=h 00:12:27.617 00:46:40 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:27.617 00:46:40 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:27.617 00:46:40 -- target/invalid.sh@25 -- # printf %x 110 00:12:27.617 00:46:40 -- 
target/invalid.sh@25 -- # echo -e '\x6e' 00:12:27.617 00:46:40 -- target/invalid.sh@25 -- # string+=n 00:12:27.617 00:46:40 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:27.617 00:46:40 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:27.617 00:46:40 -- target/invalid.sh@25 -- # printf %x 39 00:12:27.617 00:46:40 -- target/invalid.sh@25 -- # echo -e '\x27' 00:12:27.617 00:46:40 -- target/invalid.sh@25 -- # string+=\' 00:12:27.617 00:46:40 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:27.617 00:46:40 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:27.617 00:46:40 -- target/invalid.sh@25 -- # printf %x 51 00:12:27.617 00:46:40 -- target/invalid.sh@25 -- # echo -e '\x33' 00:12:27.617 00:46:40 -- target/invalid.sh@25 -- # string+=3 00:12:27.617 00:46:40 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:27.617 00:46:40 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:27.617 00:46:40 -- target/invalid.sh@25 -- # printf %x 119 00:12:27.617 00:46:40 -- target/invalid.sh@25 -- # echo -e '\x77' 00:12:27.617 00:46:40 -- target/invalid.sh@25 -- # string+=w 00:12:27.617 00:46:40 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:27.617 00:46:40 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:27.617 00:46:40 -- target/invalid.sh@25 -- # printf %x 49 00:12:27.617 00:46:40 -- target/invalid.sh@25 -- # echo -e '\x31' 00:12:27.617 00:46:40 -- target/invalid.sh@25 -- # string+=1 00:12:27.617 00:46:40 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:27.617 00:46:40 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:27.617 00:46:40 -- target/invalid.sh@25 -- # printf %x 51 00:12:27.617 00:46:40 -- target/invalid.sh@25 -- # echo -e '\x33' 00:12:27.617 00:46:40 -- target/invalid.sh@25 -- # string+=3 00:12:27.617 00:46:40 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:27.617 00:46:40 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:27.617 00:46:40 -- target/invalid.sh@25 -- # printf %x 70 00:12:27.617 00:46:40 -- target/invalid.sh@25 -- # echo -e '\x46' 00:12:27.617 00:46:40 -- target/invalid.sh@25 -- # string+=F 00:12:27.617 00:46:40 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:27.617 00:46:40 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:27.617 00:46:40 -- target/invalid.sh@25 -- # printf %x 69 00:12:27.617 00:46:40 -- target/invalid.sh@25 -- # echo -e '\x45' 00:12:27.617 00:46:40 -- target/invalid.sh@25 -- # string+=E 00:12:27.617 00:46:40 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:27.617 00:46:40 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:27.617 00:46:40 -- target/invalid.sh@25 -- # printf %x 48 00:12:27.617 00:46:40 -- target/invalid.sh@25 -- # echo -e '\x30' 00:12:27.617 00:46:40 -- target/invalid.sh@25 -- # string+=0 00:12:27.617 00:46:40 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:27.617 00:46:40 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:27.617 00:46:40 -- target/invalid.sh@25 -- # printf %x 48 00:12:27.617 00:46:40 -- target/invalid.sh@25 -- # echo -e '\x30' 00:12:27.617 00:46:40 -- target/invalid.sh@25 -- # string+=0 00:12:27.617 00:46:40 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:27.617 00:46:40 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:27.617 00:46:40 -- target/invalid.sh@25 -- # printf %x 74 00:12:27.617 00:46:40 -- target/invalid.sh@25 -- # echo -e '\x4a' 00:12:27.617 00:46:40 -- target/invalid.sh@25 -- # string+=J 00:12:27.617 00:46:40 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:27.617 00:46:40 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:27.617 00:46:40 -- target/invalid.sh@25 -- # printf %x 35 00:12:27.617 00:46:40 -- 
target/invalid.sh@25 -- # echo -e '\x23' 00:12:27.617 00:46:40 -- target/invalid.sh@25 -- # string+='#' 00:12:27.617 00:46:40 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:27.617 00:46:40 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:27.617 00:46:40 -- target/invalid.sh@25 -- # printf %x 109 00:12:27.617 00:46:40 -- target/invalid.sh@25 -- # echo -e '\x6d' 00:12:27.617 00:46:40 -- target/invalid.sh@25 -- # string+=m 00:12:27.617 00:46:40 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:27.617 00:46:40 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:27.617 00:46:40 -- target/invalid.sh@25 -- # printf %x 74 00:12:27.617 00:46:40 -- target/invalid.sh@25 -- # echo -e '\x4a' 00:12:27.617 00:46:40 -- target/invalid.sh@25 -- # string+=J 00:12:27.617 00:46:40 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:27.617 00:46:40 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:27.876 00:46:40 -- target/invalid.sh@25 -- # printf %x 76 00:12:27.876 00:46:40 -- target/invalid.sh@25 -- # echo -e '\x4c' 00:12:27.876 00:46:40 -- target/invalid.sh@25 -- # string+=L 00:12:27.876 00:46:40 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:27.876 00:46:40 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:27.876 00:46:40 -- target/invalid.sh@25 -- # printf %x 44 00:12:27.876 00:46:40 -- target/invalid.sh@25 -- # echo -e '\x2c' 00:12:27.876 00:46:40 -- target/invalid.sh@25 -- # string+=, 00:12:27.876 00:46:40 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:27.876 00:46:40 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:27.876 00:46:40 -- target/invalid.sh@25 -- # printf %x 104 00:12:27.876 00:46:40 -- target/invalid.sh@25 -- # echo -e '\x68' 00:12:27.876 00:46:40 -- target/invalid.sh@25 -- # string+=h 00:12:27.876 00:46:40 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:27.876 00:46:40 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:27.876 00:46:40 -- target/invalid.sh@25 -- # printf %x 88 00:12:27.876 00:46:40 -- target/invalid.sh@25 -- # echo -e '\x58' 00:12:27.876 00:46:40 -- target/invalid.sh@25 -- # string+=X 00:12:27.876 00:46:40 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:27.876 00:46:40 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:27.876 00:46:40 -- target/invalid.sh@25 -- # printf %x 99 00:12:27.876 00:46:40 -- target/invalid.sh@25 -- # echo -e '\x63' 00:12:27.876 00:46:40 -- target/invalid.sh@25 -- # string+=c 00:12:27.876 00:46:40 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:27.876 00:46:40 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:27.876 00:46:40 -- target/invalid.sh@25 -- # printf %x 118 00:12:27.876 00:46:40 -- target/invalid.sh@25 -- # echo -e '\x76' 00:12:27.876 00:46:40 -- target/invalid.sh@25 -- # string+=v 00:12:27.876 00:46:40 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:27.876 00:46:40 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:27.876 00:46:40 -- target/invalid.sh@25 -- # printf %x 112 00:12:27.876 00:46:40 -- target/invalid.sh@25 -- # echo -e '\x70' 00:12:27.876 00:46:40 -- target/invalid.sh@25 -- # string+=p 00:12:27.876 00:46:40 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:27.876 00:46:40 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:27.876 00:46:40 -- target/invalid.sh@25 -- # printf %x 126 00:12:27.876 00:46:40 -- target/invalid.sh@25 -- # echo -e '\x7e' 00:12:27.876 00:46:40 -- target/invalid.sh@25 -- # string+='~' 00:12:27.876 00:46:40 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:27.876 00:46:40 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:27.876 00:46:40 -- target/invalid.sh@25 -- # printf %x 114 00:12:27.876 00:46:40 -- 
target/invalid.sh@25 -- # echo -e '\x72' 00:12:27.876 00:46:40 -- target/invalid.sh@25 -- # string+=r 00:12:27.876 00:46:40 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:27.876 00:46:40 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:27.876 00:46:40 -- target/invalid.sh@25 -- # printf %x 52 00:12:27.876 00:46:40 -- target/invalid.sh@25 -- # echo -e '\x34' 00:12:27.876 00:46:40 -- target/invalid.sh@25 -- # string+=4 00:12:27.877 00:46:40 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:27.877 00:46:40 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:27.877 00:46:40 -- target/invalid.sh@25 -- # printf %x 77 00:12:27.877 00:46:40 -- target/invalid.sh@25 -- # echo -e '\x4d' 00:12:27.877 00:46:40 -- target/invalid.sh@25 -- # string+=M 00:12:27.877 00:46:40 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:27.877 00:46:40 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:27.877 00:46:40 -- target/invalid.sh@25 -- # printf %x 47 00:12:27.877 00:46:40 -- target/invalid.sh@25 -- # echo -e '\x2f' 00:12:27.877 00:46:40 -- target/invalid.sh@25 -- # string+=/ 00:12:27.877 00:46:40 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:27.877 00:46:40 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:27.877 00:46:40 -- target/invalid.sh@25 -- # printf %x 73 00:12:27.877 00:46:40 -- target/invalid.sh@25 -- # echo -e '\x49' 00:12:27.877 00:46:40 -- target/invalid.sh@25 -- # string+=I 00:12:27.877 00:46:40 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:27.877 00:46:40 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:27.877 00:46:40 -- target/invalid.sh@25 -- # printf %x 117 00:12:27.877 00:46:40 -- target/invalid.sh@25 -- # echo -e '\x75' 00:12:27.877 00:46:40 -- target/invalid.sh@25 -- # string+=u 00:12:27.877 00:46:40 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:27.877 00:46:40 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:27.877 00:46:40 -- target/invalid.sh@25 -- # printf %x 70 00:12:27.877 00:46:40 -- target/invalid.sh@25 -- # echo -e '\x46' 00:12:27.877 00:46:40 -- target/invalid.sh@25 -- # string+=F 00:12:27.877 00:46:40 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:27.877 00:46:40 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:27.877 00:46:40 -- target/invalid.sh@25 -- # printf %x 72 00:12:27.877 00:46:40 -- target/invalid.sh@25 -- # echo -e '\x48' 00:12:27.877 00:46:40 -- target/invalid.sh@25 -- # string+=H 00:12:27.877 00:46:40 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:27.877 00:46:40 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:27.877 00:46:40 -- target/invalid.sh@25 -- # printf %x 51 00:12:27.877 00:46:40 -- target/invalid.sh@25 -- # echo -e '\x33' 00:12:27.877 00:46:40 -- target/invalid.sh@25 -- # string+=3 00:12:27.877 00:46:40 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:27.877 00:46:40 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:27.877 00:46:40 -- target/invalid.sh@25 -- # printf %x 42 00:12:27.877 00:46:40 -- target/invalid.sh@25 -- # echo -e '\x2a' 00:12:27.877 00:46:40 -- target/invalid.sh@25 -- # string+='*' 00:12:27.877 00:46:40 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:27.877 00:46:40 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:27.877 00:46:40 -- target/invalid.sh@25 -- # printf %x 35 00:12:27.877 00:46:40 -- target/invalid.sh@25 -- # echo -e '\x23' 00:12:27.877 00:46:40 -- target/invalid.sh@25 -- # string+='#' 00:12:27.877 00:46:40 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:27.877 00:46:40 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:27.877 00:46:40 -- target/invalid.sh@25 -- # printf %x 127 00:12:27.877 00:46:40 -- 
target/invalid.sh@25 -- # echo -e '\x7f' 00:12:27.877 00:46:40 -- target/invalid.sh@25 -- # string+=$'\177' 00:12:27.877 00:46:40 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:27.877 00:46:40 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:27.877 00:46:40 -- target/invalid.sh@25 -- # printf %x 67 00:12:27.877 00:46:40 -- target/invalid.sh@25 -- # echo -e '\x43' 00:12:27.877 00:46:40 -- target/invalid.sh@25 -- # string+=C 00:12:27.877 00:46:40 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:27.877 00:46:40 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:27.877 00:46:40 -- target/invalid.sh@25 -- # printf %x 49 00:12:27.877 00:46:40 -- target/invalid.sh@25 -- # echo -e '\x31' 00:12:27.877 00:46:40 -- target/invalid.sh@25 -- # string+=1 00:12:27.877 00:46:40 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:27.877 00:46:40 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:27.877 00:46:40 -- target/invalid.sh@25 -- # printf %x 101 00:12:27.877 00:46:40 -- target/invalid.sh@25 -- # echo -e '\x65' 00:12:27.877 00:46:40 -- target/invalid.sh@25 -- # string+=e 00:12:27.877 00:46:40 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:27.877 00:46:40 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:27.877 00:46:40 -- target/invalid.sh@25 -- # printf %x 43 00:12:27.877 00:46:40 -- target/invalid.sh@25 -- # echo -e '\x2b' 00:12:27.877 00:46:40 -- target/invalid.sh@25 -- # string+=+ 00:12:27.877 00:46:40 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:27.877 00:46:40 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:27.877 00:46:40 -- target/invalid.sh@28 -- # [[ f == \- ]] 00:12:27.877 00:46:40 -- target/invalid.sh@31 -- # echo 'fZhn'\''3w13FE00J#mJL,hXcvp~r4M/IuFH3*#C1e+' 00:12:27.877 00:46:40 -- target/invalid.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem -d 'fZhn'\''3w13FE00J#mJL,hXcvp~r4M/IuFH3*#C1e+' nqn.2016-06.io.spdk:cnode13471 00:12:28.136 [2024-12-03 00:46:40.522854] nvmf_rpc.c: 427:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode13471: invalid model number 'fZhn'3w13FE00J#mJL,hXcvp~r4M/IuFH3*#C1e+' 00:12:28.136 00:46:40 -- target/invalid.sh@58 -- # out='2024/12/03 00:46:40 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[model_number:fZhn'\''3w13FE00J#mJL,hXcvp~r4M/IuFH3*#C1e+ nqn:nqn.2016-06.io.spdk:cnode13471], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid MN fZhn'\''3w13FE00J#mJL,hXcvp~r4M/IuFH3*#C1e+ 00:12:28.136 request: 00:12:28.136 { 00:12:28.136 "method": "nvmf_create_subsystem", 00:12:28.136 "params": { 00:12:28.136 "nqn": "nqn.2016-06.io.spdk:cnode13471", 00:12:28.136 "model_number": "fZhn'\''3w13FE00J#mJL,hXcvp~r4M/IuFH3*#\u007fC1e+" 00:12:28.136 } 00:12:28.136 } 00:12:28.136 Got JSON-RPC error response 00:12:28.136 GoRPCClient: error on JSON-RPC call' 00:12:28.136 00:46:40 -- target/invalid.sh@59 -- # [[ 2024/12/03 00:46:40 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[model_number:fZhn'3w13FE00J#mJL,hXcvp~r4M/IuFH3*#C1e+ nqn:nqn.2016-06.io.spdk:cnode13471], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid MN fZhn'3w13FE00J#mJL,hXcvp~r4M/IuFH3*#C1e+ 00:12:28.136 request: 00:12:28.136 { 00:12:28.136 "method": "nvmf_create_subsystem", 00:12:28.136 "params": { 00:12:28.136 "nqn": "nqn.2016-06.io.spdk:cnode13471", 00:12:28.136 "model_number": "fZhn'3w13FE00J#mJL,hXcvp~r4M/IuFH3*#\u007fC1e+" 00:12:28.136 } 00:12:28.136 } 00:12:28.136 Got JSON-RPC error response 00:12:28.136 GoRPCClient: error on JSON-RPC call == 
*\I\n\v\a\l\i\d\ \M\N* ]] 00:12:28.136 00:46:40 -- target/invalid.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport --trtype tcp 00:12:28.395 [2024-12-03 00:46:40.823280] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:28.395 00:46:40 -- target/invalid.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode -s SPDK001 -a 00:12:28.653 00:46:41 -- target/invalid.sh@64 -- # [[ tcp == \T\C\P ]] 00:12:28.653 00:46:41 -- target/invalid.sh@67 -- # echo '' 00:12:28.653 00:46:41 -- target/invalid.sh@67 -- # head -n 1 00:12:28.653 00:46:41 -- target/invalid.sh@67 -- # IP= 00:12:28.653 00:46:41 -- target/invalid.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode -t tcp -a '' -s 4421 00:12:28.911 [2024-12-03 00:46:41.397696] nvmf_rpc.c: 783:nvmf_rpc_listen_paused: *ERROR*: Unable to remove listener, rc -2 00:12:28.911 00:46:41 -- target/invalid.sh@69 -- # out='2024/12/03 00:46:41 error on JSON-RPC call, method: nvmf_subsystem_remove_listener, params: map[listen_address:map[traddr: trsvcid:4421 trtype:tcp] nqn:nqn.2016-06.io.spdk:cnode], err: error received for nvmf_subsystem_remove_listener method, err: Code=-32602 Msg=Invalid parameters 00:12:28.911 request: 00:12:28.911 { 00:12:28.911 "method": "nvmf_subsystem_remove_listener", 00:12:28.911 "params": { 00:12:28.911 "nqn": "nqn.2016-06.io.spdk:cnode", 00:12:28.911 "listen_address": { 00:12:28.911 "trtype": "tcp", 00:12:28.911 "traddr": "", 00:12:28.911 "trsvcid": "4421" 00:12:28.911 } 00:12:28.911 } 00:12:28.911 } 00:12:28.911 Got JSON-RPC error response 00:12:28.911 GoRPCClient: error on JSON-RPC call' 00:12:28.911 00:46:41 -- target/invalid.sh@70 -- # [[ 2024/12/03 00:46:41 error on JSON-RPC call, method: nvmf_subsystem_remove_listener, params: map[listen_address:map[traddr: trsvcid:4421 trtype:tcp] nqn:nqn.2016-06.io.spdk:cnode], err: error received for nvmf_subsystem_remove_listener method, err: Code=-32602 Msg=Invalid parameters 00:12:28.911 request: 00:12:28.911 { 00:12:28.911 "method": "nvmf_subsystem_remove_listener", 00:12:28.911 "params": { 00:12:28.911 "nqn": "nqn.2016-06.io.spdk:cnode", 00:12:28.911 "listen_address": { 00:12:28.911 "trtype": "tcp", 00:12:28.911 "traddr": "", 00:12:28.911 "trsvcid": "4421" 00:12:28.911 } 00:12:28.911 } 00:12:28.911 } 00:12:28.911 Got JSON-RPC error response 00:12:28.911 GoRPCClient: error on JSON-RPC call != *\U\n\a\b\l\e\ \t\o\ \s\t\o\p\ \l\i\s\t\e\n\e\r\.* ]] 00:12:28.911 00:46:41 -- target/invalid.sh@73 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode14702 -i 0 00:12:29.169 [2024-12-03 00:46:41.665985] nvmf_rpc.c: 439:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode14702: invalid cntlid range [0-65519] 00:12:29.427 00:46:41 -- target/invalid.sh@73 -- # out='2024/12/03 00:46:41 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[min_cntlid:0 nqn:nqn.2016-06.io.spdk:cnode14702], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid cntlid range [0-65519] 00:12:29.427 request: 00:12:29.427 { 00:12:29.427 "method": "nvmf_create_subsystem", 00:12:29.427 "params": { 00:12:29.427 "nqn": "nqn.2016-06.io.spdk:cnode14702", 00:12:29.427 "min_cntlid": 0 00:12:29.427 } 00:12:29.427 } 00:12:29.427 Got JSON-RPC error response 00:12:29.427 GoRPCClient: error on JSON-RPC call' 00:12:29.427 00:46:41 -- target/invalid.sh@74 -- # [[ 2024/12/03 00:46:41 
error on JSON-RPC call, method: nvmf_create_subsystem, params: map[min_cntlid:0 nqn:nqn.2016-06.io.spdk:cnode14702], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid cntlid range [0-65519] 00:12:29.427 request: 00:12:29.427 { 00:12:29.427 "method": "nvmf_create_subsystem", 00:12:29.427 "params": { 00:12:29.427 "nqn": "nqn.2016-06.io.spdk:cnode14702", 00:12:29.427 "min_cntlid": 0 00:12:29.427 } 00:12:29.427 } 00:12:29.427 Got JSON-RPC error response 00:12:29.427 GoRPCClient: error on JSON-RPC call == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:12:29.427 00:46:41 -- target/invalid.sh@75 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode20137 -i 65520 00:12:29.686 [2024-12-03 00:46:41.958395] nvmf_rpc.c: 439:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode20137: invalid cntlid range [65520-65519] 00:12:29.687 00:46:41 -- target/invalid.sh@75 -- # out='2024/12/03 00:46:41 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[min_cntlid:65520 nqn:nqn.2016-06.io.spdk:cnode20137], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid cntlid range [65520-65519] 00:12:29.687 request: 00:12:29.687 { 00:12:29.687 "method": "nvmf_create_subsystem", 00:12:29.687 "params": { 00:12:29.687 "nqn": "nqn.2016-06.io.spdk:cnode20137", 00:12:29.687 "min_cntlid": 65520 00:12:29.687 } 00:12:29.687 } 00:12:29.687 Got JSON-RPC error response 00:12:29.687 GoRPCClient: error on JSON-RPC call' 00:12:29.687 00:46:41 -- target/invalid.sh@76 -- # [[ 2024/12/03 00:46:41 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[min_cntlid:65520 nqn:nqn.2016-06.io.spdk:cnode20137], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid cntlid range [65520-65519] 00:12:29.687 request: 00:12:29.687 { 00:12:29.687 "method": "nvmf_create_subsystem", 00:12:29.687 "params": { 00:12:29.687 "nqn": "nqn.2016-06.io.spdk:cnode20137", 00:12:29.687 "min_cntlid": 65520 00:12:29.687 } 00:12:29.687 } 00:12:29.687 Got JSON-RPC error response 00:12:29.687 GoRPCClient: error on JSON-RPC call == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:12:29.687 00:46:41 -- target/invalid.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode751 -I 0 00:12:29.946 [2024-12-03 00:46:42.222842] nvmf_rpc.c: 439:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode751: invalid cntlid range [1-0] 00:12:29.946 00:46:42 -- target/invalid.sh@77 -- # out='2024/12/03 00:46:42 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[max_cntlid:0 nqn:nqn.2016-06.io.spdk:cnode751], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid cntlid range [1-0] 00:12:29.946 request: 00:12:29.946 { 00:12:29.946 "method": "nvmf_create_subsystem", 00:12:29.946 "params": { 00:12:29.946 "nqn": "nqn.2016-06.io.spdk:cnode751", 00:12:29.946 "max_cntlid": 0 00:12:29.946 } 00:12:29.946 } 00:12:29.946 Got JSON-RPC error response 00:12:29.946 GoRPCClient: error on JSON-RPC call' 00:12:29.946 00:46:42 -- target/invalid.sh@78 -- # [[ 2024/12/03 00:46:42 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[max_cntlid:0 nqn:nqn.2016-06.io.spdk:cnode751], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid cntlid range [1-0] 00:12:29.946 request: 00:12:29.946 { 00:12:29.946 "method": "nvmf_create_subsystem", 00:12:29.946 "params": { 00:12:29.946 "nqn": 
"nqn.2016-06.io.spdk:cnode751", 00:12:29.946 "max_cntlid": 0 00:12:29.946 } 00:12:29.946 } 00:12:29.946 Got JSON-RPC error response 00:12:29.946 GoRPCClient: error on JSON-RPC call == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:12:29.946 00:46:42 -- target/invalid.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode11325 -I 65520 00:12:30.227 [2024-12-03 00:46:42.499224] nvmf_rpc.c: 439:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode11325: invalid cntlid range [1-65520] 00:12:30.227 00:46:42 -- target/invalid.sh@79 -- # out='2024/12/03 00:46:42 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[max_cntlid:65520 nqn:nqn.2016-06.io.spdk:cnode11325], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid cntlid range [1-65520] 00:12:30.227 request: 00:12:30.227 { 00:12:30.227 "method": "nvmf_create_subsystem", 00:12:30.227 "params": { 00:12:30.227 "nqn": "nqn.2016-06.io.spdk:cnode11325", 00:12:30.227 "max_cntlid": 65520 00:12:30.227 } 00:12:30.227 } 00:12:30.227 Got JSON-RPC error response 00:12:30.227 GoRPCClient: error on JSON-RPC call' 00:12:30.227 00:46:42 -- target/invalid.sh@80 -- # [[ 2024/12/03 00:46:42 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[max_cntlid:65520 nqn:nqn.2016-06.io.spdk:cnode11325], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid cntlid range [1-65520] 00:12:30.227 request: 00:12:30.227 { 00:12:30.227 "method": "nvmf_create_subsystem", 00:12:30.227 "params": { 00:12:30.227 "nqn": "nqn.2016-06.io.spdk:cnode11325", 00:12:30.227 "max_cntlid": 65520 00:12:30.227 } 00:12:30.227 } 00:12:30.227 Got JSON-RPC error response 00:12:30.227 GoRPCClient: error on JSON-RPC call == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:12:30.227 00:46:42 -- target/invalid.sh@83 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode28957 -i 6 -I 5 00:12:30.486 [2024-12-03 00:46:42.791668] nvmf_rpc.c: 439:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode28957: invalid cntlid range [6-5] 00:12:30.486 00:46:42 -- target/invalid.sh@83 -- # out='2024/12/03 00:46:42 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[max_cntlid:5 min_cntlid:6 nqn:nqn.2016-06.io.spdk:cnode28957], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid cntlid range [6-5] 00:12:30.486 request: 00:12:30.486 { 00:12:30.486 "method": "nvmf_create_subsystem", 00:12:30.486 "params": { 00:12:30.486 "nqn": "nqn.2016-06.io.spdk:cnode28957", 00:12:30.486 "min_cntlid": 6, 00:12:30.486 "max_cntlid": 5 00:12:30.486 } 00:12:30.486 } 00:12:30.486 Got JSON-RPC error response 00:12:30.486 GoRPCClient: error on JSON-RPC call' 00:12:30.486 00:46:42 -- target/invalid.sh@84 -- # [[ 2024/12/03 00:46:42 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[max_cntlid:5 min_cntlid:6 nqn:nqn.2016-06.io.spdk:cnode28957], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid cntlid range [6-5] 00:12:30.486 request: 00:12:30.486 { 00:12:30.486 "method": "nvmf_create_subsystem", 00:12:30.486 "params": { 00:12:30.486 "nqn": "nqn.2016-06.io.spdk:cnode28957", 00:12:30.486 "min_cntlid": 6, 00:12:30.486 "max_cntlid": 5 00:12:30.486 } 00:12:30.486 } 00:12:30.486 Got JSON-RPC error response 00:12:30.486 GoRPCClient: error on JSON-RPC call == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:12:30.486 00:46:42 -- 
target/invalid.sh@87 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target --name foobar 00:12:30.486 00:46:42 -- target/invalid.sh@87 -- # out='request: 00:12:30.486 { 00:12:30.486 "name": "foobar", 00:12:30.486 "method": "nvmf_delete_target", 00:12:30.486 "req_id": 1 00:12:30.486 } 00:12:30.486 Got JSON-RPC error response 00:12:30.486 response: 00:12:30.486 { 00:12:30.486 "code": -32602, 00:12:30.486 "message": "The specified target doesn'\''t exist, cannot delete it." 00:12:30.486 }' 00:12:30.486 00:46:42 -- target/invalid.sh@88 -- # [[ request: 00:12:30.486 { 00:12:30.486 "name": "foobar", 00:12:30.486 "method": "nvmf_delete_target", 00:12:30.486 "req_id": 1 00:12:30.486 } 00:12:30.486 Got JSON-RPC error response 00:12:30.486 response: 00:12:30.486 { 00:12:30.486 "code": -32602, 00:12:30.486 "message": "The specified target doesn't exist, cannot delete it." 00:12:30.486 } == *\T\h\e\ \s\p\e\c\i\f\i\e\d\ \t\a\r\g\e\t\ \d\o\e\s\n\'\t\ \e\x\i\s\t\,\ \c\a\n\n\o\t\ \d\e\l\e\t\e\ \i\t\.* ]] 00:12:30.486 00:46:42 -- target/invalid.sh@90 -- # trap - SIGINT SIGTERM EXIT 00:12:30.486 00:46:42 -- target/invalid.sh@91 -- # nvmftestfini 00:12:30.486 00:46:42 -- nvmf/common.sh@476 -- # nvmfcleanup 00:12:30.486 00:46:42 -- nvmf/common.sh@116 -- # sync 00:12:30.486 00:46:42 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:12:30.486 00:46:42 -- nvmf/common.sh@119 -- # set +e 00:12:30.486 00:46:42 -- nvmf/common.sh@120 -- # for i in {1..20} 00:12:30.486 00:46:42 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:12:30.486 rmmod nvme_tcp 00:12:30.743 rmmod nvme_fabrics 00:12:30.743 rmmod nvme_keyring 00:12:30.743 00:46:43 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:12:30.743 00:46:43 -- nvmf/common.sh@123 -- # set -e 00:12:30.743 00:46:43 -- nvmf/common.sh@124 -- # return 0 00:12:30.743 00:46:43 -- nvmf/common.sh@477 -- # '[' -n 78566 ']' 00:12:30.743 00:46:43 -- nvmf/common.sh@478 -- # killprocess 78566 00:12:30.743 00:46:43 -- common/autotest_common.sh@936 -- # '[' -z 78566 ']' 00:12:30.743 00:46:43 -- common/autotest_common.sh@940 -- # kill -0 78566 00:12:30.743 00:46:43 -- common/autotest_common.sh@941 -- # uname 00:12:30.743 00:46:43 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:12:30.743 00:46:43 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 78566 00:12:30.743 killing process with pid 78566 00:12:30.743 00:46:43 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:12:30.743 00:46:43 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:12:30.743 00:46:43 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 78566' 00:12:30.743 00:46:43 -- common/autotest_common.sh@955 -- # kill 78566 00:12:30.743 00:46:43 -- common/autotest_common.sh@960 -- # wait 78566 00:12:31.001 00:46:43 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:12:31.001 00:46:43 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:12:31.001 00:46:43 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:12:31.001 00:46:43 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:12:31.001 00:46:43 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:12:31.001 00:46:43 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:31.001 00:46:43 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:31.001 00:46:43 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:31.001 00:46:43 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:12:31.001 
************************************ 00:12:31.001 END TEST nvmf_invalid 00:12:31.001 ************************************ 00:12:31.001 00:12:31.001 real 0m6.269s 00:12:31.001 user 0m25.207s 00:12:31.001 sys 0m1.374s 00:12:31.001 00:46:43 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:12:31.001 00:46:43 -- common/autotest_common.sh@10 -- # set +x 00:12:31.001 00:46:43 -- nvmf/nvmf.sh@31 -- # run_test nvmf_abort /home/vagrant/spdk_repo/spdk/test/nvmf/target/abort.sh --transport=tcp 00:12:31.001 00:46:43 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:12:31.001 00:46:43 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:12:31.001 00:46:43 -- common/autotest_common.sh@10 -- # set +x 00:12:31.001 ************************************ 00:12:31.001 START TEST nvmf_abort 00:12:31.001 ************************************ 00:12:31.001 00:46:43 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/abort.sh --transport=tcp 00:12:31.001 * Looking for test storage... 00:12:31.001 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:12:31.001 00:46:43 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:12:31.001 00:46:43 -- common/autotest_common.sh@1690 -- # lcov --version 00:12:31.001 00:46:43 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:12:31.259 00:46:43 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:12:31.259 00:46:43 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:12:31.259 00:46:43 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:12:31.259 00:46:43 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:12:31.259 00:46:43 -- scripts/common.sh@335 -- # IFS=.-: 00:12:31.259 00:46:43 -- scripts/common.sh@335 -- # read -ra ver1 00:12:31.259 00:46:43 -- scripts/common.sh@336 -- # IFS=.-: 00:12:31.259 00:46:43 -- scripts/common.sh@336 -- # read -ra ver2 00:12:31.259 00:46:43 -- scripts/common.sh@337 -- # local 'op=<' 00:12:31.259 00:46:43 -- scripts/common.sh@339 -- # ver1_l=2 00:12:31.259 00:46:43 -- scripts/common.sh@340 -- # ver2_l=1 00:12:31.259 00:46:43 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:12:31.259 00:46:43 -- scripts/common.sh@343 -- # case "$op" in 00:12:31.259 00:46:43 -- scripts/common.sh@344 -- # : 1 00:12:31.259 00:46:43 -- scripts/common.sh@363 -- # (( v = 0 )) 00:12:31.259 00:46:43 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:12:31.259 00:46:43 -- scripts/common.sh@364 -- # decimal 1 00:12:31.259 00:46:43 -- scripts/common.sh@352 -- # local d=1 00:12:31.259 00:46:43 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:31.259 00:46:43 -- scripts/common.sh@354 -- # echo 1 00:12:31.259 00:46:43 -- scripts/common.sh@364 -- # ver1[v]=1 00:12:31.259 00:46:43 -- scripts/common.sh@365 -- # decimal 2 00:12:31.259 00:46:43 -- scripts/common.sh@352 -- # local d=2 00:12:31.259 00:46:43 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:31.259 00:46:43 -- scripts/common.sh@354 -- # echo 2 00:12:31.259 00:46:43 -- scripts/common.sh@365 -- # ver2[v]=2 00:12:31.259 00:46:43 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:12:31.259 00:46:43 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:12:31.259 00:46:43 -- scripts/common.sh@367 -- # return 0 00:12:31.259 00:46:43 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:31.259 00:46:43 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:12:31.259 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:31.259 --rc genhtml_branch_coverage=1 00:12:31.259 --rc genhtml_function_coverage=1 00:12:31.259 --rc genhtml_legend=1 00:12:31.259 --rc geninfo_all_blocks=1 00:12:31.259 --rc geninfo_unexecuted_blocks=1 00:12:31.259 00:12:31.259 ' 00:12:31.259 00:46:43 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:12:31.259 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:31.259 --rc genhtml_branch_coverage=1 00:12:31.259 --rc genhtml_function_coverage=1 00:12:31.259 --rc genhtml_legend=1 00:12:31.259 --rc geninfo_all_blocks=1 00:12:31.259 --rc geninfo_unexecuted_blocks=1 00:12:31.259 00:12:31.259 ' 00:12:31.259 00:46:43 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:12:31.259 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:31.259 --rc genhtml_branch_coverage=1 00:12:31.259 --rc genhtml_function_coverage=1 00:12:31.259 --rc genhtml_legend=1 00:12:31.259 --rc geninfo_all_blocks=1 00:12:31.259 --rc geninfo_unexecuted_blocks=1 00:12:31.259 00:12:31.259 ' 00:12:31.259 00:46:43 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:12:31.259 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:31.259 --rc genhtml_branch_coverage=1 00:12:31.259 --rc genhtml_function_coverage=1 00:12:31.259 --rc genhtml_legend=1 00:12:31.259 --rc geninfo_all_blocks=1 00:12:31.259 --rc geninfo_unexecuted_blocks=1 00:12:31.259 00:12:31.259 ' 00:12:31.259 00:46:43 -- target/abort.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:12:31.259 00:46:43 -- nvmf/common.sh@7 -- # uname -s 00:12:31.259 00:46:43 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:31.259 00:46:43 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:31.259 00:46:43 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:31.259 00:46:43 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:31.259 00:46:43 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:31.259 00:46:43 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:31.259 00:46:43 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:31.259 00:46:43 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:31.259 00:46:43 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:31.259 00:46:43 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:31.259 00:46:43 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:15939434-fa82-47b6-ae7c-b62ba203cee8 00:12:31.259 
00:46:43 -- nvmf/common.sh@18 -- # NVME_HOSTID=15939434-fa82-47b6-ae7c-b62ba203cee8 00:12:31.259 00:46:43 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:31.259 00:46:43 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:31.259 00:46:43 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:12:31.259 00:46:43 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:12:31.259 00:46:43 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:31.259 00:46:43 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:31.259 00:46:43 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:31.259 00:46:43 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:31.259 00:46:43 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:31.259 00:46:43 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:31.259 00:46:43 -- paths/export.sh@5 -- # export PATH 00:12:31.259 00:46:43 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:31.259 00:46:43 -- nvmf/common.sh@46 -- # : 0 00:12:31.259 00:46:43 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:12:31.259 00:46:43 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:12:31.259 00:46:43 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:12:31.259 00:46:43 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:31.259 00:46:43 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:31.259 00:46:43 -- nvmf/common.sh@32 -- # '[' -n '' ']' 
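Part of sourcing nvmf/common.sh above is deriving the host identity: NVME_HOSTNQN comes from nvme gen-hostnqn and NVME_HOSTID reuses the UUID embedded in it. A minimal stand-alone sketch of that relationship (the parameter expansion is one way to split the value, not necessarily the exact expression common.sh uses):

  NVME_HOSTNQN=$(nvme gen-hostnqn)      # e.g. nqn.2014-08.org.nvmexpress:uuid:<uuid>
  NVME_HOSTID=${NVME_HOSTNQN##*:}       # keep only the UUID after the last ':'
  echo "$NVME_HOSTNQN" "$NVME_HOSTID"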
00:12:31.259 00:46:43 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:12:31.259 00:46:43 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:12:31.259 00:46:43 -- target/abort.sh@11 -- # MALLOC_BDEV_SIZE=64 00:12:31.259 00:46:43 -- target/abort.sh@12 -- # MALLOC_BLOCK_SIZE=4096 00:12:31.259 00:46:43 -- target/abort.sh@14 -- # nvmftestinit 00:12:31.259 00:46:43 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:12:31.259 00:46:43 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:31.259 00:46:43 -- nvmf/common.sh@436 -- # prepare_net_devs 00:12:31.259 00:46:43 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:12:31.259 00:46:43 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:12:31.259 00:46:43 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:31.259 00:46:43 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:31.259 00:46:43 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:31.259 00:46:43 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:12:31.259 00:46:43 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:12:31.259 00:46:43 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:12:31.259 00:46:43 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:12:31.259 00:46:43 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:12:31.259 00:46:43 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:12:31.259 00:46:43 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:31.259 00:46:43 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:31.259 00:46:43 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:12:31.259 00:46:43 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:12:31.259 00:46:43 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:12:31.259 00:46:43 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:12:31.260 00:46:43 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:12:31.260 00:46:43 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:31.260 00:46:43 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:12:31.260 00:46:43 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:12:31.260 00:46:43 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:12:31.260 00:46:43 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:12:31.260 00:46:43 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:12:31.260 00:46:43 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:12:31.260 Cannot find device "nvmf_tgt_br" 00:12:31.260 00:46:43 -- nvmf/common.sh@154 -- # true 00:12:31.260 00:46:43 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:12:31.260 Cannot find device "nvmf_tgt_br2" 00:12:31.260 00:46:43 -- nvmf/common.sh@155 -- # true 00:12:31.260 00:46:43 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:12:31.260 00:46:43 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:12:31.260 Cannot find device "nvmf_tgt_br" 00:12:31.260 00:46:43 -- nvmf/common.sh@157 -- # true 00:12:31.260 00:46:43 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:12:31.260 Cannot find device "nvmf_tgt_br2" 00:12:31.260 00:46:43 -- nvmf/common.sh@158 -- # true 00:12:31.260 00:46:43 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:12:31.260 00:46:43 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:12:31.260 00:46:43 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:12:31.260 Cannot open network namespace 
"nvmf_tgt_ns_spdk": No such file or directory 00:12:31.260 00:46:43 -- nvmf/common.sh@161 -- # true 00:12:31.260 00:46:43 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:12:31.260 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:12:31.260 00:46:43 -- nvmf/common.sh@162 -- # true 00:12:31.260 00:46:43 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:12:31.260 00:46:43 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:12:31.260 00:46:43 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:12:31.260 00:46:43 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:12:31.260 00:46:43 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:12:31.260 00:46:43 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:12:31.519 00:46:43 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:12:31.519 00:46:43 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:12:31.519 00:46:43 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:12:31.519 00:46:43 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:12:31.519 00:46:43 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:12:31.519 00:46:43 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:12:31.519 00:46:43 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:12:31.519 00:46:43 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:12:31.519 00:46:43 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:12:31.519 00:46:43 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:12:31.519 00:46:43 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:12:31.519 00:46:43 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:12:31.519 00:46:43 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:12:31.519 00:46:43 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:12:31.519 00:46:43 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:12:31.519 00:46:43 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:12:31.519 00:46:43 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:12:31.519 00:46:43 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:12:31.519 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:31.519 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.053 ms 00:12:31.519 00:12:31.519 --- 10.0.0.2 ping statistics --- 00:12:31.519 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:31.519 rtt min/avg/max/mdev = 0.053/0.053/0.053/0.000 ms 00:12:31.519 00:46:43 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:12:31.519 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:12:31.519 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.033 ms 00:12:31.519 00:12:31.519 --- 10.0.0.3 ping statistics --- 00:12:31.519 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:31.519 rtt min/avg/max/mdev = 0.033/0.033/0.033/0.000 ms 00:12:31.519 00:46:43 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:12:31.519 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:12:31.519 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.031 ms 00:12:31.519 00:12:31.519 --- 10.0.0.1 ping statistics --- 00:12:31.519 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:31.519 rtt min/avg/max/mdev = 0.031/0.031/0.031/0.000 ms 00:12:31.519 00:46:43 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:31.519 00:46:43 -- nvmf/common.sh@421 -- # return 0 00:12:31.519 00:46:43 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:12:31.519 00:46:43 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:31.519 00:46:43 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:12:31.519 00:46:43 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:12:31.519 00:46:43 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:31.519 00:46:43 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:12:31.519 00:46:43 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:12:31.519 00:46:43 -- target/abort.sh@15 -- # nvmfappstart -m 0xE 00:12:31.519 00:46:43 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:12:31.519 00:46:43 -- common/autotest_common.sh@722 -- # xtrace_disable 00:12:31.519 00:46:43 -- common/autotest_common.sh@10 -- # set +x 00:12:31.519 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:31.519 00:46:43 -- nvmf/common.sh@469 -- # nvmfpid=79085 00:12:31.519 00:46:43 -- nvmf/common.sh@470 -- # waitforlisten 79085 00:12:31.519 00:46:43 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:12:31.519 00:46:43 -- common/autotest_common.sh@829 -- # '[' -z 79085 ']' 00:12:31.519 00:46:43 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:31.519 00:46:43 -- common/autotest_common.sh@834 -- # local max_retries=100 00:12:31.519 00:46:43 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:31.519 00:46:43 -- common/autotest_common.sh@838 -- # xtrace_disable 00:12:31.519 00:46:43 -- common/autotest_common.sh@10 -- # set +x 00:12:31.519 [2024-12-03 00:46:44.004026] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:12:31.519 [2024-12-03 00:46:44.004284] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:31.777 [2024-12-03 00:46:44.142881] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:12:31.777 [2024-12-03 00:46:44.214646] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:12:31.777 [2024-12-03 00:46:44.215040] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:31.777 [2024-12-03 00:46:44.215094] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:31.777 [2024-12-03 00:46:44.215232] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
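The nvmf_veth_init prologue traced above wires up the test network the target listens on: the initiator keeps 10.0.0.1 in the root namespace, the target's interfaces (10.0.0.2 and 10.0.0.3) live inside nvmf_tgt_ns_spdk, the peer ends of the veth pairs are enslaved to the nvmf_br bridge, and an iptables rule opens TCP port 4420. Condensed to its essential commands (same names and addresses as in the trace, run as root; the second target interface nvmf_tgt_if2/10.0.0.3 is handled the same way):

  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br
  ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
  ip link set nvmf_init_if up && ip link set nvmf_init_br up && ip link set nvmf_tgt_br up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip link add nvmf_br type bridge && ip link set nvmf_br up
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br master nvmf_br
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
  iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
  ping -c 1 10.0.0.2    # root namespace -> target namespace reachability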
00:12:31.777 [2024-12-03 00:46:44.215468] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:12:31.777 [2024-12-03 00:46:44.215676] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:12:31.777 [2024-12-03 00:46:44.215699] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:12:32.726 00:46:44 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:12:32.726 00:46:44 -- common/autotest_common.sh@862 -- # return 0 00:12:32.726 00:46:44 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:12:32.726 00:46:44 -- common/autotest_common.sh@728 -- # xtrace_disable 00:12:32.726 00:46:44 -- common/autotest_common.sh@10 -- # set +x 00:12:32.726 00:46:45 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:32.726 00:46:45 -- target/abort.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -a 256 00:12:32.726 00:46:45 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:32.726 00:46:45 -- common/autotest_common.sh@10 -- # set +x 00:12:32.726 [2024-12-03 00:46:45.013871] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:32.726 00:46:45 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:32.726 00:46:45 -- target/abort.sh@20 -- # rpc_cmd bdev_malloc_create 64 4096 -b Malloc0 00:12:32.726 00:46:45 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:32.726 00:46:45 -- common/autotest_common.sh@10 -- # set +x 00:12:32.726 Malloc0 00:12:32.726 00:46:45 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:32.726 00:46:45 -- target/abort.sh@21 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:12:32.726 00:46:45 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:32.726 00:46:45 -- common/autotest_common.sh@10 -- # set +x 00:12:32.726 Delay0 00:12:32.726 00:46:45 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:32.726 00:46:45 -- target/abort.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:12:32.726 00:46:45 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:32.726 00:46:45 -- common/autotest_common.sh@10 -- # set +x 00:12:32.726 00:46:45 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:32.726 00:46:45 -- target/abort.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0 00:12:32.726 00:46:45 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:32.726 00:46:45 -- common/autotest_common.sh@10 -- # set +x 00:12:32.726 00:46:45 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:32.726 00:46:45 -- target/abort.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:12:32.726 00:46:45 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:32.726 00:46:45 -- common/autotest_common.sh@10 -- # set +x 00:12:32.726 [2024-12-03 00:46:45.080211] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:32.726 00:46:45 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:32.726 00:46:45 -- target/abort.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:12:32.726 00:46:45 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:32.726 00:46:45 -- common/autotest_common.sh@10 -- # set +x 00:12:32.726 00:46:45 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:32.726 00:46:45 -- target/abort.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -r 'trtype:tcp adrfam:IPv4 
traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128 00:12:32.983 [2024-12-03 00:46:45.270273] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:12:34.886 Initializing NVMe Controllers 00:12:34.886 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:12:34.886 controller IO queue size 128 less than required 00:12:34.886 Consider using lower queue depth or small IO size because IO requests may be queued at the NVMe driver. 00:12:34.886 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:12:34.886 Initialization complete. Launching workers. 00:12:34.886 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 I/O completed: 123, failed: 37018 00:12:34.886 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 37079, failed to submit 62 00:12:34.886 success 37018, unsuccess 61, failed 0 00:12:34.886 00:46:47 -- target/abort.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:12:34.886 00:46:47 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:34.886 00:46:47 -- common/autotest_common.sh@10 -- # set +x 00:12:34.886 00:46:47 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:34.886 00:46:47 -- target/abort.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:12:34.886 00:46:47 -- target/abort.sh@38 -- # nvmftestfini 00:12:34.886 00:46:47 -- nvmf/common.sh@476 -- # nvmfcleanup 00:12:34.886 00:46:47 -- nvmf/common.sh@116 -- # sync 00:12:34.886 00:46:47 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:12:34.886 00:46:47 -- nvmf/common.sh@119 -- # set +e 00:12:34.886 00:46:47 -- nvmf/common.sh@120 -- # for i in {1..20} 00:12:34.886 00:46:47 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:12:34.886 rmmod nvme_tcp 00:12:34.886 rmmod nvme_fabrics 00:12:35.145 rmmod nvme_keyring 00:12:35.145 00:46:47 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:12:35.145 00:46:47 -- nvmf/common.sh@123 -- # set -e 00:12:35.145 00:46:47 -- nvmf/common.sh@124 -- # return 0 00:12:35.145 00:46:47 -- nvmf/common.sh@477 -- # '[' -n 79085 ']' 00:12:35.145 00:46:47 -- nvmf/common.sh@478 -- # killprocess 79085 00:12:35.145 00:46:47 -- common/autotest_common.sh@936 -- # '[' -z 79085 ']' 00:12:35.145 00:46:47 -- common/autotest_common.sh@940 -- # kill -0 79085 00:12:35.145 00:46:47 -- common/autotest_common.sh@941 -- # uname 00:12:35.145 00:46:47 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:12:35.145 00:46:47 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 79085 00:12:35.145 00:46:47 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:12:35.145 00:46:47 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:12:35.145 killing process with pid 79085 00:12:35.145 00:46:47 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 79085' 00:12:35.145 00:46:47 -- common/autotest_common.sh@955 -- # kill 79085 00:12:35.145 00:46:47 -- common/autotest_common.sh@960 -- # wait 79085 00:12:35.404 00:46:47 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:12:35.404 00:46:47 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:12:35.404 00:46:47 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:12:35.404 00:46:47 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:12:35.404 00:46:47 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:12:35.404 00:46:47 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:35.404 
00:46:47 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:35.404 00:46:47 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:35.404 00:46:47 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:12:35.404 00:12:35.404 real 0m4.390s 00:12:35.404 user 0m12.482s 00:12:35.404 sys 0m1.053s 00:12:35.404 00:46:47 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:12:35.404 00:46:47 -- common/autotest_common.sh@10 -- # set +x 00:12:35.404 ************************************ 00:12:35.404 END TEST nvmf_abort 00:12:35.404 ************************************ 00:12:35.404 00:46:47 -- nvmf/nvmf.sh@32 -- # run_test nvmf_ns_hotplug_stress /home/vagrant/spdk_repo/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:12:35.404 00:46:47 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:12:35.404 00:46:47 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:12:35.404 00:46:47 -- common/autotest_common.sh@10 -- # set +x 00:12:35.404 ************************************ 00:12:35.404 START TEST nvmf_ns_hotplug_stress 00:12:35.404 ************************************ 00:12:35.404 00:46:47 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:12:35.664 * Looking for test storage... 00:12:35.664 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:12:35.664 00:46:47 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:12:35.664 00:46:47 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:12:35.664 00:46:47 -- common/autotest_common.sh@1690 -- # lcov --version 00:12:35.664 00:46:48 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:12:35.664 00:46:48 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:12:35.664 00:46:48 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:12:35.664 00:46:48 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:12:35.664 00:46:48 -- scripts/common.sh@335 -- # IFS=.-: 00:12:35.664 00:46:48 -- scripts/common.sh@335 -- # read -ra ver1 00:12:35.664 00:46:48 -- scripts/common.sh@336 -- # IFS=.-: 00:12:35.664 00:46:48 -- scripts/common.sh@336 -- # read -ra ver2 00:12:35.664 00:46:48 -- scripts/common.sh@337 -- # local 'op=<' 00:12:35.664 00:46:48 -- scripts/common.sh@339 -- # ver1_l=2 00:12:35.664 00:46:48 -- scripts/common.sh@340 -- # ver2_l=1 00:12:35.664 00:46:48 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:12:35.664 00:46:48 -- scripts/common.sh@343 -- # case "$op" in 00:12:35.664 00:46:48 -- scripts/common.sh@344 -- # : 1 00:12:35.664 00:46:48 -- scripts/common.sh@363 -- # (( v = 0 )) 00:12:35.664 00:46:48 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:12:35.664 00:46:48 -- scripts/common.sh@364 -- # decimal 1 00:12:35.664 00:46:48 -- scripts/common.sh@352 -- # local d=1 00:12:35.664 00:46:48 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:35.664 00:46:48 -- scripts/common.sh@354 -- # echo 1 00:12:35.664 00:46:48 -- scripts/common.sh@364 -- # ver1[v]=1 00:12:35.664 00:46:48 -- scripts/common.sh@365 -- # decimal 2 00:12:35.664 00:46:48 -- scripts/common.sh@352 -- # local d=2 00:12:35.665 00:46:48 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:35.665 00:46:48 -- scripts/common.sh@354 -- # echo 2 00:12:35.665 00:46:48 -- scripts/common.sh@365 -- # ver2[v]=2 00:12:35.665 00:46:48 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:12:35.665 00:46:48 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:12:35.665 00:46:48 -- scripts/common.sh@367 -- # return 0 00:12:35.665 00:46:48 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:35.665 00:46:48 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:12:35.665 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:35.665 --rc genhtml_branch_coverage=1 00:12:35.665 --rc genhtml_function_coverage=1 00:12:35.665 --rc genhtml_legend=1 00:12:35.665 --rc geninfo_all_blocks=1 00:12:35.665 --rc geninfo_unexecuted_blocks=1 00:12:35.665 00:12:35.665 ' 00:12:35.665 00:46:48 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:12:35.665 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:35.665 --rc genhtml_branch_coverage=1 00:12:35.665 --rc genhtml_function_coverage=1 00:12:35.665 --rc genhtml_legend=1 00:12:35.665 --rc geninfo_all_blocks=1 00:12:35.665 --rc geninfo_unexecuted_blocks=1 00:12:35.665 00:12:35.665 ' 00:12:35.665 00:46:48 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:12:35.665 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:35.665 --rc genhtml_branch_coverage=1 00:12:35.665 --rc genhtml_function_coverage=1 00:12:35.665 --rc genhtml_legend=1 00:12:35.665 --rc geninfo_all_blocks=1 00:12:35.665 --rc geninfo_unexecuted_blocks=1 00:12:35.665 00:12:35.665 ' 00:12:35.665 00:46:48 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:12:35.665 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:35.665 --rc genhtml_branch_coverage=1 00:12:35.665 --rc genhtml_function_coverage=1 00:12:35.665 --rc genhtml_legend=1 00:12:35.665 --rc geninfo_all_blocks=1 00:12:35.665 --rc geninfo_unexecuted_blocks=1 00:12:35.665 00:12:35.665 ' 00:12:35.665 00:46:48 -- target/ns_hotplug_stress.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:12:35.665 00:46:48 -- nvmf/common.sh@7 -- # uname -s 00:12:35.665 00:46:48 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:35.665 00:46:48 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:35.665 00:46:48 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:35.665 00:46:48 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:35.665 00:46:48 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:35.665 00:46:48 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:35.665 00:46:48 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:35.665 00:46:48 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:35.665 00:46:48 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:35.665 00:46:48 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:35.665 00:46:48 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:15939434-fa82-47b6-ae7c-b62ba203cee8 
00:12:35.665 00:46:48 -- nvmf/common.sh@18 -- # NVME_HOSTID=15939434-fa82-47b6-ae7c-b62ba203cee8 00:12:35.665 00:46:48 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:35.665 00:46:48 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:35.665 00:46:48 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:12:35.665 00:46:48 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:12:35.665 00:46:48 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:35.665 00:46:48 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:35.665 00:46:48 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:35.665 00:46:48 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:35.665 00:46:48 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:35.665 00:46:48 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:35.665 00:46:48 -- paths/export.sh@5 -- # export PATH 00:12:35.665 00:46:48 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:35.665 00:46:48 -- nvmf/common.sh@46 -- # : 0 00:12:35.665 00:46:48 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:12:35.665 00:46:48 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:12:35.665 00:46:48 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:12:35.665 00:46:48 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:35.665 00:46:48 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:35.665 00:46:48 -- nvmf/common.sh@32 -- # 
'[' -n '' ']' 00:12:35.665 00:46:48 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:12:35.665 00:46:48 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:12:35.665 00:46:48 -- target/ns_hotplug_stress.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:12:35.665 00:46:48 -- target/ns_hotplug_stress.sh@22 -- # nvmftestinit 00:12:35.665 00:46:48 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:12:35.665 00:46:48 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:35.665 00:46:48 -- nvmf/common.sh@436 -- # prepare_net_devs 00:12:35.665 00:46:48 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:12:35.665 00:46:48 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:12:35.665 00:46:48 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:35.665 00:46:48 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:35.665 00:46:48 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:35.665 00:46:48 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:12:35.665 00:46:48 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:12:35.665 00:46:48 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:12:35.665 00:46:48 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:12:35.665 00:46:48 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:12:35.665 00:46:48 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:12:35.665 00:46:48 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:35.665 00:46:48 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:35.665 00:46:48 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:12:35.665 00:46:48 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:12:35.665 00:46:48 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:12:35.665 00:46:48 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:12:35.665 00:46:48 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:12:35.665 00:46:48 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:35.665 00:46:48 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:12:35.666 00:46:48 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:12:35.666 00:46:48 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:12:35.666 00:46:48 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:12:35.666 00:46:48 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:12:35.666 00:46:48 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:12:35.666 Cannot find device "nvmf_tgt_br" 00:12:35.666 00:46:48 -- nvmf/common.sh@154 -- # true 00:12:35.666 00:46:48 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:12:35.666 Cannot find device "nvmf_tgt_br2" 00:12:35.666 00:46:48 -- nvmf/common.sh@155 -- # true 00:12:35.666 00:46:48 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:12:35.666 00:46:48 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:12:35.666 Cannot find device "nvmf_tgt_br" 00:12:35.666 00:46:48 -- nvmf/common.sh@157 -- # true 00:12:35.666 00:46:48 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:12:35.666 Cannot find device "nvmf_tgt_br2" 00:12:35.666 00:46:48 -- nvmf/common.sh@158 -- # true 00:12:35.666 00:46:48 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:12:35.925 00:46:48 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:12:35.925 00:46:48 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:12:35.925 Cannot open network namespace 
"nvmf_tgt_ns_spdk": No such file or directory 00:12:35.925 00:46:48 -- nvmf/common.sh@161 -- # true 00:12:35.925 00:46:48 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:12:35.925 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:12:35.925 00:46:48 -- nvmf/common.sh@162 -- # true 00:12:35.925 00:46:48 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:12:35.925 00:46:48 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:12:35.925 00:46:48 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:12:35.925 00:46:48 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:12:35.925 00:46:48 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:12:35.925 00:46:48 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:12:35.925 00:46:48 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:12:35.925 00:46:48 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:12:35.925 00:46:48 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:12:35.925 00:46:48 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:12:35.925 00:46:48 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:12:35.925 00:46:48 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:12:35.925 00:46:48 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:12:35.925 00:46:48 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:12:35.925 00:46:48 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:12:35.925 00:46:48 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:12:35.925 00:46:48 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:12:35.925 00:46:48 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:12:35.925 00:46:48 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:12:35.925 00:46:48 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:12:35.925 00:46:48 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:12:35.925 00:46:48 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:12:35.925 00:46:48 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:12:35.926 00:46:48 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:12:35.926 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:35.926 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.073 ms 00:12:35.926 00:12:35.926 --- 10.0.0.2 ping statistics --- 00:12:35.926 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:35.926 rtt min/avg/max/mdev = 0.073/0.073/0.073/0.000 ms 00:12:35.926 00:46:48 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:12:35.926 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:12:35.926 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.049 ms 00:12:35.926 00:12:35.926 --- 10.0.0.3 ping statistics --- 00:12:35.926 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:35.926 rtt min/avg/max/mdev = 0.049/0.049/0.049/0.000 ms 00:12:35.926 00:46:48 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:12:35.926 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:12:35.926 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.029 ms 00:12:35.926 00:12:35.926 --- 10.0.0.1 ping statistics --- 00:12:35.926 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:35.926 rtt min/avg/max/mdev = 0.029/0.029/0.029/0.000 ms 00:12:35.926 00:46:48 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:35.926 00:46:48 -- nvmf/common.sh@421 -- # return 0 00:12:35.926 00:46:48 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:12:35.926 00:46:48 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:35.926 00:46:48 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:12:35.926 00:46:48 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:12:35.926 00:46:48 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:35.926 00:46:48 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:12:35.926 00:46:48 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:12:35.926 00:46:48 -- target/ns_hotplug_stress.sh@23 -- # nvmfappstart -m 0xE 00:12:35.926 00:46:48 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:12:35.926 00:46:48 -- common/autotest_common.sh@722 -- # xtrace_disable 00:12:35.926 00:46:48 -- common/autotest_common.sh@10 -- # set +x 00:12:35.926 00:46:48 -- nvmf/common.sh@469 -- # nvmfpid=79358 00:12:35.926 00:46:48 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:12:35.926 00:46:48 -- nvmf/common.sh@470 -- # waitforlisten 79358 00:12:35.926 00:46:48 -- common/autotest_common.sh@829 -- # '[' -z 79358 ']' 00:12:35.926 00:46:48 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:35.926 00:46:48 -- common/autotest_common.sh@834 -- # local max_retries=100 00:12:35.926 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:35.926 00:46:48 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:35.926 00:46:48 -- common/autotest_common.sh@838 -- # xtrace_disable 00:12:35.926 00:46:48 -- common/autotest_common.sh@10 -- # set +x 00:12:36.185 [2024-12-03 00:46:48.442210] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:12:36.185 [2024-12-03 00:46:48.442316] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:36.185 [2024-12-03 00:46:48.580897] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:12:36.185 [2024-12-03 00:46:48.652935] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:12:36.185 [2024-12-03 00:46:48.653097] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:36.185 [2024-12-03 00:46:48.653109] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:36.185 [2024-12-03 00:46:48.653118] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
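The same nvmf_veth_init sequence runs again above to rebuild the topology for this test. For reference, the network it creates (one initiator-side veth, two target-side veths moved into the namespace, all bridged, with TCP/4420 allowed through iptables) can be reproduced standalone with a sketch like the following; interface names and addresses are those in the trace, and the commands assume iproute2, iptables and root privileges:

  # Sketch of the topology built by nvmf_veth_init (run as root).
  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br
  ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
  ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
  ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
  ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if                                  # initiator
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if    # first listener
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2   # second listener
  for l in nvmf_init_if nvmf_init_br nvmf_tgt_br nvmf_tgt_br2; do ip link set "$l" up; done
  ip netns exec nvmf_tgt_ns_spdk sh -c \
      'ip link set nvmf_tgt_if up; ip link set nvmf_tgt_if2 up; ip link set lo up'
  ip link add nvmf_br type bridge
  ip link set nvmf_br up
  for l in nvmf_init_br nvmf_tgt_br nvmf_tgt_br2; do ip link set "$l" master nvmf_br; done
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
  iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
  ping -c 1 10.0.0.2 && ping -c 1 10.0.0.3                 # initiator -> target addresses
  ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1        # target -> initiator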
00:12:36.185 [2024-12-03 00:46:48.653871] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:12:36.185 [2024-12-03 00:46:48.653988] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:12:36.185 [2024-12-03 00:46:48.653995] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:12:37.123 00:46:49 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:12:37.123 00:46:49 -- common/autotest_common.sh@862 -- # return 0 00:12:37.123 00:46:49 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:12:37.123 00:46:49 -- common/autotest_common.sh@728 -- # xtrace_disable 00:12:37.123 00:46:49 -- common/autotest_common.sh@10 -- # set +x 00:12:37.123 00:46:49 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:37.123 00:46:49 -- target/ns_hotplug_stress.sh@25 -- # null_size=1000 00:12:37.123 00:46:49 -- target/ns_hotplug_stress.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:12:37.381 [2024-12-03 00:46:49.803130] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:37.381 00:46:49 -- target/ns_hotplug_stress.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:12:37.640 00:46:50 -- target/ns_hotplug_stress.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:37.898 [2024-12-03 00:46:50.284036] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:37.898 00:46:50 -- target/ns_hotplug_stress.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:12:38.157 00:46:50 -- target/ns_hotplug_stress.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 512 -b Malloc0 00:12:38.416 Malloc0 00:12:38.416 00:46:50 -- target/ns_hotplug_stress.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:12:38.674 Delay0 00:12:38.674 00:46:51 -- target/ns_hotplug_stress.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:38.933 00:46:51 -- target/ns_hotplug_stress.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create NULL1 1000 512 00:12:39.205 NULL1 00:12:39.206 00:46:51 -- target/ns_hotplug_stress.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:12:39.495 00:46:51 -- target/ns_hotplug_stress.sh@42 -- # PERF_PID=79489 00:12:39.495 00:46:51 -- target/ns_hotplug_stress.sh@40 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 30 -q 128 -w randread -o 512 -Q 1000 00:12:39.495 00:46:51 -- target/ns_hotplug_stress.sh@44 -- # kill -0 79489 00:12:39.495 00:46:51 -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:40.877 Read completed with error (sct=0, sc=11) 00:12:40.877 00:46:53 -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:40.877 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:40.877 Message suppressed 999 times: Read completed with 
error (sct=0, sc=11) 00:12:40.877 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:40.877 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:40.877 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:40.877 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:40.877 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:40.877 00:46:53 -- target/ns_hotplug_stress.sh@49 -- # null_size=1001 00:12:40.877 00:46:53 -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1001 00:12:41.136 true 00:12:41.136 00:46:53 -- target/ns_hotplug_stress.sh@44 -- # kill -0 79489 00:12:41.136 00:46:53 -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:42.073 00:46:54 -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:42.073 00:46:54 -- target/ns_hotplug_stress.sh@49 -- # null_size=1002 00:12:42.073 00:46:54 -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1002 00:12:42.335 true 00:12:42.335 00:46:54 -- target/ns_hotplug_stress.sh@44 -- # kill -0 79489 00:12:42.335 00:46:54 -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:42.595 00:46:54 -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:42.853 00:46:55 -- target/ns_hotplug_stress.sh@49 -- # null_size=1003 00:12:42.853 00:46:55 -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1003 00:12:42.853 true 00:12:42.853 00:46:55 -- target/ns_hotplug_stress.sh@44 -- # kill -0 79489 00:12:42.853 00:46:55 -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:43.786 00:46:56 -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:44.045 00:46:56 -- target/ns_hotplug_stress.sh@49 -- # null_size=1004 00:12:44.045 00:46:56 -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1004 00:12:44.303 true 00:12:44.303 00:46:56 -- target/ns_hotplug_stress.sh@44 -- # kill -0 79489 00:12:44.303 00:46:56 -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:44.562 00:46:56 -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:44.820 00:46:57 -- target/ns_hotplug_stress.sh@49 -- # null_size=1005 00:12:44.820 00:46:57 -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1005 00:12:45.078 true 00:12:45.078 00:46:57 -- target/ns_hotplug_stress.sh@44 -- # kill -0 79489 00:12:45.078 00:46:57 -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:46.014 00:46:58 -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 
00:12:46.273 00:46:58 -- target/ns_hotplug_stress.sh@49 -- # null_size=1006 00:12:46.273 00:46:58 -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1006 00:12:46.273 true 00:12:46.273 00:46:58 -- target/ns_hotplug_stress.sh@44 -- # kill -0 79489 00:12:46.273 00:46:58 -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:46.531 00:46:59 -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:46.790 00:46:59 -- target/ns_hotplug_stress.sh@49 -- # null_size=1007 00:12:46.790 00:46:59 -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1007 00:12:47.048 true 00:12:47.048 00:46:59 -- target/ns_hotplug_stress.sh@44 -- # kill -0 79489 00:12:47.048 00:46:59 -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:47.983 00:47:00 -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:48.241 00:47:00 -- target/ns_hotplug_stress.sh@49 -- # null_size=1008 00:12:48.241 00:47:00 -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1008 00:12:48.500 true 00:12:48.500 00:47:00 -- target/ns_hotplug_stress.sh@44 -- # kill -0 79489 00:12:48.500 00:47:00 -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:48.500 00:47:00 -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:48.759 00:47:01 -- target/ns_hotplug_stress.sh@49 -- # null_size=1009 00:12:48.759 00:47:01 -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1009 00:12:49.018 true 00:12:49.018 00:47:01 -- target/ns_hotplug_stress.sh@44 -- # kill -0 79489 00:12:49.018 00:47:01 -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:49.953 00:47:02 -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:49.954 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:50.213 00:47:02 -- target/ns_hotplug_stress.sh@49 -- # null_size=1010 00:12:50.213 00:47:02 -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1010 00:12:50.472 true 00:12:50.472 00:47:02 -- target/ns_hotplug_stress.sh@44 -- # kill -0 79489 00:12:50.472 00:47:02 -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:50.730 00:47:03 -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:50.989 00:47:03 -- target/ns_hotplug_stress.sh@49 -- # null_size=1011 00:12:50.989 00:47:03 -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1011 00:12:50.989 true 00:12:50.989 00:47:03 -- target/ns_hotplug_stress.sh@44 -- # kill -0 79489 00:12:50.989 00:47:03 -- target/ns_hotplug_stress.sh@45 -- 
# /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:51.926 00:47:04 -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:52.185 00:47:04 -- target/ns_hotplug_stress.sh@49 -- # null_size=1012 00:12:52.185 00:47:04 -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1012 00:12:52.443 true 00:12:52.443 00:47:04 -- target/ns_hotplug_stress.sh@44 -- # kill -0 79489 00:12:52.443 00:47:04 -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:52.702 00:47:05 -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:52.959 00:47:05 -- target/ns_hotplug_stress.sh@49 -- # null_size=1013 00:12:52.959 00:47:05 -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1013 00:12:52.959 true 00:12:52.959 00:47:05 -- target/ns_hotplug_stress.sh@44 -- # kill -0 79489 00:12:52.959 00:47:05 -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:53.902 00:47:06 -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:54.160 00:47:06 -- target/ns_hotplug_stress.sh@49 -- # null_size=1014 00:12:54.161 00:47:06 -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1014 00:12:54.419 true 00:12:54.419 00:47:06 -- target/ns_hotplug_stress.sh@44 -- # kill -0 79489 00:12:54.419 00:47:06 -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:54.677 00:47:07 -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:54.936 00:47:07 -- target/ns_hotplug_stress.sh@49 -- # null_size=1015 00:12:54.936 00:47:07 -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1015 00:12:54.936 true 00:12:54.936 00:47:07 -- target/ns_hotplug_stress.sh@44 -- # kill -0 79489 00:12:54.936 00:47:07 -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:55.873 00:47:08 -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:56.132 00:47:08 -- target/ns_hotplug_stress.sh@49 -- # null_size=1016 00:12:56.132 00:47:08 -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1016 00:12:56.390 true 00:12:56.390 00:47:08 -- target/ns_hotplug_stress.sh@44 -- # kill -0 79489 00:12:56.390 00:47:08 -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:56.648 00:47:09 -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:56.906 00:47:09 -- target/ns_hotplug_stress.sh@49 -- # null_size=1017 00:12:56.906 00:47:09 -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
bdev_null_resize NULL1 1017 00:12:57.163 true 00:12:57.163 00:47:09 -- target/ns_hotplug_stress.sh@44 -- # kill -0 79489 00:12:57.163 00:47:09 -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:57.422 00:47:09 -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:57.680 00:47:10 -- target/ns_hotplug_stress.sh@49 -- # null_size=1018 00:12:57.680 00:47:10 -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1018 00:12:57.938 true 00:12:57.938 00:47:10 -- target/ns_hotplug_stress.sh@44 -- # kill -0 79489 00:12:57.938 00:47:10 -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:58.871 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:58.871 00:47:11 -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:59.130 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:59.130 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:59.130 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:59.130 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:59.130 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:59.130 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:59.130 00:47:11 -- target/ns_hotplug_stress.sh@49 -- # null_size=1019 00:12:59.130 00:47:11 -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1019 00:12:59.388 true 00:12:59.388 00:47:11 -- target/ns_hotplug_stress.sh@44 -- # kill -0 79489 00:12:59.388 00:47:11 -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:00.323 00:47:12 -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:00.323 00:47:12 -- target/ns_hotplug_stress.sh@49 -- # null_size=1020 00:13:00.323 00:47:12 -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1020 00:13:00.581 true 00:13:00.581 00:47:13 -- target/ns_hotplug_stress.sh@44 -- # kill -0 79489 00:13:00.581 00:47:13 -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:00.839 00:47:13 -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:01.097 00:47:13 -- target/ns_hotplug_stress.sh@49 -- # null_size=1021 00:13:01.097 00:47:13 -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1021 00:13:01.355 true 00:13:01.355 00:47:13 -- target/ns_hotplug_stress.sh@44 -- # kill -0 79489 00:13:01.355 00:47:13 -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:02.289 00:47:14 -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:02.546 00:47:14 -- 
target/ns_hotplug_stress.sh@49 -- # null_size=1022 00:13:02.546 00:47:14 -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1022 00:13:02.546 true 00:13:02.803 00:47:15 -- target/ns_hotplug_stress.sh@44 -- # kill -0 79489 00:13:02.803 00:47:15 -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:02.803 00:47:15 -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:03.081 00:47:15 -- target/ns_hotplug_stress.sh@49 -- # null_size=1023 00:13:03.081 00:47:15 -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1023 00:13:03.338 true 00:13:03.338 00:47:15 -- target/ns_hotplug_stress.sh@44 -- # kill -0 79489 00:13:03.338 00:47:15 -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:04.273 00:47:16 -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:04.530 00:47:16 -- target/ns_hotplug_stress.sh@49 -- # null_size=1024 00:13:04.530 00:47:16 -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1024 00:13:04.794 true 00:13:04.794 00:47:17 -- target/ns_hotplug_stress.sh@44 -- # kill -0 79489 00:13:04.794 00:47:17 -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:04.794 00:47:17 -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:05.076 00:47:17 -- target/ns_hotplug_stress.sh@49 -- # null_size=1025 00:13:05.076 00:47:17 -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1025 00:13:05.349 true 00:13:05.349 00:47:17 -- target/ns_hotplug_stress.sh@44 -- # kill -0 79489 00:13:05.349 00:47:17 -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:06.284 00:47:18 -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:06.541 00:47:18 -- target/ns_hotplug_stress.sh@49 -- # null_size=1026 00:13:06.541 00:47:18 -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1026 00:13:06.800 true 00:13:06.800 00:47:19 -- target/ns_hotplug_stress.sh@44 -- # kill -0 79489 00:13:06.800 00:47:19 -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:07.058 00:47:19 -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:07.058 00:47:19 -- target/ns_hotplug_stress.sh@49 -- # null_size=1027 00:13:07.058 00:47:19 -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1027 00:13:07.315 true 00:13:07.315 00:47:19 -- target/ns_hotplug_stress.sh@44 -- # kill -0 79489 00:13:07.315 00:47:19 -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 
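Every iteration traced above follows the same cycle: re-attach Delay0 as namespace 1 of cnode1, grow the NULL1 bdev by one unit (null_size 1001, 1002, ...), confirm the spdk_nvme_perf process (pid 79489 in this run) is still alive with kill -0, and detach the namespace again. A condensed sketch of that loop using the same rpc.py calls as the trace; the starting size and the PERF_PID variable are placeholders:

  # Sketch of the hotplug cycle driven by ns_hotplug_stress.sh while perf is running.
  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  null_size=1000
  while kill -0 "$PERF_PID" 2>/dev/null; do            # stop once spdk_nvme_perf exits
      "$rpc" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
      null_size=$((null_size + 1))
      "$rpc" bdev_null_resize NULL1 "$null_size"       # hot-resize the NULL1 namespace
      "$rpc" nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
  done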
00:13:08.248 00:47:20 -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:08.507 00:47:20 -- target/ns_hotplug_stress.sh@49 -- # null_size=1028 00:13:08.507 00:47:20 -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1028 00:13:08.765 true 00:13:08.765 00:47:21 -- target/ns_hotplug_stress.sh@44 -- # kill -0 79489 00:13:08.765 00:47:21 -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:09.023 00:47:21 -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:09.282 00:47:21 -- target/ns_hotplug_stress.sh@49 -- # null_size=1029 00:13:09.282 00:47:21 -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1029 00:13:09.540 true 00:13:09.540 00:47:21 -- target/ns_hotplug_stress.sh@44 -- # kill -0 79489 00:13:09.540 00:47:21 -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:09.540 Initializing NVMe Controllers 00:13:09.540 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:13:09.540 Controller IO queue size 128, less than required. 00:13:09.540 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:13:09.540 Controller IO queue size 128, less than required. 00:13:09.540 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:13:09.540 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:13:09.540 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:13:09.540 Initialization complete. Launching workers. 
00:13:09.540 ======================================================== 00:13:09.540 Latency(us) 00:13:09.540 Device Information : IOPS MiB/s Average min max 00:13:09.540 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 597.97 0.29 116472.39 2436.82 1033555.68 00:13:09.540 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 14742.10 7.20 8682.33 2401.58 478822.00 00:13:09.540 ======================================================== 00:13:09.540 Total : 15340.08 7.49 12884.11 2401.58 1033555.68 00:13:09.540 00:13:09.798 00:47:22 -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:10.057 00:47:22 -- target/ns_hotplug_stress.sh@49 -- # null_size=1030 00:13:10.057 00:47:22 -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1030 00:13:10.057 true 00:13:10.057 00:47:22 -- target/ns_hotplug_stress.sh@44 -- # kill -0 79489 00:13:10.057 /home/vagrant/spdk_repo/spdk/test/nvmf/target/ns_hotplug_stress.sh: line 44: kill: (79489) - No such process 00:13:10.057 00:47:22 -- target/ns_hotplug_stress.sh@53 -- # wait 79489 00:13:10.057 00:47:22 -- target/ns_hotplug_stress.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:10.315 00:47:22 -- target/ns_hotplug_stress.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:13:10.574 00:47:23 -- target/ns_hotplug_stress.sh@58 -- # nthreads=8 00:13:10.574 00:47:23 -- target/ns_hotplug_stress.sh@58 -- # pids=() 00:13:10.574 00:47:23 -- target/ns_hotplug_stress.sh@59 -- # (( i = 0 )) 00:13:10.574 00:47:23 -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:13:10.574 00:47:23 -- target/ns_hotplug_stress.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create null0 100 4096 00:13:10.833 null0 00:13:10.833 00:47:23 -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:13:10.833 00:47:23 -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:13:10.833 00:47:23 -- target/ns_hotplug_stress.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create null1 100 4096 00:13:11.091 null1 00:13:11.091 00:47:23 -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:13:11.091 00:47:23 -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:13:11.091 00:47:23 -- target/ns_hotplug_stress.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create null2 100 4096 00:13:11.350 null2 00:13:11.350 00:47:23 -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:13:11.350 00:47:23 -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:13:11.350 00:47:23 -- target/ns_hotplug_stress.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create null3 100 4096 00:13:11.350 null3 00:13:11.350 00:47:23 -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:13:11.350 00:47:23 -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:13:11.350 00:47:23 -- target/ns_hotplug_stress.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create null4 100 4096 00:13:11.917 null4 00:13:11.917 00:47:24 -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:13:11.917 00:47:24 -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:13:11.917 00:47:24 -- target/ns_hotplug_stress.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create null5 100 4096 00:13:11.917 null5 00:13:12.175 00:47:24 -- 
target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:13:12.175 00:47:24 -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:13:12.175 00:47:24 -- target/ns_hotplug_stress.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create null6 100 4096 00:13:12.175 null6 00:13:12.175 00:47:24 -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:13:12.175 00:47:24 -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:13:12.175 00:47:24 -- target/ns_hotplug_stress.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create null7 100 4096 00:13:12.434 null7 00:13:12.434 00:47:24 -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:13:12.434 00:47:24 -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:13:12.434 00:47:24 -- target/ns_hotplug_stress.sh@62 -- # (( i = 0 )) 00:13:12.434 00:47:24 -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:13:12.434 00:47:24 -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:13:12.434 00:47:24 -- target/ns_hotplug_stress.sh@63 -- # add_remove 1 null0 00:13:12.434 00:47:24 -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:13:12.434 00:47:24 -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:13:12.434 00:47:24 -- target/ns_hotplug_stress.sh@14 -- # local nsid=1 bdev=null0 00:13:12.434 00:47:24 -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:13:12.434 00:47:24 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:12.434 00:47:24 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:13:12.434 00:47:24 -- target/ns_hotplug_stress.sh@63 -- # add_remove 2 null1 00:13:12.434 00:47:24 -- target/ns_hotplug_stress.sh@14 -- # local nsid=2 bdev=null1 00:13:12.434 00:47:24 -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:13:12.434 00:47:24 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:12.434 00:47:24 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:13:12.434 00:47:24 -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:13:12.434 00:47:24 -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:13:12.434 00:47:24 -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:13:12.434 00:47:24 -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:13:12.434 00:47:24 -- target/ns_hotplug_stress.sh@63 -- # add_remove 3 null2 00:13:12.434 00:47:24 -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:13:12.434 00:47:24 -- target/ns_hotplug_stress.sh@14 -- # local nsid=3 bdev=null2 00:13:12.434 00:47:24 -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:13:12.434 00:47:24 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:12.434 00:47:24 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:13:12.434 00:47:24 -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:13:12.434 00:47:24 -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
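From here on the interleaved traces come from eight background copies of add_remove, one per null bdev created above (null0..null7), each adding and removing its own namespace id ten times while the others do the same. A simplified sketch of that fan-out, with the bdev names and iteration count taken from the trace:

  # Sketch of the parallel namespace add/remove stress launched above.
  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  add_remove() {                                   # $1 = namespace id, $2 = bdev name
      local nsid=$1 bdev=$2
      for ((i = 0; i < 10; i++)); do
          "$rpc" nvmf_subsystem_add_ns -n "$nsid" nqn.2016-06.io.spdk:cnode1 "$bdev"
          "$rpc" nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 "$nsid"
      done
  }
  pids=()
  for n in 0 1 2 3 4 5 6 7; do
      add_remove "$((n + 1))" "null$n" &           # one worker per null bdev
      pids+=($!)
  done
  wait "${pids[@]}"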
00:13:12.434 00:47:24 -- target/ns_hotplug_stress.sh@63 -- # add_remove 4 null3 00:13:12.434 00:47:24 -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:13:12.434 00:47:24 -- target/ns_hotplug_stress.sh@14 -- # local nsid=4 bdev=null3 00:13:12.434 00:47:24 -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:13:12.434 00:47:24 -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:13:12.434 00:47:24 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:12.434 00:47:24 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:13:12.434 00:47:24 -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:13:12.434 00:47:24 -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:13:12.434 00:47:24 -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:13:12.434 00:47:24 -- target/ns_hotplug_stress.sh@63 -- # add_remove 5 null4 00:13:12.434 00:47:24 -- target/ns_hotplug_stress.sh@14 -- # local nsid=5 bdev=null4 00:13:12.434 00:47:24 -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:13:12.434 00:47:24 -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:13:12.434 00:47:24 -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:13:12.434 00:47:24 -- target/ns_hotplug_stress.sh@63 -- # add_remove 6 null5 00:13:12.434 00:47:24 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:12.434 00:47:24 -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:13:12.434 00:47:24 -- target/ns_hotplug_stress.sh@14 -- # local nsid=6 bdev=null5 00:13:12.434 00:47:24 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:13:12.434 00:47:24 -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:13:12.434 00:47:24 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:12.434 00:47:24 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:13:12.434 00:47:24 -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:13:12.434 00:47:24 -- target/ns_hotplug_stress.sh@63 -- # add_remove 7 null6 00:13:12.434 00:47:24 -- target/ns_hotplug_stress.sh@14 -- # local nsid=7 bdev=null6 00:13:12.434 00:47:24 -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:13:12.434 00:47:24 -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:13:12.434 00:47:24 -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:13:12.434 00:47:24 -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
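Note: the burst of RPCs above is the namespace hot-plug stress phase. The harness first creates eight 100 MiB null bdevs with a 4096-byte block size (null0..null7), then forks eight add_remove workers; each worker repeatedly attaches and detaches one namespace ID on nqn.2016-06.io.spdk:cnode1 ten times, and the pids array collects the workers for the later wait. A minimal sketch of that pattern, assuming the same rpc.py path this run uses:

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py    # path used by this run
  add_remove() {                                     # mirrors the script's add_remove helper
      local nsid=$1 bdev=$2 i
      for ((i = 0; i < 10; i++)); do
          "$rpc" nvmf_subsystem_add_ns -n "$nsid" nqn.2016-06.io.spdk:cnode1 "$bdev"
          "$rpc" nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 "$nsid"
      done
  }
  pids=()
  for ((i = 0; i < 8; i++)); do
      "$rpc" bdev_null_create "null$i" 100 4096      # 100 MiB null bdev, 4096-byte blocks
      add_remove "$((i + 1))" "null$i" &             # namespace ID i+1 is backed by null bdev i
      pids+=($!)
  done
  wait "${pids[@]}"

The interleaved add_ns/remove_ns lines that follow are those eight workers racing against each other on the same subsystem.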
00:13:12.434 00:47:24 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:12.434 00:47:24 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:13:12.434 00:47:24 -- target/ns_hotplug_stress.sh@63 -- # add_remove 8 null7 00:13:12.434 00:47:24 -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:13:12.434 00:47:24 -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:13:12.434 00:47:24 -- target/ns_hotplug_stress.sh@66 -- # wait 80531 80532 80535 80537 80538 80540 80541 80544 00:13:12.434 00:47:24 -- target/ns_hotplug_stress.sh@14 -- # local nsid=8 bdev=null7 00:13:12.434 00:47:24 -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:13:12.434 00:47:24 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:12.434 00:47:24 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:13:12.693 00:47:25 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:13:12.693 00:47:25 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:12.693 00:47:25 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:13:12.693 00:47:25 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:13:12.693 00:47:25 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:12.951 00:47:25 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:13:12.951 00:47:25 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:13:12.951 00:47:25 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:13:12.951 00:47:25 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:12.951 00:47:25 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:12.951 00:47:25 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:12.951 00:47:25 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:12.951 00:47:25 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:13:12.951 00:47:25 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:13:12.951 00:47:25 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:12.951 00:47:25 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:12.951 00:47:25 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:13:12.951 00:47:25 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:12.951 00:47:25 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:12.951 00:47:25 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:13:12.951 00:47:25 -- target/ns_hotplug_stress.sh@16 -- # 
(( ++i )) 00:13:12.951 00:47:25 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:12.951 00:47:25 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:13:13.208 00:47:25 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:13.208 00:47:25 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:13.208 00:47:25 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:13:13.208 00:47:25 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:13.208 00:47:25 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:13.208 00:47:25 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:13:13.208 00:47:25 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:13.208 00:47:25 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:13.208 00:47:25 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:13:13.208 00:47:25 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:13:13.208 00:47:25 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:13.208 00:47:25 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:13:13.208 00:47:25 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:13:13.466 00:47:25 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:13.466 00:47:25 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:13:13.466 00:47:25 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:13:13.466 00:47:25 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:13:13.466 00:47:25 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:13.466 00:47:25 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:13.466 00:47:25 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:13:13.466 00:47:25 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:13.466 00:47:25 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:13.466 00:47:25 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:13:13.466 00:47:25 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:13.466 00:47:25 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:13.466 00:47:25 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:13:13.466 00:47:25 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:13.466 00:47:25 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:13.466 00:47:25 -- 
target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:13:13.724 00:47:26 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:13.724 00:47:26 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:13.724 00:47:26 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:13:13.724 00:47:26 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:13.724 00:47:26 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:13.724 00:47:26 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:13:13.724 00:47:26 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:13.724 00:47:26 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:13.724 00:47:26 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:13:13.724 00:47:26 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:13.724 00:47:26 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:13.724 00:47:26 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:13:13.724 00:47:26 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:13:13.724 00:47:26 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:13:13.724 00:47:26 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:13.982 00:47:26 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:13:13.982 00:47:26 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:13:13.982 00:47:26 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:13:13.982 00:47:26 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:13.982 00:47:26 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:13:13.982 00:47:26 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:13.982 00:47:26 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:13.982 00:47:26 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:13:13.982 00:47:26 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:13.982 00:47:26 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:13.982 00:47:26 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:13:13.982 00:47:26 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:13.982 00:47:26 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:13.982 00:47:26 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 
nqn.2016-06.io.spdk:cnode1 null4 00:13:14.239 00:47:26 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:14.239 00:47:26 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:14.239 00:47:26 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:13:14.239 00:47:26 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:14.239 00:47:26 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:14.239 00:47:26 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:13:14.239 00:47:26 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:14.239 00:47:26 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:14.239 00:47:26 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:13:14.239 00:47:26 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:14.239 00:47:26 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:14.239 00:47:26 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:13:14.239 00:47:26 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:13:14.239 00:47:26 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:14.239 00:47:26 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:14.239 00:47:26 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:13:14.239 00:47:26 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:13:14.239 00:47:26 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:14.497 00:47:26 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:13:14.497 00:47:26 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:13:14.497 00:47:26 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:13:14.497 00:47:26 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:14.497 00:47:26 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:14.497 00:47:26 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:14.497 00:47:26 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:13:14.497 00:47:26 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:13:14.497 00:47:26 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:14.497 00:47:26 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:14.497 00:47:26 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:13:14.498 00:47:26 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 
00:13:14.498 00:47:26 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:14.498 00:47:26 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:13:14.498 00:47:26 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:14.498 00:47:26 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:14.498 00:47:26 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:13:14.755 00:47:27 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:14.755 00:47:27 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:14.755 00:47:27 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:13:14.755 00:47:27 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:14.755 00:47:27 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:14.755 00:47:27 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:13:14.755 00:47:27 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:14.755 00:47:27 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:14.755 00:47:27 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:13:14.755 00:47:27 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:13:14.755 00:47:27 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:14.755 00:47:27 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:14.755 00:47:27 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:13:14.755 00:47:27 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:13:14.755 00:47:27 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:13:15.013 00:47:27 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:13:15.013 00:47:27 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:13:15.013 00:47:27 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:15.013 00:47:27 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:15.013 00:47:27 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:15.013 00:47:27 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:13:15.013 00:47:27 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:15.013 00:47:27 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:13:15.013 00:47:27 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:15.013 00:47:27 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:15.013 00:47:27 -- 
target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:13:15.013 00:47:27 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:15.013 00:47:27 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:15.013 00:47:27 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:13:15.271 00:47:27 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:15.271 00:47:27 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:15.271 00:47:27 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:13:15.271 00:47:27 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:13:15.271 00:47:27 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:15.271 00:47:27 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:15.271 00:47:27 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:13:15.271 00:47:27 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:15.271 00:47:27 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:15.271 00:47:27 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:13:15.271 00:47:27 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:15.271 00:47:27 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:15.271 00:47:27 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:13:15.271 00:47:27 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:15.271 00:47:27 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:15.271 00:47:27 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:13:15.271 00:47:27 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:13:15.271 00:47:27 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:13:15.530 00:47:27 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:13:15.530 00:47:27 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:13:15.530 00:47:27 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:15.530 00:47:27 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:15.530 00:47:27 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:15.530 00:47:27 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:15.530 00:47:27 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:13:15.530 00:47:27 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns 
nqn.2016-06.io.spdk:cnode1 4 00:13:15.530 00:47:27 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:15.530 00:47:27 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:15.530 00:47:27 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:15.530 00:47:27 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:13:15.530 00:47:27 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:15.530 00:47:27 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:13:15.530 00:47:28 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:15.530 00:47:28 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:15.530 00:47:28 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:13:15.788 00:47:28 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:15.788 00:47:28 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:15.788 00:47:28 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:13:15.788 00:47:28 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:15.788 00:47:28 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:15.788 00:47:28 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:13:15.788 00:47:28 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:15.788 00:47:28 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:15.788 00:47:28 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:13:15.788 00:47:28 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:13:15.788 00:47:28 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:15.788 00:47:28 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:15.788 00:47:28 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:13:15.788 00:47:28 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:13:15.788 00:47:28 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:13:15.788 00:47:28 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:13:16.045 00:47:28 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:13:16.045 00:47:28 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:16.045 00:47:28 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:13:16.045 00:47:28 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:16.045 00:47:28 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:16.045 00:47:28 -- target/ns_hotplug_stress.sh@17 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:13:16.046 00:47:28 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:16.046 00:47:28 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:16.046 00:47:28 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:16.046 00:47:28 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:13:16.046 00:47:28 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:16.046 00:47:28 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:16.046 00:47:28 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:13:16.303 00:47:28 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:16.303 00:47:28 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:16.303 00:47:28 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:13:16.303 00:47:28 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:16.303 00:47:28 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:16.303 00:47:28 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:13:16.303 00:47:28 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:16.303 00:47:28 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:16.303 00:47:28 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:13:16.303 00:47:28 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:16.303 00:47:28 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:16.303 00:47:28 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:13:16.303 00:47:28 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:16.303 00:47:28 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:16.303 00:47:28 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:13:16.303 00:47:28 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:13:16.303 00:47:28 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:13:16.303 00:47:28 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:13:16.561 00:47:28 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:16.561 00:47:28 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:13:16.561 00:47:28 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:13:16.561 00:47:28 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 
00:13:16.561 00:47:28 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:16.561 00:47:28 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:16.561 00:47:28 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:16.561 00:47:28 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:13:16.561 00:47:29 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:16.561 00:47:29 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:16.561 00:47:29 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:13:16.561 00:47:29 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:16.561 00:47:29 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:16.561 00:47:29 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:13:16.561 00:47:29 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:16.561 00:47:29 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:16.561 00:47:29 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:13:16.820 00:47:29 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:16.820 00:47:29 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:16.820 00:47:29 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:13:16.820 00:47:29 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:16.820 00:47:29 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:16.820 00:47:29 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:13:16.820 00:47:29 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:16.820 00:47:29 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:16.820 00:47:29 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:13:16.820 00:47:29 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:13:16.820 00:47:29 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:16.820 00:47:29 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:16.820 00:47:29 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:13:16.820 00:47:29 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:13:16.820 00:47:29 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:16.820 00:47:29 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:13:17.079 00:47:29 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:13:17.079 00:47:29 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:13:17.079 00:47:29 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:13:17.079 00:47:29 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:17.079 00:47:29 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:17.079 00:47:29 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:13:17.079 00:47:29 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:17.079 00:47:29 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:17.079 00:47:29 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:13:17.079 00:47:29 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:17.079 00:47:29 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:17.079 00:47:29 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:17.079 00:47:29 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:13:17.079 00:47:29 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:17.079 00:47:29 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:17.079 00:47:29 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:13:17.079 00:47:29 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:17.079 00:47:29 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:17.079 00:47:29 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:13:17.338 00:47:29 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:17.338 00:47:29 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:17.338 00:47:29 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:13:17.338 00:47:29 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:17.338 00:47:29 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:17.338 00:47:29 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:13:17.338 00:47:29 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:13:17.338 00:47:29 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:17.338 00:47:29 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:13:17.338 00:47:29 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:13:17.338 00:47:29 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:13:17.338 00:47:29 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:17.338 00:47:29 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:17.338 00:47:29 -- target/ns_hotplug_stress.sh@17 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:13:17.597 00:47:29 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:13:17.597 00:47:29 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:17.597 00:47:29 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:17.597 00:47:29 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:13:17.597 00:47:30 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:17.597 00:47:30 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:17.597 00:47:30 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:17.597 00:47:30 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:17.597 00:47:30 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:17.597 00:47:30 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:17.598 00:47:30 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:17.598 00:47:30 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:17.598 00:47:30 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:17.857 00:47:30 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:17.857 00:47:30 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:17.857 00:47:30 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:17.857 00:47:30 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:17.857 00:47:30 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:17.857 00:47:30 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:17.857 00:47:30 -- target/ns_hotplug_stress.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:13:17.857 00:47:30 -- target/ns_hotplug_stress.sh@70 -- # nvmftestfini 00:13:17.857 00:47:30 -- nvmf/common.sh@476 -- # nvmfcleanup 00:13:17.857 00:47:30 -- nvmf/common.sh@116 -- # sync 00:13:17.857 00:47:30 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:13:17.857 00:47:30 -- nvmf/common.sh@119 -- # set +e 00:13:17.857 00:47:30 -- nvmf/common.sh@120 -- # for i in {1..20} 00:13:17.857 00:47:30 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:13:18.115 rmmod nvme_tcp 00:13:18.115 rmmod nvme_fabrics 00:13:18.115 rmmod nvme_keyring 00:13:18.115 00:47:30 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:13:18.115 00:47:30 -- nvmf/common.sh@123 -- # set -e 00:13:18.115 00:47:30 -- nvmf/common.sh@124 -- # return 0 00:13:18.115 00:47:30 -- nvmf/common.sh@477 -- # '[' -n 79358 ']' 00:13:18.115 00:47:30 -- nvmf/common.sh@478 -- # killprocess 79358 00:13:18.115 00:47:30 -- common/autotest_common.sh@936 -- # '[' -z 79358 ']' 00:13:18.116 00:47:30 -- common/autotest_common.sh@940 -- # kill -0 79358 00:13:18.116 00:47:30 -- common/autotest_common.sh@941 -- # uname 00:13:18.116 00:47:30 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:13:18.116 00:47:30 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 79358 00:13:18.116 00:47:30 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:13:18.116 00:47:30 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:13:18.116 00:47:30 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 79358' 00:13:18.116 killing process with pid 79358 00:13:18.116 00:47:30 -- common/autotest_common.sh@955 -- # kill 79358 00:13:18.116 00:47:30 -- common/autotest_common.sh@960 -- # wait 79358 00:13:18.374 
00:47:30 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:13:18.374 00:47:30 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:13:18.374 00:47:30 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:13:18.374 00:47:30 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:13:18.374 00:47:30 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:13:18.374 00:47:30 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:18.374 00:47:30 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:18.374 00:47:30 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:18.374 00:47:30 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:13:18.374 00:13:18.374 real 0m42.893s 00:13:18.374 user 3m24.281s 00:13:18.374 sys 0m11.525s 00:13:18.374 00:47:30 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:13:18.374 ************************************ 00:13:18.374 END TEST nvmf_ns_hotplug_stress 00:13:18.374 00:47:30 -- common/autotest_common.sh@10 -- # set +x 00:13:18.374 ************************************ 00:13:18.374 00:47:30 -- nvmf/nvmf.sh@33 -- # run_test nvmf_connect_stress /home/vagrant/spdk_repo/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:13:18.374 00:47:30 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:13:18.374 00:47:30 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:13:18.374 00:47:30 -- common/autotest_common.sh@10 -- # set +x 00:13:18.374 ************************************ 00:13:18.374 START TEST nvmf_connect_stress 00:13:18.374 ************************************ 00:13:18.374 00:47:30 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:13:18.374 * Looking for test storage... 00:13:18.374 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:13:18.374 00:47:30 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:13:18.374 00:47:30 -- common/autotest_common.sh@1690 -- # lcov --version 00:13:18.374 00:47:30 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:13:18.633 00:47:30 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:13:18.633 00:47:30 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:13:18.633 00:47:30 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:13:18.633 00:47:30 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:13:18.633 00:47:30 -- scripts/common.sh@335 -- # IFS=.-: 00:13:18.633 00:47:30 -- scripts/common.sh@335 -- # read -ra ver1 00:13:18.633 00:47:30 -- scripts/common.sh@336 -- # IFS=.-: 00:13:18.633 00:47:30 -- scripts/common.sh@336 -- # read -ra ver2 00:13:18.633 00:47:30 -- scripts/common.sh@337 -- # local 'op=<' 00:13:18.633 00:47:30 -- scripts/common.sh@339 -- # ver1_l=2 00:13:18.633 00:47:30 -- scripts/common.sh@340 -- # ver2_l=1 00:13:18.633 00:47:30 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:13:18.633 00:47:30 -- scripts/common.sh@343 -- # case "$op" in 00:13:18.633 00:47:30 -- scripts/common.sh@344 -- # : 1 00:13:18.633 00:47:30 -- scripts/common.sh@363 -- # (( v = 0 )) 00:13:18.633 00:47:30 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:13:18.633 00:47:30 -- scripts/common.sh@364 -- # decimal 1 00:13:18.633 00:47:30 -- scripts/common.sh@352 -- # local d=1 00:13:18.633 00:47:30 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:18.633 00:47:30 -- scripts/common.sh@354 -- # echo 1 00:13:18.633 00:47:30 -- scripts/common.sh@364 -- # ver1[v]=1 00:13:18.633 00:47:30 -- scripts/common.sh@365 -- # decimal 2 00:13:18.633 00:47:30 -- scripts/common.sh@352 -- # local d=2 00:13:18.633 00:47:30 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:18.633 00:47:30 -- scripts/common.sh@354 -- # echo 2 00:13:18.633 00:47:30 -- scripts/common.sh@365 -- # ver2[v]=2 00:13:18.633 00:47:30 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:13:18.633 00:47:30 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:13:18.633 00:47:30 -- scripts/common.sh@367 -- # return 0 00:13:18.633 00:47:30 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:18.633 00:47:30 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:13:18.633 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:18.633 --rc genhtml_branch_coverage=1 00:13:18.633 --rc genhtml_function_coverage=1 00:13:18.633 --rc genhtml_legend=1 00:13:18.633 --rc geninfo_all_blocks=1 00:13:18.633 --rc geninfo_unexecuted_blocks=1 00:13:18.633 00:13:18.633 ' 00:13:18.633 00:47:30 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:13:18.633 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:18.633 --rc genhtml_branch_coverage=1 00:13:18.633 --rc genhtml_function_coverage=1 00:13:18.633 --rc genhtml_legend=1 00:13:18.633 --rc geninfo_all_blocks=1 00:13:18.633 --rc geninfo_unexecuted_blocks=1 00:13:18.633 00:13:18.633 ' 00:13:18.633 00:47:30 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:13:18.633 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:18.633 --rc genhtml_branch_coverage=1 00:13:18.633 --rc genhtml_function_coverage=1 00:13:18.633 --rc genhtml_legend=1 00:13:18.633 --rc geninfo_all_blocks=1 00:13:18.633 --rc geninfo_unexecuted_blocks=1 00:13:18.633 00:13:18.633 ' 00:13:18.633 00:47:30 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:13:18.633 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:18.633 --rc genhtml_branch_coverage=1 00:13:18.633 --rc genhtml_function_coverage=1 00:13:18.633 --rc genhtml_legend=1 00:13:18.633 --rc geninfo_all_blocks=1 00:13:18.633 --rc geninfo_unexecuted_blocks=1 00:13:18.633 00:13:18.633 ' 00:13:18.634 00:47:30 -- target/connect_stress.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:13:18.634 00:47:30 -- nvmf/common.sh@7 -- # uname -s 00:13:18.634 00:47:30 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:18.634 00:47:30 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:18.634 00:47:30 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:18.634 00:47:30 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:18.634 00:47:30 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:18.634 00:47:30 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:18.634 00:47:30 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:18.634 00:47:30 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:18.634 00:47:30 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:18.634 00:47:30 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:18.634 00:47:30 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:15939434-fa82-47b6-ae7c-b62ba203cee8 
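Note: the scripts/common.sh trace above is the harness checking whether the installed lcov predates 2.x before choosing coverage flags: both version strings are split on '.', '-' and ':' and compared field by field, with missing fields treated as zero. A rough, illustrative re-statement of that comparison, reduced to the '<' case of cmp_versions (the function name below is mine, not the script's):

  # Illustrative only: same field-by-field idea as cmp_versions, '<' case.
  version_lt() {
      local IFS=.-:
      local -a a b
      read -ra a <<< "$1"
      read -ra b <<< "$2"
      local n=$(( ${#a[@]} > ${#b[@]} ? ${#a[@]} : ${#b[@]} )) v x y
      for ((v = 0; v < n; v++)); do
          x=${a[v]:-0} y=${b[v]:-0}
          (( x < y )) && return 0      # an earlier field already decides it
          (( x > y )) && return 1
      done
      return 1                         # equal versions are not "less than"
  }
  version_lt 1.15 2 && echo "lcov predates 2.x"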
00:13:18.634 00:47:30 -- nvmf/common.sh@18 -- # NVME_HOSTID=15939434-fa82-47b6-ae7c-b62ba203cee8 00:13:18.634 00:47:30 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:18.634 00:47:30 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:18.634 00:47:30 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:13:18.634 00:47:30 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:13:18.634 00:47:31 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:18.634 00:47:31 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:18.634 00:47:31 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:18.634 00:47:31 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:18.634 00:47:31 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:18.634 00:47:31 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:18.634 00:47:31 -- paths/export.sh@5 -- # export PATH 00:13:18.634 00:47:31 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:18.634 00:47:31 -- nvmf/common.sh@46 -- # : 0 00:13:18.634 00:47:31 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:13:18.634 00:47:31 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:13:18.634 00:47:31 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:13:18.634 00:47:31 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:18.634 00:47:31 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:18.634 00:47:31 -- nvmf/common.sh@32 -- # 
'[' -n '' ']' 00:13:18.634 00:47:31 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:13:18.634 00:47:31 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:13:18.634 00:47:31 -- target/connect_stress.sh@12 -- # nvmftestinit 00:13:18.634 00:47:31 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:13:18.634 00:47:31 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:18.634 00:47:31 -- nvmf/common.sh@436 -- # prepare_net_devs 00:13:18.634 00:47:31 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:13:18.634 00:47:31 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:13:18.634 00:47:31 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:18.634 00:47:31 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:18.634 00:47:31 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:18.634 00:47:31 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:13:18.634 00:47:31 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:13:18.634 00:47:31 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:13:18.634 00:47:31 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:13:18.634 00:47:31 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:13:18.634 00:47:31 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:13:18.634 00:47:31 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:18.634 00:47:31 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:18.634 00:47:31 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:13:18.634 00:47:31 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:13:18.634 00:47:31 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:13:18.634 00:47:31 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:13:18.634 00:47:31 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:13:18.634 00:47:31 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:18.634 00:47:31 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:13:18.634 00:47:31 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:13:18.634 00:47:31 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:13:18.634 00:47:31 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:13:18.634 00:47:31 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:13:18.634 00:47:31 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:13:18.634 Cannot find device "nvmf_tgt_br" 00:13:18.634 00:47:31 -- nvmf/common.sh@154 -- # true 00:13:18.634 00:47:31 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:13:18.634 Cannot find device "nvmf_tgt_br2" 00:13:18.634 00:47:31 -- nvmf/common.sh@155 -- # true 00:13:18.634 00:47:31 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:13:18.634 00:47:31 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:13:18.634 Cannot find device "nvmf_tgt_br" 00:13:18.634 00:47:31 -- nvmf/common.sh@157 -- # true 00:13:18.634 00:47:31 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:13:18.634 Cannot find device "nvmf_tgt_br2" 00:13:18.634 00:47:31 -- nvmf/common.sh@158 -- # true 00:13:18.634 00:47:31 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:13:18.634 00:47:31 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:13:18.893 00:47:31 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:13:18.893 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:13:18.893 00:47:31 -- nvmf/common.sh@161 -- # true 00:13:18.893 00:47:31 -- 
nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:13:18.893 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:13:18.893 00:47:31 -- nvmf/common.sh@162 -- # true 00:13:18.893 00:47:31 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:13:18.893 00:47:31 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:13:18.893 00:47:31 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:13:18.893 00:47:31 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:13:18.893 00:47:31 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:13:18.893 00:47:31 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:13:18.893 00:47:31 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:13:18.893 00:47:31 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:13:18.893 00:47:31 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:13:18.893 00:47:31 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:13:18.893 00:47:31 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:13:18.893 00:47:31 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:13:18.893 00:47:31 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:13:18.893 00:47:31 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:13:18.893 00:47:31 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:13:18.893 00:47:31 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:13:18.893 00:47:31 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:13:18.893 00:47:31 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:13:18.893 00:47:31 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:13:18.893 00:47:31 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:13:18.893 00:47:31 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:13:18.893 00:47:31 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:13:18.893 00:47:31 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:13:18.893 00:47:31 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:13:18.893 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:18.893 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.087 ms 00:13:18.893 00:13:18.893 --- 10.0.0.2 ping statistics --- 00:13:18.893 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:18.893 rtt min/avg/max/mdev = 0.087/0.087/0.087/0.000 ms 00:13:18.893 00:47:31 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:13:18.893 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:13:18.893 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.041 ms 00:13:18.893 00:13:18.893 --- 10.0.0.3 ping statistics --- 00:13:18.893 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:18.893 rtt min/avg/max/mdev = 0.041/0.041/0.041/0.000 ms 00:13:18.893 00:47:31 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:13:18.893 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:13:18.893 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.036 ms 00:13:18.893 00:13:18.893 --- 10.0.0.1 ping statistics --- 00:13:18.893 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:18.893 rtt min/avg/max/mdev = 0.036/0.036/0.036/0.000 ms 00:13:18.893 00:47:31 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:18.893 00:47:31 -- nvmf/common.sh@421 -- # return 0 00:13:18.893 00:47:31 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:13:18.893 00:47:31 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:18.893 00:47:31 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:13:18.893 00:47:31 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:13:18.893 00:47:31 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:18.893 00:47:31 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:13:18.893 00:47:31 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:13:18.893 00:47:31 -- target/connect_stress.sh@13 -- # nvmfappstart -m 0xE 00:13:18.893 00:47:31 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:13:18.893 00:47:31 -- common/autotest_common.sh@722 -- # xtrace_disable 00:13:18.893 00:47:31 -- common/autotest_common.sh@10 -- # set +x 00:13:18.893 00:47:31 -- nvmf/common.sh@469 -- # nvmfpid=81877 00:13:18.893 00:47:31 -- nvmf/common.sh@470 -- # waitforlisten 81877 00:13:18.893 00:47:31 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:13:18.893 00:47:31 -- common/autotest_common.sh@829 -- # '[' -z 81877 ']' 00:13:18.893 00:47:31 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:18.893 00:47:31 -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:18.893 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:18.893 00:47:31 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:18.893 00:47:31 -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:18.893 00:47:31 -- common/autotest_common.sh@10 -- # set +x 00:13:19.152 [2024-12-03 00:47:31.425401] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:13:19.152 [2024-12-03 00:47:31.425490] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:19.152 [2024-12-03 00:47:31.562586] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:13:19.152 [2024-12-03 00:47:31.641973] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:13:19.152 [2024-12-03 00:47:31.642184] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:19.152 [2024-12-03 00:47:31.642204] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:19.152 [2024-12-03 00:47:31.642216] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:13:19.152 [2024-12-03 00:47:31.642399] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:13:19.152 [2024-12-03 00:47:31.643459] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:13:19.152 [2024-12-03 00:47:31.643504] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:13:20.089 00:47:32 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:20.089 00:47:32 -- common/autotest_common.sh@862 -- # return 0 00:13:20.089 00:47:32 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:13:20.089 00:47:32 -- common/autotest_common.sh@728 -- # xtrace_disable 00:13:20.089 00:47:32 -- common/autotest_common.sh@10 -- # set +x 00:13:20.089 00:47:32 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:20.089 00:47:32 -- target/connect_stress.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:13:20.089 00:47:32 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:20.089 00:47:32 -- common/autotest_common.sh@10 -- # set +x 00:13:20.089 [2024-12-03 00:47:32.406845] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:20.089 00:47:32 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:20.090 00:47:32 -- target/connect_stress.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:13:20.090 00:47:32 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:20.090 00:47:32 -- common/autotest_common.sh@10 -- # set +x 00:13:20.090 00:47:32 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:20.090 00:47:32 -- target/connect_stress.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:20.090 00:47:32 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:20.090 00:47:32 -- common/autotest_common.sh@10 -- # set +x 00:13:20.090 [2024-12-03 00:47:32.424459] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:20.090 00:47:32 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:20.090 00:47:32 -- target/connect_stress.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:13:20.090 00:47:32 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:20.090 00:47:32 -- common/autotest_common.sh@10 -- # set +x 00:13:20.090 NULL1 00:13:20.090 00:47:32 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:20.090 00:47:32 -- target/connect_stress.sh@21 -- # PERF_PID=81929 00:13:20.090 00:47:32 -- target/connect_stress.sh@20 -- # /home/vagrant/spdk_repo/spdk/test/nvme/connect_stress/connect_stress -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -t 10 00:13:20.090 00:47:32 -- target/connect_stress.sh@23 -- # rpcs=/home/vagrant/spdk_repo/spdk/test/nvmf/target/rpc.txt 00:13:20.090 00:47:32 -- target/connect_stress.sh@25 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpc.txt 00:13:20.090 00:47:32 -- target/connect_stress.sh@27 -- # seq 1 20 00:13:20.090 00:47:32 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:20.090 00:47:32 -- target/connect_stress.sh@28 -- # cat 00:13:20.090 00:47:32 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:20.090 00:47:32 -- target/connect_stress.sh@28 -- # cat 00:13:20.090 00:47:32 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:20.090 00:47:32 -- target/connect_stress.sh@28 -- # cat 00:13:20.090 00:47:32 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:20.090 00:47:32 -- 
target/connect_stress.sh@28 -- # cat 00:13:20.090 00:47:32 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:20.090 00:47:32 -- target/connect_stress.sh@28 -- # cat 00:13:20.090 00:47:32 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:20.090 00:47:32 -- target/connect_stress.sh@28 -- # cat 00:13:20.090 00:47:32 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:20.090 00:47:32 -- target/connect_stress.sh@28 -- # cat 00:13:20.090 00:47:32 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:20.090 00:47:32 -- target/connect_stress.sh@28 -- # cat 00:13:20.090 00:47:32 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:20.090 00:47:32 -- target/connect_stress.sh@28 -- # cat 00:13:20.090 00:47:32 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:20.090 00:47:32 -- target/connect_stress.sh@28 -- # cat 00:13:20.090 00:47:32 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:20.090 00:47:32 -- target/connect_stress.sh@28 -- # cat 00:13:20.090 00:47:32 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:20.090 00:47:32 -- target/connect_stress.sh@28 -- # cat 00:13:20.090 00:47:32 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:20.090 00:47:32 -- target/connect_stress.sh@28 -- # cat 00:13:20.090 00:47:32 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:20.090 00:47:32 -- target/connect_stress.sh@28 -- # cat 00:13:20.090 00:47:32 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:20.090 00:47:32 -- target/connect_stress.sh@28 -- # cat 00:13:20.090 00:47:32 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:20.090 00:47:32 -- target/connect_stress.sh@28 -- # cat 00:13:20.090 00:47:32 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:20.090 00:47:32 -- target/connect_stress.sh@28 -- # cat 00:13:20.090 00:47:32 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:20.090 00:47:32 -- target/connect_stress.sh@28 -- # cat 00:13:20.090 00:47:32 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:20.090 00:47:32 -- target/connect_stress.sh@28 -- # cat 00:13:20.090 00:47:32 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:20.090 00:47:32 -- target/connect_stress.sh@28 -- # cat 00:13:20.090 00:47:32 -- target/connect_stress.sh@34 -- # kill -0 81929 00:13:20.090 00:47:32 -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:20.090 00:47:32 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:20.090 00:47:32 -- common/autotest_common.sh@10 -- # set +x 00:13:20.349 00:47:32 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:20.349 00:47:32 -- target/connect_stress.sh@34 -- # kill -0 81929 00:13:20.349 00:47:32 -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:20.349 00:47:32 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:20.349 00:47:32 -- common/autotest_common.sh@10 -- # set +x 00:13:20.917 00:47:33 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:20.917 00:47:33 -- target/connect_stress.sh@34 -- # kill -0 81929 00:13:20.917 00:47:33 -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:20.917 00:47:33 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:20.917 00:47:33 -- common/autotest_common.sh@10 -- # set +x 00:13:21.175 00:47:33 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:21.175 00:47:33 -- target/connect_stress.sh@34 -- # kill -0 81929 00:13:21.175 00:47:33 -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:21.176 00:47:33 -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:13:21.176 00:47:33 -- common/autotest_common.sh@10 -- # set +x 00:13:21.435 00:47:33 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:21.435 00:47:33 -- target/connect_stress.sh@34 -- # kill -0 81929 00:13:21.435 00:47:33 -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:21.435 00:47:33 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:21.435 00:47:33 -- common/autotest_common.sh@10 -- # set +x 00:13:21.694 00:47:34 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:21.694 00:47:34 -- target/connect_stress.sh@34 -- # kill -0 81929 00:13:21.694 00:47:34 -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:21.694 00:47:34 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:21.694 00:47:34 -- common/autotest_common.sh@10 -- # set +x 00:13:21.954 00:47:34 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:21.954 00:47:34 -- target/connect_stress.sh@34 -- # kill -0 81929 00:13:21.954 00:47:34 -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:21.954 00:47:34 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:21.954 00:47:34 -- common/autotest_common.sh@10 -- # set +x 00:13:22.531 00:47:34 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:22.531 00:47:34 -- target/connect_stress.sh@34 -- # kill -0 81929 00:13:22.531 00:47:34 -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:22.531 00:47:34 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:22.531 00:47:34 -- common/autotest_common.sh@10 -- # set +x 00:13:22.789 00:47:35 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:22.789 00:47:35 -- target/connect_stress.sh@34 -- # kill -0 81929 00:13:22.789 00:47:35 -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:22.789 00:47:35 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:22.789 00:47:35 -- common/autotest_common.sh@10 -- # set +x 00:13:23.047 00:47:35 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:23.047 00:47:35 -- target/connect_stress.sh@34 -- # kill -0 81929 00:13:23.047 00:47:35 -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:23.047 00:47:35 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:23.047 00:47:35 -- common/autotest_common.sh@10 -- # set +x 00:13:23.305 00:47:35 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:23.305 00:47:35 -- target/connect_stress.sh@34 -- # kill -0 81929 00:13:23.305 00:47:35 -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:23.305 00:47:35 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:23.305 00:47:35 -- common/autotest_common.sh@10 -- # set +x 00:13:23.873 00:47:36 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:23.873 00:47:36 -- target/connect_stress.sh@34 -- # kill -0 81929 00:13:23.873 00:47:36 -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:23.873 00:47:36 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:23.873 00:47:36 -- common/autotest_common.sh@10 -- # set +x 00:13:24.132 00:47:36 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:24.132 00:47:36 -- target/connect_stress.sh@34 -- # kill -0 81929 00:13:24.132 00:47:36 -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:24.132 00:47:36 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:24.132 00:47:36 -- common/autotest_common.sh@10 -- # set +x 00:13:24.390 00:47:36 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:24.390 00:47:36 -- target/connect_stress.sh@34 -- # kill -0 81929 00:13:24.390 00:47:36 -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:24.390 00:47:36 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:24.390 
00:47:36 -- common/autotest_common.sh@10 -- # set +x 00:13:24.649 00:47:37 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:24.649 00:47:37 -- target/connect_stress.sh@34 -- # kill -0 81929 00:13:24.649 00:47:37 -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:24.649 00:47:37 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:24.649 00:47:37 -- common/autotest_common.sh@10 -- # set +x 00:13:24.908 00:47:37 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:24.908 00:47:37 -- target/connect_stress.sh@34 -- # kill -0 81929 00:13:24.908 00:47:37 -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:24.908 00:47:37 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:24.908 00:47:37 -- common/autotest_common.sh@10 -- # set +x 00:13:25.475 00:47:37 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:25.475 00:47:37 -- target/connect_stress.sh@34 -- # kill -0 81929 00:13:25.475 00:47:37 -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:25.475 00:47:37 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:25.475 00:47:37 -- common/autotest_common.sh@10 -- # set +x 00:13:25.734 00:47:38 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:25.734 00:47:38 -- target/connect_stress.sh@34 -- # kill -0 81929 00:13:25.734 00:47:38 -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:25.734 00:47:38 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:25.734 00:47:38 -- common/autotest_common.sh@10 -- # set +x 00:13:25.992 00:47:38 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:25.992 00:47:38 -- target/connect_stress.sh@34 -- # kill -0 81929 00:13:25.992 00:47:38 -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:25.992 00:47:38 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:25.992 00:47:38 -- common/autotest_common.sh@10 -- # set +x 00:13:26.251 00:47:38 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:26.251 00:47:38 -- target/connect_stress.sh@34 -- # kill -0 81929 00:13:26.251 00:47:38 -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:26.251 00:47:38 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:26.251 00:47:38 -- common/autotest_common.sh@10 -- # set +x 00:13:26.510 00:47:38 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:26.510 00:47:38 -- target/connect_stress.sh@34 -- # kill -0 81929 00:13:26.510 00:47:38 -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:26.510 00:47:38 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:26.510 00:47:38 -- common/autotest_common.sh@10 -- # set +x 00:13:27.078 00:47:39 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:27.078 00:47:39 -- target/connect_stress.sh@34 -- # kill -0 81929 00:13:27.078 00:47:39 -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:27.078 00:47:39 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:27.078 00:47:39 -- common/autotest_common.sh@10 -- # set +x 00:13:27.336 00:47:39 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:27.336 00:47:39 -- target/connect_stress.sh@34 -- # kill -0 81929 00:13:27.336 00:47:39 -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:27.336 00:47:39 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:27.336 00:47:39 -- common/autotest_common.sh@10 -- # set +x 00:13:27.597 00:47:39 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:27.597 00:47:39 -- target/connect_stress.sh@34 -- # kill -0 81929 00:13:27.597 00:47:39 -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:27.597 00:47:39 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:27.597 00:47:39 -- 
common/autotest_common.sh@10 -- # set +x 00:13:27.869 00:47:40 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:27.869 00:47:40 -- target/connect_stress.sh@34 -- # kill -0 81929 00:13:27.869 00:47:40 -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:27.869 00:47:40 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:27.869 00:47:40 -- common/autotest_common.sh@10 -- # set +x 00:13:28.146 00:47:40 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:28.146 00:47:40 -- target/connect_stress.sh@34 -- # kill -0 81929 00:13:28.146 00:47:40 -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:28.146 00:47:40 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:28.146 00:47:40 -- common/autotest_common.sh@10 -- # set +x 00:13:28.425 00:47:40 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:28.425 00:47:40 -- target/connect_stress.sh@34 -- # kill -0 81929 00:13:28.425 00:47:40 -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:28.425 00:47:40 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:28.425 00:47:40 -- common/autotest_common.sh@10 -- # set +x 00:13:28.993 00:47:41 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:28.993 00:47:41 -- target/connect_stress.sh@34 -- # kill -0 81929 00:13:28.993 00:47:41 -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:28.993 00:47:41 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:28.993 00:47:41 -- common/autotest_common.sh@10 -- # set +x 00:13:29.251 00:47:41 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:29.251 00:47:41 -- target/connect_stress.sh@34 -- # kill -0 81929 00:13:29.251 00:47:41 -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:29.251 00:47:41 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:29.251 00:47:41 -- common/autotest_common.sh@10 -- # set +x 00:13:29.509 00:47:41 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:29.509 00:47:41 -- target/connect_stress.sh@34 -- # kill -0 81929 00:13:29.509 00:47:41 -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:29.509 00:47:41 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:29.509 00:47:41 -- common/autotest_common.sh@10 -- # set +x 00:13:29.769 00:47:42 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:29.769 00:47:42 -- target/connect_stress.sh@34 -- # kill -0 81929 00:13:29.769 00:47:42 -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:29.769 00:47:42 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:29.769 00:47:42 -- common/autotest_common.sh@10 -- # set +x 00:13:30.029 00:47:42 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:30.029 00:47:42 -- target/connect_stress.sh@34 -- # kill -0 81929 00:13:30.029 00:47:42 -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:30.029 00:47:42 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:30.029 00:47:42 -- common/autotest_common.sh@10 -- # set +x 00:13:30.288 Testing NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:13:30.547 00:47:42 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:30.548 00:47:42 -- target/connect_stress.sh@34 -- # kill -0 81929 00:13:30.548 /home/vagrant/spdk_repo/spdk/test/nvmf/target/connect_stress.sh: line 34: kill: (81929) - No such process 00:13:30.548 00:47:42 -- target/connect_stress.sh@38 -- # wait 81929 00:13:30.548 00:47:42 -- target/connect_stress.sh@39 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpc.txt 00:13:30.548 00:47:42 -- target/connect_stress.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:13:30.548 00:47:42 -- target/connect_stress.sh@43 -- # 
nvmftestfini 00:13:30.548 00:47:42 -- nvmf/common.sh@476 -- # nvmfcleanup 00:13:30.548 00:47:42 -- nvmf/common.sh@116 -- # sync 00:13:30.548 00:47:42 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:13:30.548 00:47:42 -- nvmf/common.sh@119 -- # set +e 00:13:30.548 00:47:42 -- nvmf/common.sh@120 -- # for i in {1..20} 00:13:30.548 00:47:42 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:13:30.548 rmmod nvme_tcp 00:13:30.548 rmmod nvme_fabrics 00:13:30.548 rmmod nvme_keyring 00:13:30.548 00:47:42 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:13:30.548 00:47:42 -- nvmf/common.sh@123 -- # set -e 00:13:30.548 00:47:42 -- nvmf/common.sh@124 -- # return 0 00:13:30.548 00:47:42 -- nvmf/common.sh@477 -- # '[' -n 81877 ']' 00:13:30.548 00:47:42 -- nvmf/common.sh@478 -- # killprocess 81877 00:13:30.548 00:47:42 -- common/autotest_common.sh@936 -- # '[' -z 81877 ']' 00:13:30.548 00:47:42 -- common/autotest_common.sh@940 -- # kill -0 81877 00:13:30.548 00:47:42 -- common/autotest_common.sh@941 -- # uname 00:13:30.548 00:47:42 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:13:30.548 00:47:42 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 81877 00:13:30.548 00:47:42 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:13:30.548 killing process with pid 81877 00:13:30.548 00:47:42 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:13:30.548 00:47:42 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 81877' 00:13:30.548 00:47:42 -- common/autotest_common.sh@955 -- # kill 81877 00:13:30.548 00:47:42 -- common/autotest_common.sh@960 -- # wait 81877 00:13:30.807 00:47:43 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:13:30.807 00:47:43 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:13:30.807 00:47:43 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:13:30.807 00:47:43 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:13:30.807 00:47:43 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:13:30.807 00:47:43 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:30.807 00:47:43 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:30.807 00:47:43 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:30.807 00:47:43 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:13:30.807 00:13:30.807 real 0m12.443s 00:13:30.807 user 0m41.423s 00:13:30.807 sys 0m3.058s 00:13:30.807 00:47:43 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:13:30.807 00:47:43 -- common/autotest_common.sh@10 -- # set +x 00:13:30.807 ************************************ 00:13:30.807 END TEST nvmf_connect_stress 00:13:30.807 ************************************ 00:13:30.807 00:47:43 -- nvmf/nvmf.sh@34 -- # run_test nvmf_fused_ordering /home/vagrant/spdk_repo/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:13:30.807 00:47:43 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:13:30.807 00:47:43 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:13:30.807 00:47:43 -- common/autotest_common.sh@10 -- # set +x 00:13:30.807 ************************************ 00:13:30.807 START TEST nvmf_fused_ordering 00:13:30.807 ************************************ 00:13:30.807 00:47:43 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:13:31.067 * Looking for test storage... 
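The connect_stress phase that just completed follows a simple pattern: configure the target over RPC (TCP transport, subsystem nqn.2016-06.io.spdk:cnode1 capped at 10 namespaces, a listener on 10.0.0.2:4420, and a NULL1 null bdev), launch the connect_stress tool against that subsystem for 10 seconds, keep issuing RPCs for as long as the tool stays alive (the repeated kill -0 81929 probes above), then tear everything down. A hedged sketch of that loop and teardown, using rpc.py as an assumed stand-in for the harness's rpc_cmd wrapper and treating the paths as illustrative:

# Sketch only; rpc.py path and the RPC issued inside the loop are assumptions
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
"$rpc" nvmf_create_transport -t tcp -o -u 8192
"$rpc" nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
"$rpc" nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
"$rpc" bdev_null_create NULL1 1000 512
/home/vagrant/spdk_repo/spdk/test/nvme/connect_stress/connect_stress -c 0x1 \
    -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -t 10 &
PERF_PID=$!
while kill -0 "$PERF_PID" 2>/dev/null; do     # stress tool still connecting/disconnecting
    "$rpc" rpc_get_methods > /dev/null        # the real script replays a pre-generated rpc.txt batch here
done
wait "$PERF_PID" || true
modprobe -v -r nvme-tcp                       # unloads nvme_tcp / nvme_fabrics / nvme_keyring, as in the rmmod lines above
modprobe -v -r nvme-fabrics
kill "$nvmfpid" && wait "$nvmfpid"            # nvmfpid is the nvmf_tgt PID captured at startup (81877 in this run)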
00:13:31.067 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:13:31.067 00:47:43 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:13:31.067 00:47:43 -- common/autotest_common.sh@1690 -- # lcov --version 00:13:31.067 00:47:43 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:13:31.067 00:47:43 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:13:31.067 00:47:43 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:13:31.067 00:47:43 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:13:31.067 00:47:43 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:13:31.067 00:47:43 -- scripts/common.sh@335 -- # IFS=.-: 00:13:31.067 00:47:43 -- scripts/common.sh@335 -- # read -ra ver1 00:13:31.067 00:47:43 -- scripts/common.sh@336 -- # IFS=.-: 00:13:31.067 00:47:43 -- scripts/common.sh@336 -- # read -ra ver2 00:13:31.067 00:47:43 -- scripts/common.sh@337 -- # local 'op=<' 00:13:31.067 00:47:43 -- scripts/common.sh@339 -- # ver1_l=2 00:13:31.067 00:47:43 -- scripts/common.sh@340 -- # ver2_l=1 00:13:31.067 00:47:43 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:13:31.067 00:47:43 -- scripts/common.sh@343 -- # case "$op" in 00:13:31.067 00:47:43 -- scripts/common.sh@344 -- # : 1 00:13:31.067 00:47:43 -- scripts/common.sh@363 -- # (( v = 0 )) 00:13:31.067 00:47:43 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:13:31.067 00:47:43 -- scripts/common.sh@364 -- # decimal 1 00:13:31.067 00:47:43 -- scripts/common.sh@352 -- # local d=1 00:13:31.067 00:47:43 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:31.067 00:47:43 -- scripts/common.sh@354 -- # echo 1 00:13:31.067 00:47:43 -- scripts/common.sh@364 -- # ver1[v]=1 00:13:31.067 00:47:43 -- scripts/common.sh@365 -- # decimal 2 00:13:31.067 00:47:43 -- scripts/common.sh@352 -- # local d=2 00:13:31.067 00:47:43 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:31.067 00:47:43 -- scripts/common.sh@354 -- # echo 2 00:13:31.067 00:47:43 -- scripts/common.sh@365 -- # ver2[v]=2 00:13:31.067 00:47:43 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:13:31.067 00:47:43 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:13:31.067 00:47:43 -- scripts/common.sh@367 -- # return 0 00:13:31.067 00:47:43 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:31.067 00:47:43 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:13:31.067 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:31.067 --rc genhtml_branch_coverage=1 00:13:31.067 --rc genhtml_function_coverage=1 00:13:31.067 --rc genhtml_legend=1 00:13:31.067 --rc geninfo_all_blocks=1 00:13:31.067 --rc geninfo_unexecuted_blocks=1 00:13:31.067 00:13:31.067 ' 00:13:31.067 00:47:43 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:13:31.067 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:31.067 --rc genhtml_branch_coverage=1 00:13:31.067 --rc genhtml_function_coverage=1 00:13:31.067 --rc genhtml_legend=1 00:13:31.067 --rc geninfo_all_blocks=1 00:13:31.067 --rc geninfo_unexecuted_blocks=1 00:13:31.067 00:13:31.067 ' 00:13:31.067 00:47:43 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:13:31.067 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:31.067 --rc genhtml_branch_coverage=1 00:13:31.067 --rc genhtml_function_coverage=1 00:13:31.067 --rc genhtml_legend=1 00:13:31.067 --rc geninfo_all_blocks=1 00:13:31.067 --rc geninfo_unexecuted_blocks=1 00:13:31.067 00:13:31.067 ' 00:13:31.067 
00:47:43 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:13:31.067 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:31.067 --rc genhtml_branch_coverage=1 00:13:31.067 --rc genhtml_function_coverage=1 00:13:31.067 --rc genhtml_legend=1 00:13:31.067 --rc geninfo_all_blocks=1 00:13:31.067 --rc geninfo_unexecuted_blocks=1 00:13:31.067 00:13:31.067 ' 00:13:31.067 00:47:43 -- target/fused_ordering.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:13:31.067 00:47:43 -- nvmf/common.sh@7 -- # uname -s 00:13:31.067 00:47:43 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:31.067 00:47:43 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:31.067 00:47:43 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:31.067 00:47:43 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:31.067 00:47:43 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:31.067 00:47:43 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:31.067 00:47:43 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:31.067 00:47:43 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:31.067 00:47:43 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:31.067 00:47:43 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:31.067 00:47:43 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:15939434-fa82-47b6-ae7c-b62ba203cee8 00:13:31.067 00:47:43 -- nvmf/common.sh@18 -- # NVME_HOSTID=15939434-fa82-47b6-ae7c-b62ba203cee8 00:13:31.067 00:47:43 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:31.067 00:47:43 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:31.067 00:47:43 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:13:31.067 00:47:43 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:13:31.067 00:47:43 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:31.067 00:47:43 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:31.067 00:47:43 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:31.067 00:47:43 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:31.067 00:47:43 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:31.067 00:47:43 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:31.067 00:47:43 -- paths/export.sh@5 -- # export PATH 00:13:31.067 00:47:43 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:31.067 00:47:43 -- nvmf/common.sh@46 -- # : 0 00:13:31.067 00:47:43 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:13:31.067 00:47:43 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:13:31.067 00:47:43 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:13:31.067 00:47:43 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:31.067 00:47:43 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:31.067 00:47:43 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:13:31.067 00:47:43 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:13:31.067 00:47:43 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:13:31.067 00:47:43 -- target/fused_ordering.sh@12 -- # nvmftestinit 00:13:31.067 00:47:43 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:13:31.068 00:47:43 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:31.068 00:47:43 -- nvmf/common.sh@436 -- # prepare_net_devs 00:13:31.068 00:47:43 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:13:31.068 00:47:43 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:13:31.068 00:47:43 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:31.068 00:47:43 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:31.068 00:47:43 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:31.068 00:47:43 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:13:31.068 00:47:43 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:13:31.068 00:47:43 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:13:31.068 00:47:43 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:13:31.068 00:47:43 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:13:31.068 00:47:43 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:13:31.068 00:47:43 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:31.068 00:47:43 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:31.068 00:47:43 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:13:31.068 00:47:43 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:13:31.068 00:47:43 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:13:31.068 00:47:43 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:13:31.068 00:47:43 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:13:31.068 00:47:43 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec 
"$NVMF_TARGET_NAMESPACE") 00:13:31.068 00:47:43 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:13:31.068 00:47:43 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:13:31.068 00:47:43 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:13:31.068 00:47:43 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:13:31.068 00:47:43 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:13:31.068 00:47:43 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:13:31.068 Cannot find device "nvmf_tgt_br" 00:13:31.068 00:47:43 -- nvmf/common.sh@154 -- # true 00:13:31.068 00:47:43 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:13:31.068 Cannot find device "nvmf_tgt_br2" 00:13:31.068 00:47:43 -- nvmf/common.sh@155 -- # true 00:13:31.068 00:47:43 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:13:31.068 00:47:43 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:13:31.068 Cannot find device "nvmf_tgt_br" 00:13:31.068 00:47:43 -- nvmf/common.sh@157 -- # true 00:13:31.068 00:47:43 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:13:31.068 Cannot find device "nvmf_tgt_br2" 00:13:31.068 00:47:43 -- nvmf/common.sh@158 -- # true 00:13:31.068 00:47:43 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:13:31.327 00:47:43 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:13:31.327 00:47:43 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:13:31.327 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:13:31.327 00:47:43 -- nvmf/common.sh@161 -- # true 00:13:31.327 00:47:43 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:13:31.327 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:13:31.327 00:47:43 -- nvmf/common.sh@162 -- # true 00:13:31.327 00:47:43 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:13:31.327 00:47:43 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:13:31.327 00:47:43 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:13:31.327 00:47:43 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:13:31.327 00:47:43 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:13:31.327 00:47:43 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:13:31.327 00:47:43 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:13:31.327 00:47:43 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:13:31.327 00:47:43 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:13:31.327 00:47:43 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:13:31.327 00:47:43 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:13:31.327 00:47:43 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:13:31.327 00:47:43 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:13:31.327 00:47:43 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:13:31.327 00:47:43 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:13:31.327 00:47:43 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:13:31.327 00:47:43 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:13:31.327 00:47:43 -- 
nvmf/common.sh@192 -- # ip link set nvmf_br up 00:13:31.327 00:47:43 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:13:31.327 00:47:43 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:13:31.327 00:47:43 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:13:31.327 00:47:43 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:13:31.327 00:47:43 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:13:31.327 00:47:43 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:13:31.328 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:31.328 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.060 ms 00:13:31.328 00:13:31.328 --- 10.0.0.2 ping statistics --- 00:13:31.328 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:31.328 rtt min/avg/max/mdev = 0.060/0.060/0.060/0.000 ms 00:13:31.328 00:47:43 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:13:31.328 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:13:31.328 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.054 ms 00:13:31.328 00:13:31.328 --- 10.0.0.3 ping statistics --- 00:13:31.328 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:31.328 rtt min/avg/max/mdev = 0.054/0.054/0.054/0.000 ms 00:13:31.328 00:47:43 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:13:31.328 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:13:31.328 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.024 ms 00:13:31.328 00:13:31.328 --- 10.0.0.1 ping statistics --- 00:13:31.328 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:31.328 rtt min/avg/max/mdev = 0.024/0.024/0.024/0.000 ms 00:13:31.328 00:47:43 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:31.328 00:47:43 -- nvmf/common.sh@421 -- # return 0 00:13:31.328 00:47:43 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:13:31.328 00:47:43 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:31.328 00:47:43 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:13:31.328 00:47:43 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:13:31.328 00:47:43 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:31.328 00:47:43 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:13:31.328 00:47:43 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:13:31.328 00:47:43 -- target/fused_ordering.sh@13 -- # nvmfappstart -m 0x2 00:13:31.328 00:47:43 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:13:31.328 00:47:43 -- common/autotest_common.sh@722 -- # xtrace_disable 00:13:31.328 00:47:43 -- common/autotest_common.sh@10 -- # set +x 00:13:31.328 00:47:43 -- nvmf/common.sh@469 -- # nvmfpid=82272 00:13:31.328 00:47:43 -- nvmf/common.sh@470 -- # waitforlisten 82272 00:13:31.328 00:47:43 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:13:31.328 00:47:43 -- common/autotest_common.sh@829 -- # '[' -z 82272 ']' 00:13:31.328 00:47:43 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:31.328 00:47:43 -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:31.328 00:47:43 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:31.328 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
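As with the earlier connect_stress run, the target for the fused_ordering test is started inside the nvmf_tgt_ns_spdk namespace and the script then waits for its RPC socket before configuring anything; that is the waitforlisten step whose "Waiting for process to start up..." message appears just above. A rough sketch of the start-and-wait pattern, with the binary path, core mask, and 100-retry budget taken from the log, while the readiness probe itself is an assumption (the harness checks via its own RPC helper):

# Sketch only; the socket-existence test stands in for the harness's actual readiness check
ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 &
nvmfpid=$!
echo "Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock..."
for ((i = 100; i != 0; i--)); do
    kill -0 "$nvmfpid" || exit 1              # target died during startup
    [ -S /var/tmp/spdk.sock ] && break        # RPC socket is up; safe to start issuing rpc calls
    sleep 0.1
done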
00:13:31.328 00:47:43 -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:31.328 00:47:43 -- common/autotest_common.sh@10 -- # set +x 00:13:31.587 [2024-12-03 00:47:43.897739] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:13:31.587 [2024-12-03 00:47:43.897845] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:31.587 [2024-12-03 00:47:44.037979] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:31.587 [2024-12-03 00:47:44.095566] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:13:31.587 [2024-12-03 00:47:44.095727] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:31.587 [2024-12-03 00:47:44.095739] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:31.587 [2024-12-03 00:47:44.095747] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:31.587 [2024-12-03 00:47:44.095777] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:13:32.634 00:47:44 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:32.634 00:47:44 -- common/autotest_common.sh@862 -- # return 0 00:13:32.634 00:47:44 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:13:32.634 00:47:44 -- common/autotest_common.sh@728 -- # xtrace_disable 00:13:32.634 00:47:44 -- common/autotest_common.sh@10 -- # set +x 00:13:32.634 00:47:44 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:32.634 00:47:44 -- target/fused_ordering.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:13:32.634 00:47:44 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:32.634 00:47:44 -- common/autotest_common.sh@10 -- # set +x 00:13:32.634 [2024-12-03 00:47:44.993312] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:32.634 00:47:44 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:32.634 00:47:44 -- target/fused_ordering.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:13:32.634 00:47:44 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:32.634 00:47:44 -- common/autotest_common.sh@10 -- # set +x 00:13:32.634 00:47:45 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:32.635 00:47:45 -- target/fused_ordering.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:32.635 00:47:45 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:32.635 00:47:45 -- common/autotest_common.sh@10 -- # set +x 00:13:32.635 [2024-12-03 00:47:45.009490] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:32.635 00:47:45 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:32.635 00:47:45 -- target/fused_ordering.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:13:32.635 00:47:45 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:32.635 00:47:45 -- common/autotest_common.sh@10 -- # set +x 00:13:32.635 NULL1 00:13:32.635 00:47:45 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:32.635 00:47:45 -- target/fused_ordering.sh@19 -- # rpc_cmd bdev_wait_for_examine 00:13:32.635 00:47:45 -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:13:32.635 00:47:45 -- common/autotest_common.sh@10 -- # set +x 00:13:32.635 00:47:45 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:32.635 00:47:45 -- target/fused_ordering.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:13:32.635 00:47:45 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:32.635 00:47:45 -- common/autotest_common.sh@10 -- # set +x 00:13:32.635 00:47:45 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:32.635 00:47:45 -- target/fused_ordering.sh@22 -- # /home/vagrant/spdk_repo/spdk/test/nvme/fused_ordering/fused_ordering -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:13:32.635 [2024-12-03 00:47:45.055724] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:13:32.636 [2024-12-03 00:47:45.055772] [ DPDK EAL parameters: fused_ordering --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82322 ] 00:13:33.213 Attached to nqn.2016-06.io.spdk:cnode1 00:13:33.213 Namespace ID: 1 size: 1GB 00:13:33.213 fused_ordering(0) 00:13:33.213 fused_ordering(1) 00:13:33.213 fused_ordering(2) 00:13:33.213 fused_ordering(3) 00:13:33.213 fused_ordering(4) 00:13:33.213 fused_ordering(5) 00:13:33.213 fused_ordering(6) 00:13:33.213 fused_ordering(7) 00:13:33.213 fused_ordering(8) 00:13:33.213 fused_ordering(9) 00:13:33.213 fused_ordering(10) 00:13:33.213 fused_ordering(11) 00:13:33.213 fused_ordering(12) 00:13:33.213 fused_ordering(13) 00:13:33.213 fused_ordering(14) 00:13:33.213 fused_ordering(15) 00:13:33.213 fused_ordering(16) 00:13:33.213 fused_ordering(17) 00:13:33.213 fused_ordering(18) 00:13:33.213 fused_ordering(19) 00:13:33.213 fused_ordering(20) 00:13:33.213 fused_ordering(21) 00:13:33.213 fused_ordering(22) 00:13:33.213 fused_ordering(23) 00:13:33.213 fused_ordering(24) 00:13:33.213 fused_ordering(25) 00:13:33.213 fused_ordering(26) 00:13:33.213 fused_ordering(27) 00:13:33.213 fused_ordering(28) 00:13:33.213 fused_ordering(29) 00:13:33.213 fused_ordering(30) 00:13:33.213 fused_ordering(31) 00:13:33.213 fused_ordering(32) 00:13:33.213 fused_ordering(33) 00:13:33.213 fused_ordering(34) 00:13:33.213 fused_ordering(35) 00:13:33.213 fused_ordering(36) 00:13:33.213 fused_ordering(37) 00:13:33.213 fused_ordering(38) 00:13:33.213 fused_ordering(39) 00:13:33.213 fused_ordering(40) 00:13:33.213 fused_ordering(41) 00:13:33.213 fused_ordering(42) 00:13:33.213 fused_ordering(43) 00:13:33.213 fused_ordering(44) 00:13:33.213 fused_ordering(45) 00:13:33.213 fused_ordering(46) 00:13:33.213 fused_ordering(47) 00:13:33.213 fused_ordering(48) 00:13:33.213 fused_ordering(49) 00:13:33.213 fused_ordering(50) 00:13:33.213 fused_ordering(51) 00:13:33.213 fused_ordering(52) 00:13:33.213 fused_ordering(53) 00:13:33.213 fused_ordering(54) 00:13:33.213 fused_ordering(55) 00:13:33.213 fused_ordering(56) 00:13:33.213 fused_ordering(57) 00:13:33.213 fused_ordering(58) 00:13:33.213 fused_ordering(59) 00:13:33.213 fused_ordering(60) 00:13:33.213 fused_ordering(61) 00:13:33.213 fused_ordering(62) 00:13:33.213 fused_ordering(63) 00:13:33.213 fused_ordering(64) 00:13:33.213 fused_ordering(65) 00:13:33.213 fused_ordering(66) 00:13:33.213 fused_ordering(67) 00:13:33.213 fused_ordering(68) 00:13:33.213 fused_ordering(69) 00:13:33.213 fused_ordering(70) 00:13:33.213 fused_ordering(71) 00:13:33.213 fused_ordering(72) 00:13:33.213 
fused_ordering(73) 00:13:33.213 fused_ordering(74) 00:13:33.213 fused_ordering(75) 00:13:33.213 fused_ordering(76) 00:13:33.213 fused_ordering(77) 00:13:33.213 fused_ordering(78) 00:13:33.213 fused_ordering(79) 00:13:33.213 fused_ordering(80) 00:13:33.213 fused_ordering(81) 00:13:33.213 fused_ordering(82) 00:13:33.213 fused_ordering(83) 00:13:33.213 fused_ordering(84) 00:13:33.213 fused_ordering(85) 00:13:33.213 fused_ordering(86) 00:13:33.213 fused_ordering(87) 00:13:33.213 fused_ordering(88) 00:13:33.213 fused_ordering(89) 00:13:33.213 fused_ordering(90) 00:13:33.213 fused_ordering(91) 00:13:33.213 fused_ordering(92) 00:13:33.213 fused_ordering(93) 00:13:33.213 fused_ordering(94) 00:13:33.213 fused_ordering(95) 00:13:33.213 fused_ordering(96) 00:13:33.213 fused_ordering(97) 00:13:33.213 fused_ordering(98) 00:13:33.213 fused_ordering(99) 00:13:33.214 fused_ordering(100) 00:13:33.214 fused_ordering(101) 00:13:33.214 fused_ordering(102) 00:13:33.214 fused_ordering(103) 00:13:33.214 fused_ordering(104) 00:13:33.214 fused_ordering(105) 00:13:33.214 fused_ordering(106) 00:13:33.214 fused_ordering(107) 00:13:33.214 fused_ordering(108) 00:13:33.214 fused_ordering(109) 00:13:33.214 fused_ordering(110) 00:13:33.214 fused_ordering(111) 00:13:33.214 fused_ordering(112) 00:13:33.214 fused_ordering(113) 00:13:33.214 fused_ordering(114) 00:13:33.214 fused_ordering(115) 00:13:33.214 fused_ordering(116) 00:13:33.214 fused_ordering(117) 00:13:33.214 fused_ordering(118) 00:13:33.214 fused_ordering(119) 00:13:33.214 fused_ordering(120) 00:13:33.214 fused_ordering(121) 00:13:33.214 fused_ordering(122) 00:13:33.214 fused_ordering(123) 00:13:33.214 fused_ordering(124) 00:13:33.214 fused_ordering(125) 00:13:33.214 fused_ordering(126) 00:13:33.214 fused_ordering(127) 00:13:33.214 fused_ordering(128) 00:13:33.214 fused_ordering(129) 00:13:33.214 fused_ordering(130) 00:13:33.214 fused_ordering(131) 00:13:33.214 fused_ordering(132) 00:13:33.214 fused_ordering(133) 00:13:33.214 fused_ordering(134) 00:13:33.214 fused_ordering(135) 00:13:33.214 fused_ordering(136) 00:13:33.214 fused_ordering(137) 00:13:33.214 fused_ordering(138) 00:13:33.214 fused_ordering(139) 00:13:33.214 fused_ordering(140) 00:13:33.214 fused_ordering(141) 00:13:33.214 fused_ordering(142) 00:13:33.214 fused_ordering(143) 00:13:33.214 fused_ordering(144) 00:13:33.214 fused_ordering(145) 00:13:33.214 fused_ordering(146) 00:13:33.214 fused_ordering(147) 00:13:33.214 fused_ordering(148) 00:13:33.214 fused_ordering(149) 00:13:33.214 fused_ordering(150) 00:13:33.214 fused_ordering(151) 00:13:33.214 fused_ordering(152) 00:13:33.214 fused_ordering(153) 00:13:33.214 fused_ordering(154) 00:13:33.214 fused_ordering(155) 00:13:33.214 fused_ordering(156) 00:13:33.214 fused_ordering(157) 00:13:33.214 fused_ordering(158) 00:13:33.214 fused_ordering(159) 00:13:33.214 fused_ordering(160) 00:13:33.214 fused_ordering(161) 00:13:33.214 fused_ordering(162) 00:13:33.214 fused_ordering(163) 00:13:33.214 fused_ordering(164) 00:13:33.214 fused_ordering(165) 00:13:33.214 fused_ordering(166) 00:13:33.214 fused_ordering(167) 00:13:33.214 fused_ordering(168) 00:13:33.214 fused_ordering(169) 00:13:33.214 fused_ordering(170) 00:13:33.214 fused_ordering(171) 00:13:33.214 fused_ordering(172) 00:13:33.214 fused_ordering(173) 00:13:33.214 fused_ordering(174) 00:13:33.214 fused_ordering(175) 00:13:33.214 fused_ordering(176) 00:13:33.214 fused_ordering(177) 00:13:33.214 fused_ordering(178) 00:13:33.214 fused_ordering(179) 00:13:33.214 fused_ordering(180) 00:13:33.214 
fused_ordering(181) 00:13:33.214 fused_ordering(182) 00:13:33.214 [sequential fused_ordering output, indices 183 through 932, condensed; per-entry timestamps advance from 00:13:33.214 to 00:13:34.615] fused_ordering(933)
00:13:34.615 fused_ordering(934) 00:13:34.615 fused_ordering(935) 00:13:34.615 fused_ordering(936) 00:13:34.615 fused_ordering(937) 00:13:34.615 fused_ordering(938) 00:13:34.615 fused_ordering(939) 00:13:34.615 fused_ordering(940) 00:13:34.615 fused_ordering(941) 00:13:34.615 fused_ordering(942) 00:13:34.615 fused_ordering(943) 00:13:34.615 fused_ordering(944) 00:13:34.615 fused_ordering(945) 00:13:34.615 fused_ordering(946) 00:13:34.615 fused_ordering(947) 00:13:34.615 fused_ordering(948) 00:13:34.615 fused_ordering(949) 00:13:34.615 fused_ordering(950) 00:13:34.615 fused_ordering(951) 00:13:34.615 fused_ordering(952) 00:13:34.615 fused_ordering(953) 00:13:34.615 fused_ordering(954) 00:13:34.615 fused_ordering(955) 00:13:34.615 fused_ordering(956) 00:13:34.615 fused_ordering(957) 00:13:34.615 fused_ordering(958) 00:13:34.615 fused_ordering(959) 00:13:34.615 fused_ordering(960) 00:13:34.615 fused_ordering(961) 00:13:34.615 fused_ordering(962) 00:13:34.615 fused_ordering(963) 00:13:34.615 fused_ordering(964) 00:13:34.615 fused_ordering(965) 00:13:34.615 fused_ordering(966) 00:13:34.615 fused_ordering(967) 00:13:34.615 fused_ordering(968) 00:13:34.615 fused_ordering(969) 00:13:34.615 fused_ordering(970) 00:13:34.615 fused_ordering(971) 00:13:34.615 fused_ordering(972) 00:13:34.615 fused_ordering(973) 00:13:34.615 fused_ordering(974) 00:13:34.615 fused_ordering(975) 00:13:34.615 fused_ordering(976) 00:13:34.615 fused_ordering(977) 00:13:34.615 fused_ordering(978) 00:13:34.615 fused_ordering(979) 00:13:34.615 fused_ordering(980) 00:13:34.615 fused_ordering(981) 00:13:34.615 fused_ordering(982) 00:13:34.615 fused_ordering(983) 00:13:34.615 fused_ordering(984) 00:13:34.615 fused_ordering(985) 00:13:34.615 fused_ordering(986) 00:13:34.615 fused_ordering(987) 00:13:34.615 fused_ordering(988) 00:13:34.615 fused_ordering(989) 00:13:34.615 fused_ordering(990) 00:13:34.615 fused_ordering(991) 00:13:34.615 fused_ordering(992) 00:13:34.615 fused_ordering(993) 00:13:34.615 fused_ordering(994) 00:13:34.615 fused_ordering(995) 00:13:34.615 fused_ordering(996) 00:13:34.615 fused_ordering(997) 00:13:34.615 fused_ordering(998) 00:13:34.615 fused_ordering(999) 00:13:34.615 fused_ordering(1000) 00:13:34.615 fused_ordering(1001) 00:13:34.615 fused_ordering(1002) 00:13:34.615 fused_ordering(1003) 00:13:34.615 fused_ordering(1004) 00:13:34.615 fused_ordering(1005) 00:13:34.615 fused_ordering(1006) 00:13:34.615 fused_ordering(1007) 00:13:34.615 fused_ordering(1008) 00:13:34.615 fused_ordering(1009) 00:13:34.615 fused_ordering(1010) 00:13:34.615 fused_ordering(1011) 00:13:34.615 fused_ordering(1012) 00:13:34.615 fused_ordering(1013) 00:13:34.615 fused_ordering(1014) 00:13:34.615 fused_ordering(1015) 00:13:34.615 fused_ordering(1016) 00:13:34.615 fused_ordering(1017) 00:13:34.615 fused_ordering(1018) 00:13:34.615 fused_ordering(1019) 00:13:34.615 fused_ordering(1020) 00:13:34.615 fused_ordering(1021) 00:13:34.615 fused_ordering(1022) 00:13:34.615 fused_ordering(1023) 00:13:34.615 00:47:46 -- target/fused_ordering.sh@23 -- # trap - SIGINT SIGTERM EXIT 00:13:34.615 00:47:46 -- target/fused_ordering.sh@25 -- # nvmftestfini 00:13:34.615 00:47:46 -- nvmf/common.sh@476 -- # nvmfcleanup 00:13:34.615 00:47:46 -- nvmf/common.sh@116 -- # sync 00:13:34.615 00:47:46 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:13:34.615 00:47:46 -- nvmf/common.sh@119 -- # set +e 00:13:34.615 00:47:46 -- nvmf/common.sh@120 -- # for i in {1..20} 00:13:34.615 00:47:46 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:13:34.615 rmmod 
nvme_tcp 00:13:34.615 rmmod nvme_fabrics 00:13:34.615 rmmod nvme_keyring 00:13:34.615 00:47:46 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:13:34.615 00:47:46 -- nvmf/common.sh@123 -- # set -e 00:13:34.615 00:47:46 -- nvmf/common.sh@124 -- # return 0 00:13:34.615 00:47:46 -- nvmf/common.sh@477 -- # '[' -n 82272 ']' 00:13:34.615 00:47:46 -- nvmf/common.sh@478 -- # killprocess 82272 00:13:34.615 00:47:46 -- common/autotest_common.sh@936 -- # '[' -z 82272 ']' 00:13:34.615 00:47:46 -- common/autotest_common.sh@940 -- # kill -0 82272 00:13:34.615 00:47:46 -- common/autotest_common.sh@941 -- # uname 00:13:34.615 00:47:46 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:13:34.615 00:47:46 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 82272 00:13:34.615 00:47:47 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:13:34.615 killing process with pid 82272 00:13:34.615 00:47:47 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:13:34.615 00:47:47 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 82272' 00:13:34.615 00:47:47 -- common/autotest_common.sh@955 -- # kill 82272 00:13:34.615 00:47:47 -- common/autotest_common.sh@960 -- # wait 82272 00:13:34.874 00:47:47 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:13:34.874 00:47:47 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:13:34.874 00:47:47 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:13:34.874 00:47:47 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:13:34.874 00:47:47 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:13:34.874 00:47:47 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:34.874 00:47:47 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:34.874 00:47:47 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:34.874 00:47:47 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:13:34.874 00:13:34.874 real 0m3.931s 00:13:34.874 user 0m4.460s 00:13:34.874 sys 0m1.433s 00:13:34.874 00:47:47 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:13:34.874 00:47:47 -- common/autotest_common.sh@10 -- # set +x 00:13:34.874 ************************************ 00:13:34.874 END TEST nvmf_fused_ordering 00:13:34.874 ************************************ 00:13:34.874 00:47:47 -- nvmf/nvmf.sh@35 -- # run_test nvmf_delete_subsystem /home/vagrant/spdk_repo/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:13:34.874 00:47:47 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:13:34.874 00:47:47 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:13:34.874 00:47:47 -- common/autotest_common.sh@10 -- # set +x 00:13:34.874 ************************************ 00:13:34.874 START TEST nvmf_delete_subsystem 00:13:34.874 ************************************ 00:13:34.874 00:47:47 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:13:34.874 * Looking for test storage... 
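The block above is the standard nvmftestfini teardown for the previous test: unload the nvme-tcp / nvme-fabrics / nvme-keyring initiator modules, kill the nvmf_tgt reactor process (pid 82272 in this run), and remove the per-test network namespace before the next test starts. A minimal sketch of that cleanup, using the names from this run (illustration only; remove_spdk_ns is assumed here to boil down to an ip netns delete):

# unload the kernel NVMe-oF initiator modules the test loaded
modprobe -v -r nvme-tcp
modprobe -v -r nvme-fabrics

# stop the target process that served the previous test
kill 82272
wait 82272 2>/dev/null || true

# drop the per-test namespace and flush the initiator-side address
ip netns delete nvmf_tgt_ns_spdk 2>/dev/null || true
ip -4 addr flush nvmf_init_if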
00:13:34.874 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:13:34.874 00:47:47 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:13:34.874 00:47:47 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:13:34.874 00:47:47 -- common/autotest_common.sh@1690 -- # lcov --version 00:13:35.133 00:47:47 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:13:35.133 00:47:47 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:13:35.133 00:47:47 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:13:35.133 00:47:47 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:13:35.133 00:47:47 -- scripts/common.sh@335 -- # IFS=.-: 00:13:35.133 00:47:47 -- scripts/common.sh@335 -- # read -ra ver1 00:13:35.133 00:47:47 -- scripts/common.sh@336 -- # IFS=.-: 00:13:35.133 00:47:47 -- scripts/common.sh@336 -- # read -ra ver2 00:13:35.133 00:47:47 -- scripts/common.sh@337 -- # local 'op=<' 00:13:35.133 00:47:47 -- scripts/common.sh@339 -- # ver1_l=2 00:13:35.133 00:47:47 -- scripts/common.sh@340 -- # ver2_l=1 00:13:35.133 00:47:47 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:13:35.133 00:47:47 -- scripts/common.sh@343 -- # case "$op" in 00:13:35.133 00:47:47 -- scripts/common.sh@344 -- # : 1 00:13:35.133 00:47:47 -- scripts/common.sh@363 -- # (( v = 0 )) 00:13:35.133 00:47:47 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:13:35.133 00:47:47 -- scripts/common.sh@364 -- # decimal 1 00:13:35.133 00:47:47 -- scripts/common.sh@352 -- # local d=1 00:13:35.133 00:47:47 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:35.133 00:47:47 -- scripts/common.sh@354 -- # echo 1 00:13:35.133 00:47:47 -- scripts/common.sh@364 -- # ver1[v]=1 00:13:35.133 00:47:47 -- scripts/common.sh@365 -- # decimal 2 00:13:35.133 00:47:47 -- scripts/common.sh@352 -- # local d=2 00:13:35.133 00:47:47 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:35.133 00:47:47 -- scripts/common.sh@354 -- # echo 2 00:13:35.133 00:47:47 -- scripts/common.sh@365 -- # ver2[v]=2 00:13:35.133 00:47:47 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:13:35.133 00:47:47 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:13:35.133 00:47:47 -- scripts/common.sh@367 -- # return 0 00:13:35.133 00:47:47 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:35.133 00:47:47 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:13:35.133 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:35.133 --rc genhtml_branch_coverage=1 00:13:35.133 --rc genhtml_function_coverage=1 00:13:35.133 --rc genhtml_legend=1 00:13:35.133 --rc geninfo_all_blocks=1 00:13:35.133 --rc geninfo_unexecuted_blocks=1 00:13:35.133 00:13:35.134 ' 00:13:35.134 00:47:47 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:13:35.134 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:35.134 --rc genhtml_branch_coverage=1 00:13:35.134 --rc genhtml_function_coverage=1 00:13:35.134 --rc genhtml_legend=1 00:13:35.134 --rc geninfo_all_blocks=1 00:13:35.134 --rc geninfo_unexecuted_blocks=1 00:13:35.134 00:13:35.134 ' 00:13:35.134 00:47:47 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:13:35.134 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:35.134 --rc genhtml_branch_coverage=1 00:13:35.134 --rc genhtml_function_coverage=1 00:13:35.134 --rc genhtml_legend=1 00:13:35.134 --rc geninfo_all_blocks=1 00:13:35.134 --rc geninfo_unexecuted_blocks=1 00:13:35.134 00:13:35.134 ' 00:13:35.134 
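The scripts/common.sh trace above (lt 1.15 2 via cmp_versions) is gating the extra lcov coverage flags on the installed lcov version, here parsed as 1.15. A stripped-down sketch of the same dot-separated comparison (simplified; the real helper also splits on '-' and ':' and validates each field):

ver_lt() {                      # returns 0 if $1 < $2, comparing dot-separated numeric fields
  local IFS=.
  local -a a=($1) b=($2)
  local i
  for ((i = 0; i < ${#a[@]} || i < ${#b[@]}; i++)); do
    local x=${a[i]:-0} y=${b[i]:-0}
    ((x < y)) && return 0
    ((x > y)) && return 1
  done
  return 1                      # equal versions are not "less than"
}

ver_lt 1.15 2 && echo "lcov < 2: enable branch/function coverage flags"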
00:47:47 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:13:35.134 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:35.134 --rc genhtml_branch_coverage=1 00:13:35.134 --rc genhtml_function_coverage=1 00:13:35.134 --rc genhtml_legend=1 00:13:35.134 --rc geninfo_all_blocks=1 00:13:35.134 --rc geninfo_unexecuted_blocks=1 00:13:35.134 00:13:35.134 ' 00:13:35.134 00:47:47 -- target/delete_subsystem.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:13:35.134 00:47:47 -- nvmf/common.sh@7 -- # uname -s 00:13:35.134 00:47:47 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:35.134 00:47:47 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:35.134 00:47:47 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:35.134 00:47:47 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:35.134 00:47:47 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:35.134 00:47:47 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:35.134 00:47:47 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:35.134 00:47:47 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:35.134 00:47:47 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:35.134 00:47:47 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:35.134 00:47:47 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:15939434-fa82-47b6-ae7c-b62ba203cee8 00:13:35.134 00:47:47 -- nvmf/common.sh@18 -- # NVME_HOSTID=15939434-fa82-47b6-ae7c-b62ba203cee8 00:13:35.134 00:47:47 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:35.134 00:47:47 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:35.134 00:47:47 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:13:35.134 00:47:47 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:13:35.134 00:47:47 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:35.134 00:47:47 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:35.134 00:47:47 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:35.134 00:47:47 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:35.134 00:47:47 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:35.134 00:47:47 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:35.134 00:47:47 -- paths/export.sh@5 -- # export PATH 00:13:35.134 00:47:47 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:35.134 00:47:47 -- nvmf/common.sh@46 -- # : 0 00:13:35.134 00:47:47 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:13:35.134 00:47:47 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:13:35.134 00:47:47 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:13:35.134 00:47:47 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:35.134 00:47:47 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:35.134 00:47:47 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:13:35.134 00:47:47 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:13:35.134 00:47:47 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:13:35.134 00:47:47 -- target/delete_subsystem.sh@12 -- # nvmftestinit 00:13:35.134 00:47:47 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:13:35.134 00:47:47 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:35.134 00:47:47 -- nvmf/common.sh@436 -- # prepare_net_devs 00:13:35.134 00:47:47 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:13:35.134 00:47:47 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:13:35.134 00:47:47 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:35.134 00:47:47 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:35.134 00:47:47 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:35.134 00:47:47 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:13:35.134 00:47:47 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:13:35.134 00:47:47 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:13:35.134 00:47:47 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:13:35.134 00:47:47 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:13:35.134 00:47:47 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:13:35.134 00:47:47 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:35.134 00:47:47 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:35.134 00:47:47 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:13:35.134 00:47:47 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:13:35.134 00:47:47 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:13:35.134 00:47:47 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:13:35.134 00:47:47 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:13:35.134 00:47:47 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec 
"$NVMF_TARGET_NAMESPACE") 00:13:35.134 00:47:47 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:13:35.134 00:47:47 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:13:35.134 00:47:47 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:13:35.134 00:47:47 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:13:35.134 00:47:47 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:13:35.134 00:47:47 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:13:35.134 Cannot find device "nvmf_tgt_br" 00:13:35.134 00:47:47 -- nvmf/common.sh@154 -- # true 00:13:35.134 00:47:47 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:13:35.134 Cannot find device "nvmf_tgt_br2" 00:13:35.134 00:47:47 -- nvmf/common.sh@155 -- # true 00:13:35.134 00:47:47 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:13:35.134 00:47:47 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:13:35.134 Cannot find device "nvmf_tgt_br" 00:13:35.134 00:47:47 -- nvmf/common.sh@157 -- # true 00:13:35.134 00:47:47 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:13:35.134 Cannot find device "nvmf_tgt_br2" 00:13:35.134 00:47:47 -- nvmf/common.sh@158 -- # true 00:13:35.134 00:47:47 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:13:35.134 00:47:47 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:13:35.134 00:47:47 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:13:35.134 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:13:35.134 00:47:47 -- nvmf/common.sh@161 -- # true 00:13:35.134 00:47:47 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:13:35.134 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:13:35.134 00:47:47 -- nvmf/common.sh@162 -- # true 00:13:35.134 00:47:47 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:13:35.134 00:47:47 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:13:35.134 00:47:47 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:13:35.134 00:47:47 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:13:35.134 00:47:47 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:13:35.393 00:47:47 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:13:35.393 00:47:47 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:13:35.393 00:47:47 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:13:35.393 00:47:47 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:13:35.393 00:47:47 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:13:35.393 00:47:47 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:13:35.394 00:47:47 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:13:35.394 00:47:47 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:13:35.394 00:47:47 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:13:35.394 00:47:47 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:13:35.394 00:47:47 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:13:35.394 00:47:47 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:13:35.394 00:47:47 -- 
nvmf/common.sh@192 -- # ip link set nvmf_br up 00:13:35.394 00:47:47 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:13:35.394 00:47:47 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:13:35.394 00:47:47 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:13:35.394 00:47:47 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:13:35.394 00:47:47 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:13:35.394 00:47:47 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:13:35.394 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:35.394 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.060 ms 00:13:35.394 00:13:35.394 --- 10.0.0.2 ping statistics --- 00:13:35.394 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:35.394 rtt min/avg/max/mdev = 0.060/0.060/0.060/0.000 ms 00:13:35.394 00:47:47 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:13:35.394 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:13:35.394 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.041 ms 00:13:35.394 00:13:35.394 --- 10.0.0.3 ping statistics --- 00:13:35.394 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:35.394 rtt min/avg/max/mdev = 0.041/0.041/0.041/0.000 ms 00:13:35.394 00:47:47 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:13:35.394 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:13:35.394 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.024 ms 00:13:35.394 00:13:35.394 --- 10.0.0.1 ping statistics --- 00:13:35.394 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:35.394 rtt min/avg/max/mdev = 0.024/0.024/0.024/0.000 ms 00:13:35.394 00:47:47 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:35.394 00:47:47 -- nvmf/common.sh@421 -- # return 0 00:13:35.394 00:47:47 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:13:35.394 00:47:47 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:35.394 00:47:47 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:13:35.394 00:47:47 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:13:35.394 00:47:47 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:35.394 00:47:47 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:13:35.394 00:47:47 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:13:35.394 00:47:47 -- target/delete_subsystem.sh@13 -- # nvmfappstart -m 0x3 00:13:35.394 00:47:47 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:13:35.394 00:47:47 -- common/autotest_common.sh@722 -- # xtrace_disable 00:13:35.394 00:47:47 -- common/autotest_common.sh@10 -- # set +x 00:13:35.394 00:47:47 -- nvmf/common.sh@469 -- # nvmfpid=82543 00:13:35.394 00:47:47 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:13:35.394 00:47:47 -- nvmf/common.sh@470 -- # waitforlisten 82543 00:13:35.394 00:47:47 -- common/autotest_common.sh@829 -- # '[' -z 82543 ']' 00:13:35.394 00:47:47 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:35.394 00:47:47 -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:35.394 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:35.394 00:47:47 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
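The ip/iptables sequence traced above (before nvmf_tgt is started) builds the nvmf_veth_init topology: the target runs inside the nvmf_tgt_ns_spdk namespace on 10.0.0.2 and 10.0.0.3, the initiator stays in the root namespace on 10.0.0.1, and the three veth pairs are joined by the nvmf_br bridge. A condensed sketch of the same setup with the names and addresses used in this run (the real helper in nvmf/common.sh adds more cleanup and error handling):

ip netns add nvmf_tgt_ns_spdk

# three veth pairs: one initiator-side, two target-side interfaces
ip link add nvmf_init_if type veth peer name nvmf_init_br
ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2

# move the target ends into the namespace and assign addresses
ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2

# bring links up and bridge the three *_br ends together
ip link set nvmf_init_if up; ip link set nvmf_init_br up
ip link set nvmf_tgt_br up;  ip link set nvmf_tgt_br2 up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up
ip link add nvmf_br type bridge && ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br  master nvmf_br
ip link set nvmf_tgt_br2 master nvmf_br

# allow NVMe/TCP traffic (port 4420) in and across the bridge, then sanity-check
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
ping -c 1 10.0.0.2 && ping -c 1 10.0.0.3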
00:13:35.394 00:47:47 -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:35.394 00:47:47 -- common/autotest_common.sh@10 -- # set +x 00:13:35.394 [2024-12-03 00:47:47.893609] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:13:35.394 [2024-12-03 00:47:47.893708] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:35.653 [2024-12-03 00:47:48.038613] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:13:35.653 [2024-12-03 00:47:48.110100] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:13:35.653 [2024-12-03 00:47:48.110617] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:35.653 [2024-12-03 00:47:48.110769] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:35.653 [2024-12-03 00:47:48.110974] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:35.653 [2024-12-03 00:47:48.111285] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:13:35.653 [2024-12-03 00:47:48.111302] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:36.590 00:47:48 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:36.590 00:47:48 -- common/autotest_common.sh@862 -- # return 0 00:13:36.590 00:47:48 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:13:36.590 00:47:48 -- common/autotest_common.sh@728 -- # xtrace_disable 00:13:36.590 00:47:48 -- common/autotest_common.sh@10 -- # set +x 00:13:36.590 00:47:48 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:36.590 00:47:48 -- target/delete_subsystem.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:13:36.590 00:47:48 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:36.590 00:47:48 -- common/autotest_common.sh@10 -- # set +x 00:13:36.590 [2024-12-03 00:47:48.984241] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:36.590 00:47:48 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:36.590 00:47:48 -- target/delete_subsystem.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:13:36.590 00:47:48 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:36.590 00:47:48 -- common/autotest_common.sh@10 -- # set +x 00:13:36.590 00:47:48 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:36.590 00:47:48 -- target/delete_subsystem.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:36.590 00:47:48 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:36.590 00:47:48 -- common/autotest_common.sh@10 -- # set +x 00:13:36.590 [2024-12-03 00:47:49.000609] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:36.590 00:47:49 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:36.590 00:47:49 -- target/delete_subsystem.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:13:36.590 00:47:49 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:36.590 00:47:49 -- common/autotest_common.sh@10 -- # set +x 00:13:36.590 NULL1 00:13:36.590 00:47:49 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:36.590 00:47:49 -- 
target/delete_subsystem.sh@23 -- # rpc_cmd bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:13:36.590 00:47:49 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:36.590 00:47:49 -- common/autotest_common.sh@10 -- # set +x 00:13:36.590 Delay0 00:13:36.590 00:47:49 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:36.590 00:47:49 -- target/delete_subsystem.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:36.590 00:47:49 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:36.590 00:47:49 -- common/autotest_common.sh@10 -- # set +x 00:13:36.590 00:47:49 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:36.590 00:47:49 -- target/delete_subsystem.sh@28 -- # perf_pid=82594 00:13:36.590 00:47:49 -- target/delete_subsystem.sh@30 -- # sleep 2 00:13:36.590 00:47:49 -- target/delete_subsystem.sh@26 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 00:13:36.849 [2024-12-03 00:47:49.195159] subsystem.c:1344:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:13:38.751 00:47:51 -- target/delete_subsystem.sh@32 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:38.751 00:47:51 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:38.751 00:47:51 -- common/autotest_common.sh@10 -- # set +x 00:13:38.751 Read completed with error (sct=0, sc=8) 00:13:38.751 starting I/O failed: -6 00:13:38.751 Read completed with error (sct=0, sc=8) 00:13:38.751 Read completed with error (sct=0, sc=8) 00:13:38.751 Read completed with error (sct=0, sc=8) 00:13:38.751 Read completed with error (sct=0, sc=8) 00:13:38.751 starting I/O failed: -6 00:13:38.751 Write completed with error (sct=0, sc=8) 00:13:38.751 Read completed with error (sct=0, sc=8) 00:13:38.751 Read completed with error (sct=0, sc=8) 00:13:38.751 Read completed with error (sct=0, sc=8) 00:13:38.751 starting I/O failed: -6 00:13:38.751 Read completed with error (sct=0, sc=8) 00:13:38.751 Read completed with error (sct=0, sc=8) 00:13:38.751 Write completed with error (sct=0, sc=8) 00:13:38.751 Read completed with error (sct=0, sc=8) 00:13:38.751 starting I/O failed: -6 00:13:38.751 Read completed with error (sct=0, sc=8) 00:13:38.751 Read completed with error (sct=0, sc=8) 00:13:38.751 Read completed with error (sct=0, sc=8) 00:13:38.751 Read completed with error (sct=0, sc=8) 00:13:38.751 starting I/O failed: -6 00:13:38.751 Write completed with error (sct=0, sc=8) 00:13:38.751 Write completed with error (sct=0, sc=8) 00:13:38.751 Read completed with error (sct=0, sc=8) 00:13:38.751 Write completed with error (sct=0, sc=8) 00:13:38.751 starting I/O failed: -6 00:13:38.751 Read completed with error (sct=0, sc=8) 00:13:38.751 Write completed with error (sct=0, sc=8) 00:13:38.751 Read completed with error (sct=0, sc=8) 00:13:38.751 Read completed with error (sct=0, sc=8) 00:13:38.751 starting I/O failed: -6 00:13:38.751 Read completed with error (sct=0, sc=8) 00:13:38.751 Read completed with error (sct=0, sc=8) 00:13:38.751 Read completed with error (sct=0, sc=8) 00:13:38.751 Read completed with error (sct=0, sc=8) 00:13:38.751 starting I/O failed: -6 00:13:38.751 Read completed with error (sct=0, sc=8) 00:13:38.751 Read completed with 
error (sct=0, sc=8) 00:13:38.751 [repeated 'Read completed with error (sct=0, sc=8)' and 'Write completed with error (sct=0, sc=8)' completions interleaved with 'starting I/O failed: -6' markers, condensed; timestamps 00:13:38.751 through 00:13:40.129] [2024-12-03 00:47:51.232623] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fd71000c1d0 is same with the state(5) to be set [2024-12-03 00:47:51.233308] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fd710000c00 is same with the state(5) to be set [2024-12-03 00:47:51.233794] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x176f870 is same with the state(5) to be set [2024-12-03 00:47:52.212210] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x176e070 is same with the state(5) to be set [2024-12-03 00:47:52.231501] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1770120 is same with the state(5) to be set [2024-12-03 00:47:52.233962] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x176fbc0 is same with the state(5) to be set 00:13:40.129 [remaining Read/Write completion errors condensed] Write completed with error (sct=0,
sc=8) 00:13:40.129 Read completed with error (sct=0, sc=8) 00:13:40.129 Read completed with error (sct=0, sc=8) 00:13:40.129 [2024-12-03 00:47:52.234367] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fd71000bf20 is same with the state(5) to be set 00:13:40.129 Read completed with error (sct=0, sc=8) 00:13:40.129 Read completed with error (sct=0, sc=8) 00:13:40.129 Read completed with error (sct=0, sc=8) 00:13:40.129 Write completed with error (sct=0, sc=8) 00:13:40.129 Write completed with error (sct=0, sc=8) 00:13:40.129 Read completed with error (sct=0, sc=8) 00:13:40.129 Read completed with error (sct=0, sc=8) 00:13:40.129 Read completed with error (sct=0, sc=8) 00:13:40.129 Read completed with error (sct=0, sc=8) 00:13:40.129 Read completed with error (sct=0, sc=8) 00:13:40.129 Read completed with error (sct=0, sc=8) 00:13:40.129 Read completed with error (sct=0, sc=8) 00:13:40.129 Read completed with error (sct=0, sc=8) 00:13:40.129 Read completed with error (sct=0, sc=8) 00:13:40.129 Read completed with error (sct=0, sc=8) 00:13:40.129 Read completed with error (sct=0, sc=8) 00:13:40.129 Read completed with error (sct=0, sc=8) 00:13:40.129 Write completed with error (sct=0, sc=8) 00:13:40.129 Read completed with error (sct=0, sc=8) 00:13:40.129 Write completed with error (sct=0, sc=8) 00:13:40.129 Write completed with error (sct=0, sc=8) 00:13:40.129 Read completed with error (sct=0, sc=8) 00:13:40.129 Read completed with error (sct=0, sc=8) 00:13:40.129 Read completed with error (sct=0, sc=8) 00:13:40.129 Read completed with error (sct=0, sc=8) 00:13:40.129 Write completed with error (sct=0, sc=8) 00:13:40.129 Write completed with error (sct=0, sc=8) 00:13:40.129 Read completed with error (sct=0, sc=8) 00:13:40.129 Read completed with error (sct=0, sc=8) 00:13:40.129 Read completed with error (sct=0, sc=8) 00:13:40.130 [2024-12-03 00:47:52.235796] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fd71000c480 is same with the state(5) to be set 00:13:40.130 [2024-12-03 00:47:52.236268] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x176e070 (9): Bad file descriptor 00:13:40.130 /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf: errors occurred 00:13:40.130 00:47:52 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:40.130 00:47:52 -- target/delete_subsystem.sh@34 -- # delay=0 00:13:40.130 00:47:52 -- target/delete_subsystem.sh@35 -- # kill -0 82594 00:13:40.130 00:47:52 -- target/delete_subsystem.sh@36 -- # sleep 0.5 00:13:40.130 Initializing NVMe Controllers 00:13:40.130 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:13:40.130 Controller IO queue size 128, less than required. 00:13:40.130 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:13:40.130 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:13:40.130 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:13:40.130 Initialization complete. Launching workers. 
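The burst of "completed with error (sct=0, sc=8)" records above is the expected outcome of this stage: status code 8 under status code type 0 is the generic "ABORTED - SQ DELETION" status, so every command that spdk_nvme_perf still had queued was aborted when the subsystem was torn down underneath it, and perf reports "errors occurred" instead of hanging. A minimal sketch of the scenario, reusing the flags from the traced command; the delete call itself happens earlier in delete_subsystem.sh and is not visible in this excerpt, so the RPC name here is an assumption:

    # Sketch only: run perf against the target, then delete the subsystem mid-I/O.
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -c 0xC \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
        -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 &
    perf_pid=$!
    # Tearing the subsystem down while up to 128 commands per queue are in flight
    # forces the target to abort them; each abort surfaces as sct=0, sc=8 on the host.
    rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1   # assumed RPC name
    wait "$perf_pid" || true   # perf is expected to exit with "errors occurred"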
00:13:40.130 ======================================================== 00:13:40.130 Latency(us) 00:13:40.130 Device Information : IOPS MiB/s Average min max 00:13:40.130 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 157.66 0.08 968769.35 300.37 2003271.56 00:13:40.130 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 168.07 0.08 941435.94 710.36 2004978.80 00:13:40.130 ======================================================== 00:13:40.130 Total : 325.73 0.16 954665.81 300.37 2004978.80 00:13:40.130 00:13:40.389 00:47:52 -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 )) 00:13:40.389 00:47:52 -- target/delete_subsystem.sh@35 -- # kill -0 82594 00:13:40.389 /home/vagrant/spdk_repo/spdk/test/nvmf/target/delete_subsystem.sh: line 35: kill: (82594) - No such process 00:13:40.389 00:47:52 -- target/delete_subsystem.sh@45 -- # NOT wait 82594 00:13:40.389 00:47:52 -- common/autotest_common.sh@650 -- # local es=0 00:13:40.389 00:47:52 -- common/autotest_common.sh@652 -- # valid_exec_arg wait 82594 00:13:40.389 00:47:52 -- common/autotest_common.sh@638 -- # local arg=wait 00:13:40.389 00:47:52 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:40.389 00:47:52 -- common/autotest_common.sh@642 -- # type -t wait 00:13:40.389 00:47:52 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:40.389 00:47:52 -- common/autotest_common.sh@653 -- # wait 82594 00:13:40.389 00:47:52 -- common/autotest_common.sh@653 -- # es=1 00:13:40.389 00:47:52 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:13:40.389 00:47:52 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:13:40.389 00:47:52 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:13:40.389 00:47:52 -- target/delete_subsystem.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:13:40.389 00:47:52 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:40.389 00:47:52 -- common/autotest_common.sh@10 -- # set +x 00:13:40.389 00:47:52 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:40.389 00:47:52 -- target/delete_subsystem.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:40.389 00:47:52 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:40.389 00:47:52 -- common/autotest_common.sh@10 -- # set +x 00:13:40.389 [2024-12-03 00:47:52.763293] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:40.389 00:47:52 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:40.389 00:47:52 -- target/delete_subsystem.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:40.389 00:47:52 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:40.389 00:47:52 -- common/autotest_common.sh@10 -- # set +x 00:13:40.389 00:47:52 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:40.389 00:47:52 -- target/delete_subsystem.sh@54 -- # perf_pid=82640 00:13:40.389 00:47:52 -- target/delete_subsystem.sh@56 -- # delay=0 00:13:40.389 00:47:52 -- target/delete_subsystem.sh@57 -- # kill -0 82640 00:13:40.389 00:47:52 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:13:40.389 00:47:52 -- target/delete_subsystem.sh@52 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 00:13:40.647 [2024-12-03 00:47:52.929886] subsystem.c:1344:spdk_nvmf_subsystem_listener_allowed: *WARNING*: 
Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:13:40.906 00:47:53 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:13:40.906 00:47:53 -- target/delete_subsystem.sh@57 -- # kill -0 82640 00:13:40.906 00:47:53 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:13:41.474 00:47:53 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:13:41.474 00:47:53 -- target/delete_subsystem.sh@57 -- # kill -0 82640 00:13:41.474 00:47:53 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:13:42.041 00:47:54 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:13:42.041 00:47:54 -- target/delete_subsystem.sh@57 -- # kill -0 82640 00:13:42.041 00:47:54 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:13:42.300 00:47:54 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:13:42.300 00:47:54 -- target/delete_subsystem.sh@57 -- # kill -0 82640 00:13:42.300 00:47:54 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:13:42.865 00:47:55 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:13:42.865 00:47:55 -- target/delete_subsystem.sh@57 -- # kill -0 82640 00:13:42.865 00:47:55 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:13:43.429 00:47:55 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:13:43.429 00:47:55 -- target/delete_subsystem.sh@57 -- # kill -0 82640 00:13:43.429 00:47:55 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:13:43.688 Initializing NVMe Controllers 00:13:43.688 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:13:43.688 Controller IO queue size 128, less than required. 00:13:43.688 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:13:43.688 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:13:43.688 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:13:43.688 Initialization complete. Launching workers. 
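The repeated "kill -0 82640 ... sleep 0.5" iterations above are the script's liveness poll: kill -0 delivers no signal, it only tests whether the perf process still exists, and the delay counter bounds how long the test is willing to wait. A sketch of the same pattern with illustrative variable names (the real script splits the check across its @57/@60 trace lines):

    # Poll the background perf process, giving up after roughly 10 seconds.
    delay=0
    while kill -0 "$perf_pid" 2>/dev/null; do
        if (( delay++ > 20 )); then
            echo "spdk_nvme_perf did not finish in time" >&2
            exit 1
        fi
        sleep 0.5
    done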
00:13:43.688 ======================================================== 00:13:43.688 Latency(us) 00:13:43.688 Device Information : IOPS MiB/s Average min max 00:13:43.688 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 128.00 0.06 1002574.67 1000114.23 1041667.45 00:13:43.688 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 128.00 0.06 1004425.75 1000147.22 1041304.54 00:13:43.688 ======================================================== 00:13:43.688 Total : 256.00 0.12 1003500.21 1000114.23 1041667.45 00:13:43.688 00:13:43.945 00:47:56 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:13:43.945 00:47:56 -- target/delete_subsystem.sh@57 -- # kill -0 82640 00:13:43.945 /home/vagrant/spdk_repo/spdk/test/nvmf/target/delete_subsystem.sh: line 57: kill: (82640) - No such process 00:13:43.945 00:47:56 -- target/delete_subsystem.sh@67 -- # wait 82640 00:13:43.946 00:47:56 -- target/delete_subsystem.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:13:43.946 00:47:56 -- target/delete_subsystem.sh@71 -- # nvmftestfini 00:13:43.946 00:47:56 -- nvmf/common.sh@476 -- # nvmfcleanup 00:13:43.946 00:47:56 -- nvmf/common.sh@116 -- # sync 00:13:43.946 00:47:56 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:13:43.946 00:47:56 -- nvmf/common.sh@119 -- # set +e 00:13:43.946 00:47:56 -- nvmf/common.sh@120 -- # for i in {1..20} 00:13:43.946 00:47:56 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:13:43.946 rmmod nvme_tcp 00:13:43.946 rmmod nvme_fabrics 00:13:43.946 rmmod nvme_keyring 00:13:43.946 00:47:56 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:13:43.946 00:47:56 -- nvmf/common.sh@123 -- # set -e 00:13:43.946 00:47:56 -- nvmf/common.sh@124 -- # return 0 00:13:43.946 00:47:56 -- nvmf/common.sh@477 -- # '[' -n 82543 ']' 00:13:43.946 00:47:56 -- nvmf/common.sh@478 -- # killprocess 82543 00:13:43.946 00:47:56 -- common/autotest_common.sh@936 -- # '[' -z 82543 ']' 00:13:43.946 00:47:56 -- common/autotest_common.sh@940 -- # kill -0 82543 00:13:43.946 00:47:56 -- common/autotest_common.sh@941 -- # uname 00:13:43.946 00:47:56 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:13:43.946 00:47:56 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 82543 00:13:43.946 00:47:56 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:13:43.946 00:47:56 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:13:43.946 00:47:56 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 82543' 00:13:43.946 killing process with pid 82543 00:13:43.946 00:47:56 -- common/autotest_common.sh@955 -- # kill 82543 00:13:43.946 00:47:56 -- common/autotest_common.sh@960 -- # wait 82543 00:13:44.204 00:47:56 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:13:44.204 00:47:56 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:13:44.204 00:47:56 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:13:44.204 00:47:56 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:13:44.204 00:47:56 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:13:44.204 00:47:56 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:44.204 00:47:56 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:44.204 00:47:56 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:44.204 00:47:56 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:13:44.463 00:13:44.463 real 0m9.439s 00:13:44.463 user 0m28.994s 00:13:44.463 sys 0m1.526s 00:13:44.463 00:47:56 -- 
common/autotest_common.sh@1115 -- # xtrace_disable 00:13:44.463 ************************************ 00:13:44.463 END TEST nvmf_delete_subsystem 00:13:44.463 00:47:56 -- common/autotest_common.sh@10 -- # set +x 00:13:44.463 ************************************ 00:13:44.463 00:47:56 -- nvmf/nvmf.sh@36 -- # [[ 0 -eq 1 ]] 00:13:44.463 00:47:56 -- nvmf/nvmf.sh@39 -- # [[ 0 -eq 1 ]] 00:13:44.463 00:47:56 -- nvmf/nvmf.sh@46 -- # run_test nvmf_host_management /home/vagrant/spdk_repo/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:13:44.463 00:47:56 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:13:44.463 00:47:56 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:13:44.463 00:47:56 -- common/autotest_common.sh@10 -- # set +x 00:13:44.463 ************************************ 00:13:44.463 START TEST nvmf_host_management 00:13:44.463 ************************************ 00:13:44.463 00:47:56 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:13:44.463 * Looking for test storage... 00:13:44.463 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:13:44.463 00:47:56 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:13:44.463 00:47:56 -- common/autotest_common.sh@1690 -- # lcov --version 00:13:44.463 00:47:56 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:13:44.463 00:47:56 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:13:44.463 00:47:56 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:13:44.463 00:47:56 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:13:44.463 00:47:56 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:13:44.463 00:47:56 -- scripts/common.sh@335 -- # IFS=.-: 00:13:44.463 00:47:56 -- scripts/common.sh@335 -- # read -ra ver1 00:13:44.463 00:47:56 -- scripts/common.sh@336 -- # IFS=.-: 00:13:44.463 00:47:56 -- scripts/common.sh@336 -- # read -ra ver2 00:13:44.463 00:47:56 -- scripts/common.sh@337 -- # local 'op=<' 00:13:44.463 00:47:56 -- scripts/common.sh@339 -- # ver1_l=2 00:13:44.463 00:47:56 -- scripts/common.sh@340 -- # ver2_l=1 00:13:44.463 00:47:56 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:13:44.463 00:47:56 -- scripts/common.sh@343 -- # case "$op" in 00:13:44.463 00:47:56 -- scripts/common.sh@344 -- # : 1 00:13:44.463 00:47:56 -- scripts/common.sh@363 -- # (( v = 0 )) 00:13:44.463 00:47:56 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:13:44.463 00:47:56 -- scripts/common.sh@364 -- # decimal 1 00:13:44.463 00:47:56 -- scripts/common.sh@352 -- # local d=1 00:13:44.463 00:47:56 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:44.463 00:47:56 -- scripts/common.sh@354 -- # echo 1 00:13:44.463 00:47:56 -- scripts/common.sh@364 -- # ver1[v]=1 00:13:44.463 00:47:56 -- scripts/common.sh@365 -- # decimal 2 00:13:44.463 00:47:56 -- scripts/common.sh@352 -- # local d=2 00:13:44.463 00:47:56 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:44.463 00:47:56 -- scripts/common.sh@354 -- # echo 2 00:13:44.463 00:47:56 -- scripts/common.sh@365 -- # ver2[v]=2 00:13:44.463 00:47:56 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:13:44.463 00:47:56 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:13:44.463 00:47:56 -- scripts/common.sh@367 -- # return 0 00:13:44.463 00:47:56 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:44.464 00:47:56 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:13:44.464 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:44.464 --rc genhtml_branch_coverage=1 00:13:44.464 --rc genhtml_function_coverage=1 00:13:44.464 --rc genhtml_legend=1 00:13:44.464 --rc geninfo_all_blocks=1 00:13:44.464 --rc geninfo_unexecuted_blocks=1 00:13:44.464 00:13:44.464 ' 00:13:44.464 00:47:56 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:13:44.464 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:44.464 --rc genhtml_branch_coverage=1 00:13:44.464 --rc genhtml_function_coverage=1 00:13:44.464 --rc genhtml_legend=1 00:13:44.464 --rc geninfo_all_blocks=1 00:13:44.464 --rc geninfo_unexecuted_blocks=1 00:13:44.464 00:13:44.464 ' 00:13:44.464 00:47:56 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:13:44.464 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:44.464 --rc genhtml_branch_coverage=1 00:13:44.464 --rc genhtml_function_coverage=1 00:13:44.464 --rc genhtml_legend=1 00:13:44.464 --rc geninfo_all_blocks=1 00:13:44.464 --rc geninfo_unexecuted_blocks=1 00:13:44.464 00:13:44.464 ' 00:13:44.464 00:47:56 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:13:44.464 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:44.464 --rc genhtml_branch_coverage=1 00:13:44.464 --rc genhtml_function_coverage=1 00:13:44.464 --rc genhtml_legend=1 00:13:44.464 --rc geninfo_all_blocks=1 00:13:44.464 --rc geninfo_unexecuted_blocks=1 00:13:44.464 00:13:44.464 ' 00:13:44.464 00:47:56 -- target/host_management.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:13:44.464 00:47:56 -- nvmf/common.sh@7 -- # uname -s 00:13:44.464 00:47:56 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:44.464 00:47:56 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:44.464 00:47:56 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:44.464 00:47:56 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:44.464 00:47:56 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:44.464 00:47:56 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:44.464 00:47:56 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:44.464 00:47:56 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:44.464 00:47:56 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:44.464 00:47:56 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:44.464 00:47:56 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:15939434-fa82-47b6-ae7c-b62ba203cee8 
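The two nvmf/common.sh lines traced just above build the host identity used for nvme-cli style connections: nvme gen-hostnqn emits a UUID-based NQN, and the UUID suffix is reused as the host ID, which is why NVME_HOSTID below repeats the same 15939434-... value. A minimal sketch of that derivation (the parameter-expansion form is illustrative, not the exact common.sh code):

    # Derive a host NQN and a matching host ID from it.
    NVME_HOSTNQN=$(nvme gen-hostnqn)      # nqn.2014-08.org.nvmexpress:uuid:<uuid>
    NVME_HOSTID=${NVME_HOSTNQN##*:}       # keep only the <uuid> part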
00:13:44.464 00:47:56 -- nvmf/common.sh@18 -- # NVME_HOSTID=15939434-fa82-47b6-ae7c-b62ba203cee8 00:13:44.464 00:47:56 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:44.464 00:47:56 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:44.464 00:47:56 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:13:44.464 00:47:56 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:13:44.723 00:47:56 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:44.723 00:47:56 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:44.723 00:47:56 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:44.723 00:47:56 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:44.723 00:47:56 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:44.723 00:47:56 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:44.723 00:47:56 -- paths/export.sh@5 -- # export PATH 00:13:44.723 00:47:56 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:44.723 00:47:56 -- nvmf/common.sh@46 -- # : 0 00:13:44.723 00:47:56 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:13:44.723 00:47:56 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:13:44.723 00:47:56 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:13:44.723 00:47:56 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:44.723 00:47:56 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:44.723 00:47:56 -- nvmf/common.sh@32 -- # 
'[' -n '' ']' 00:13:44.723 00:47:56 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:13:44.723 00:47:56 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:13:44.723 00:47:56 -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:13:44.723 00:47:56 -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:13:44.723 00:47:56 -- target/host_management.sh@104 -- # nvmftestinit 00:13:44.723 00:47:56 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:13:44.723 00:47:56 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:44.723 00:47:56 -- nvmf/common.sh@436 -- # prepare_net_devs 00:13:44.723 00:47:56 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:13:44.723 00:47:56 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:13:44.723 00:47:56 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:44.723 00:47:56 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:44.723 00:47:56 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:44.723 00:47:56 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:13:44.723 00:47:56 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:13:44.723 00:47:56 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:13:44.723 00:47:56 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:13:44.723 00:47:56 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:13:44.723 00:47:56 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:13:44.723 00:47:56 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:44.723 00:47:56 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:44.723 00:47:56 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:13:44.723 00:47:56 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:13:44.723 00:47:56 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:13:44.723 00:47:56 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:13:44.723 00:47:56 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:13:44.723 00:47:56 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:44.723 00:47:56 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:13:44.723 00:47:56 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:13:44.723 00:47:56 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:13:44.723 00:47:56 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:13:44.723 00:47:56 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:13:44.723 00:47:57 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:13:44.723 Cannot find device "nvmf_tgt_br" 00:13:44.723 00:47:57 -- nvmf/common.sh@154 -- # true 00:13:44.723 00:47:57 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:13:44.723 Cannot find device "nvmf_tgt_br2" 00:13:44.723 00:47:57 -- nvmf/common.sh@155 -- # true 00:13:44.723 00:47:57 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:13:44.723 00:47:57 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:13:44.723 Cannot find device "nvmf_tgt_br" 00:13:44.723 00:47:57 -- nvmf/common.sh@157 -- # true 00:13:44.723 00:47:57 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:13:44.723 Cannot find device "nvmf_tgt_br2" 00:13:44.723 00:47:57 -- nvmf/common.sh@158 -- # true 00:13:44.723 00:47:57 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:13:44.723 00:47:57 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:13:44.723 00:47:57 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 
00:13:44.723 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:13:44.723 00:47:57 -- nvmf/common.sh@161 -- # true 00:13:44.723 00:47:57 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:13:44.723 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:13:44.723 00:47:57 -- nvmf/common.sh@162 -- # true 00:13:44.723 00:47:57 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:13:44.723 00:47:57 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:13:44.723 00:47:57 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:13:44.723 00:47:57 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:13:44.723 00:47:57 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:13:44.723 00:47:57 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:13:44.723 00:47:57 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:13:44.723 00:47:57 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:13:44.982 00:47:57 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:13:44.982 00:47:57 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:13:44.982 00:47:57 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:13:44.982 00:47:57 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:13:44.982 00:47:57 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:13:44.982 00:47:57 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:13:44.982 00:47:57 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:13:44.982 00:47:57 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:13:44.982 00:47:57 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:13:44.982 00:47:57 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:13:44.982 00:47:57 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:13:44.982 00:47:57 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:13:44.982 00:47:57 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:13:44.982 00:47:57 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:13:44.982 00:47:57 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:13:44.982 00:47:57 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:13:44.982 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:44.982 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.053 ms 00:13:44.982 00:13:44.982 --- 10.0.0.2 ping statistics --- 00:13:44.982 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:44.982 rtt min/avg/max/mdev = 0.053/0.053/0.053/0.000 ms 00:13:44.982 00:47:57 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:13:44.982 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:13:44.982 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.093 ms 00:13:44.982 00:13:44.982 --- 10.0.0.3 ping statistics --- 00:13:44.982 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:44.982 rtt min/avg/max/mdev = 0.093/0.093/0.093/0.000 ms 00:13:44.982 00:47:57 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:13:44.982 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:13:44.982 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.031 ms 00:13:44.982 00:13:44.982 --- 10.0.0.1 ping statistics --- 00:13:44.982 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:44.982 rtt min/avg/max/mdev = 0.031/0.031/0.031/0.000 ms 00:13:44.982 00:47:57 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:44.982 00:47:57 -- nvmf/common.sh@421 -- # return 0 00:13:44.982 00:47:57 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:13:44.982 00:47:57 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:44.982 00:47:57 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:13:44.982 00:47:57 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:13:44.982 00:47:57 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:44.982 00:47:57 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:13:44.982 00:47:57 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:13:44.983 00:47:57 -- target/host_management.sh@106 -- # run_test nvmf_host_management nvmf_host_management 00:13:44.983 00:47:57 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:13:44.983 00:47:57 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:13:44.983 00:47:57 -- common/autotest_common.sh@10 -- # set +x 00:13:44.983 ************************************ 00:13:44.983 START TEST nvmf_host_management 00:13:44.983 ************************************ 00:13:44.983 00:47:57 -- common/autotest_common.sh@1114 -- # nvmf_host_management 00:13:44.983 00:47:57 -- target/host_management.sh@69 -- # starttarget 00:13:44.983 00:47:57 -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:13:44.983 00:47:57 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:13:44.983 00:47:57 -- common/autotest_common.sh@722 -- # xtrace_disable 00:13:44.983 00:47:57 -- common/autotest_common.sh@10 -- # set +x 00:13:44.983 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:44.983 00:47:57 -- nvmf/common.sh@469 -- # nvmfpid=82879 00:13:44.983 00:47:57 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:13:44.983 00:47:57 -- nvmf/common.sh@470 -- # waitforlisten 82879 00:13:44.983 00:47:57 -- common/autotest_common.sh@829 -- # '[' -z 82879 ']' 00:13:44.983 00:47:57 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:44.983 00:47:57 -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:44.983 00:47:57 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:44.983 00:47:57 -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:44.983 00:47:57 -- common/autotest_common.sh@10 -- # set +x 00:13:44.983 [2024-12-03 00:47:57.439792] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:13:44.983 [2024-12-03 00:47:57.439879] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:45.241 [2024-12-03 00:47:57.584968] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:45.241 [2024-12-03 00:47:57.693590] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:13:45.241 [2024-12-03 00:47:57.693774] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:13:45.241 [2024-12-03 00:47:57.693793] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:45.241 [2024-12-03 00:47:57.693804] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:45.241 [2024-12-03 00:47:57.694170] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:13:45.241 [2024-12-03 00:47:57.694527] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:13:45.241 [2024-12-03 00:47:57.694670] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:13:45.241 [2024-12-03 00:47:57.694756] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:13:46.177 00:47:58 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:46.177 00:47:58 -- common/autotest_common.sh@862 -- # return 0 00:13:46.177 00:47:58 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:13:46.177 00:47:58 -- common/autotest_common.sh@728 -- # xtrace_disable 00:13:46.177 00:47:58 -- common/autotest_common.sh@10 -- # set +x 00:13:46.177 00:47:58 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:46.177 00:47:58 -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:13:46.177 00:47:58 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:46.177 00:47:58 -- common/autotest_common.sh@10 -- # set +x 00:13:46.177 [2024-12-03 00:47:58.470017] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:46.177 00:47:58 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:46.177 00:47:58 -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:13:46.177 00:47:58 -- common/autotest_common.sh@722 -- # xtrace_disable 00:13:46.177 00:47:58 -- common/autotest_common.sh@10 -- # set +x 00:13:46.177 00:47:58 -- target/host_management.sh@22 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpcs.txt 00:13:46.177 00:47:58 -- target/host_management.sh@23 -- # cat 00:13:46.177 00:47:58 -- target/host_management.sh@30 -- # rpc_cmd 00:13:46.177 00:47:58 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:46.177 00:47:58 -- common/autotest_common.sh@10 -- # set +x 00:13:46.177 Malloc0 00:13:46.177 [2024-12-03 00:47:58.557976] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:46.177 00:47:58 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:46.177 00:47:58 -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:13:46.177 00:47:58 -- common/autotest_common.sh@728 -- # xtrace_disable 00:13:46.177 00:47:58 -- common/autotest_common.sh@10 -- # set +x 00:13:46.177 00:47:58 -- target/host_management.sh@73 -- # perfpid=82951 00:13:46.177 00:47:58 -- target/host_management.sh@74 -- # waitforlisten 82951 /var/tmp/bdevperf.sock 00:13:46.177 00:47:58 -- common/autotest_common.sh@829 -- # '[' -z 82951 ']' 00:13:46.177 00:47:58 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:13:46.177 00:47:58 -- target/host_management.sh@72 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:13:46.177 00:47:58 -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:46.177 00:47:58 -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:13:46.177 00:47:58 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/bdevperf.sock...' 00:13:46.177 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:13:46.177 00:47:58 -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:46.177 00:47:58 -- nvmf/common.sh@520 -- # config=() 00:13:46.177 00:47:58 -- common/autotest_common.sh@10 -- # set +x 00:13:46.177 00:47:58 -- nvmf/common.sh@520 -- # local subsystem config 00:13:46.177 00:47:58 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:13:46.177 00:47:58 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:13:46.177 { 00:13:46.177 "params": { 00:13:46.177 "name": "Nvme$subsystem", 00:13:46.177 "trtype": "$TEST_TRANSPORT", 00:13:46.177 "traddr": "$NVMF_FIRST_TARGET_IP", 00:13:46.177 "adrfam": "ipv4", 00:13:46.177 "trsvcid": "$NVMF_PORT", 00:13:46.177 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:13:46.177 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:13:46.177 "hdgst": ${hdgst:-false}, 00:13:46.177 "ddgst": ${ddgst:-false} 00:13:46.177 }, 00:13:46.177 "method": "bdev_nvme_attach_controller" 00:13:46.177 } 00:13:46.177 EOF 00:13:46.177 )") 00:13:46.177 00:47:58 -- nvmf/common.sh@542 -- # cat 00:13:46.177 00:47:58 -- nvmf/common.sh@544 -- # jq . 00:13:46.177 00:47:58 -- nvmf/common.sh@545 -- # IFS=, 00:13:46.177 00:47:58 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:13:46.177 "params": { 00:13:46.177 "name": "Nvme0", 00:13:46.177 "trtype": "tcp", 00:13:46.177 "traddr": "10.0.0.2", 00:13:46.177 "adrfam": "ipv4", 00:13:46.177 "trsvcid": "4420", 00:13:46.177 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:13:46.177 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:13:46.177 "hdgst": false, 00:13:46.177 "ddgst": false 00:13:46.177 }, 00:13:46.177 "method": "bdev_nvme_attach_controller" 00:13:46.177 }' 00:13:46.177 [2024-12-03 00:47:58.672255] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:13:46.177 [2024-12-03 00:47:58.672918] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82951 ] 00:13:46.436 [2024-12-03 00:47:58.818557] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:46.436 [2024-12-03 00:47:58.904405] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:46.695 Running I/O for 10 seconds... 
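The JSON fragment printed just above is what gen_nvmf_target_json produces for subsystem 0: a single bdev_nvme_attach_controller call that points bdevperf at 10.0.0.2:4420 / nqn.2016-06.io.spdk:cnode0 as host0, so the target's namespace appears as bdev Nvme0n1 for the verify workload. The /dev/fd/63 in the traced bdevperf command line is consistent with feeding that config through process substitution; a sketch of the invocation under that assumption:

    # Sketch; the <(...) wiring is inferred from the /dev/fd/63 seen in the trace.
    /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
        -r /var/tmp/bdevperf.sock --json <(gen_nvmf_target_json 0) \
        -q 64 -o 65536 -w verify -t 10 &
    perfpid=$!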
00:13:47.264 00:47:59 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:47.264 00:47:59 -- common/autotest_common.sh@862 -- # return 0 00:13:47.264 00:47:59 -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:13:47.264 00:47:59 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:47.264 00:47:59 -- common/autotest_common.sh@10 -- # set +x 00:13:47.264 00:47:59 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:47.264 00:47:59 -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:13:47.264 00:47:59 -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:13:47.264 00:47:59 -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:13:47.264 00:47:59 -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:13:47.264 00:47:59 -- target/host_management.sh@52 -- # local ret=1 00:13:47.264 00:47:59 -- target/host_management.sh@53 -- # local i 00:13:47.264 00:47:59 -- target/host_management.sh@54 -- # (( i = 10 )) 00:13:47.264 00:47:59 -- target/host_management.sh@54 -- # (( i != 0 )) 00:13:47.264 00:47:59 -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:13:47.264 00:47:59 -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:13:47.264 00:47:59 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:47.264 00:47:59 -- common/autotest_common.sh@10 -- # set +x 00:13:47.264 00:47:59 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:47.264 00:47:59 -- target/host_management.sh@55 -- # read_io_count=1907 00:13:47.264 00:47:59 -- target/host_management.sh@58 -- # '[' 1907 -ge 100 ']' 00:13:47.264 00:47:59 -- target/host_management.sh@59 -- # ret=0 00:13:47.264 00:47:59 -- target/host_management.sh@60 -- # break 00:13:47.264 00:47:59 -- target/host_management.sh@64 -- # return 0 00:13:47.264 00:47:59 -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:13:47.265 00:47:59 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:47.265 00:47:59 -- common/autotest_common.sh@10 -- # set +x 00:13:47.265 [2024-12-03 00:47:59.684268] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc57e70 is same with the state(5) to be set 00:13:47.265 [2024-12-03 00:47:59.684348] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc57e70 is same with the state(5) to be set 00:13:47.265 [2024-12-03 00:47:59.684376] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc57e70 is same with the state(5) to be set 00:13:47.265 [2024-12-03 00:47:59.684384] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc57e70 is same with the state(5) to be set 00:13:47.265 [2024-12-03 00:47:59.684393] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc57e70 is same with the state(5) to be set 00:13:47.265 [2024-12-03 00:47:59.684400] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc57e70 is same with the state(5) to be set 00:13:47.265 [2024-12-03 00:47:59.684411] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc57e70 is same with the state(5) to be set 00:13:47.265 [2024-12-03 00:47:59.684421] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc57e70 is same with the state(5) to 
be set 00:13:47.265 [2024-12-03 00:47:59.684472] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc57e70 is same with the state(5) to be set 00:13:47.265 [2024-12-03 00:47:59.684481] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc57e70 is same with the state(5) to be set 00:13:47.265 [2024-12-03 00:47:59.684489] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc57e70 is same with the state(5) to be set 00:13:47.265 [2024-12-03 00:47:59.684497] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc57e70 is same with the state(5) to be set 00:13:47.265 [2024-12-03 00:47:59.684504] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc57e70 is same with the state(5) to be set 00:13:47.265 [2024-12-03 00:47:59.684512] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc57e70 is same with the state(5) to be set 00:13:47.265 [2024-12-03 00:47:59.684520] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc57e70 is same with the state(5) to be set 00:13:47.265 [2024-12-03 00:47:59.684529] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc57e70 is same with the state(5) to be set 00:13:47.265 [2024-12-03 00:47:59.684538] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc57e70 is same with the state(5) to be set 00:13:47.265 [2024-12-03 00:47:59.684578] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc57e70 is same with the state(5) to be set 00:13:47.265 [2024-12-03 00:47:59.684585] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc57e70 is same with the state(5) to be set 00:13:47.265 [2024-12-03 00:47:59.684593] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc57e70 is same with the state(5) to be set 00:13:47.265 [2024-12-03 00:47:59.684601] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc57e70 is same with the state(5) to be set 00:13:47.265 [2024-12-03 00:47:59.684608] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc57e70 is same with the state(5) to be set 00:13:47.265 [2024-12-03 00:47:59.684616] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc57e70 is same with the state(5) to be set 00:13:47.265 [2024-12-03 00:47:59.684624] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc57e70 is same with the state(5) to be set 00:13:47.265 [2024-12-03 00:47:59.684632] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc57e70 is same with the state(5) to be set 00:13:47.265 [2024-12-03 00:47:59.684640] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc57e70 is same with the state(5) to be set 00:13:47.265 [2024-12-03 00:47:59.684647] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc57e70 is same with the state(5) to be set 00:13:47.265 [2024-12-03 00:47:59.684655] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc57e70 is same with the state(5) to be set 00:13:47.265 [2024-12-03 00:47:59.684662] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc57e70 is same with the state(5) to be set 00:13:47.265 [2024-12-03 00:47:59.684670] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of 
tqpair=0xc57e70 is same with the state(5) to be set 00:13:47.265 [2024-12-03 00:47:59.684677] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc57e70 is same with the state(5) to be set 00:13:47.265 [2024-12-03 00:47:59.684692] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc57e70 is same with the state(5) to be set 00:13:47.265 [2024-12-03 00:47:59.684710] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc57e70 is same with the state(5) to be set 00:13:47.265 [2024-12-03 00:47:59.684718] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc57e70 is same with the state(5) to be set 00:13:47.265 [2024-12-03 00:47:59.684726] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc57e70 is same with the state(5) to be set 00:13:47.265 [2024-12-03 00:47:59.684733] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc57e70 is same with the state(5) to be set 00:13:47.265 [2024-12-03 00:47:59.684741] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc57e70 is same with the state(5) to be set 00:13:47.265 [2024-12-03 00:47:59.684763] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc57e70 is same with the state(5) to be set 00:13:47.265 [2024-12-03 00:47:59.684771] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc57e70 is same with the state(5) to be set 00:13:47.265 [2024-12-03 00:47:59.684778] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc57e70 is same with the state(5) to be set 00:13:47.265 [2024-12-03 00:47:59.684786] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc57e70 is same with the state(5) to be set 00:13:47.265 [2024-12-03 00:47:59.685940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:2816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:47.265 [2024-12-03 00:47:59.685997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:47.265 [2024-12-03 00:47:59.686021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:3456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:47.265 [2024-12-03 00:47:59.686033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:47.265 [2024-12-03 00:47:59.686046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:47.265 [2024-12-03 00:47:59.686055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:47.265 [2024-12-03 00:47:59.686067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:129024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:47.265 [2024-12-03 00:47:59.686076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:47.265 [2024-12-03 00:47:59.686087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:3968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:47.265 [2024-12-03 00:47:59.686097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:47.265 [2024-12-03 00:47:59.686108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:4096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:47.265 [2024-12-03 00:47:59.686117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:47.265 [2024-12-03 00:47:59.686138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:4224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:47.265 [2024-12-03 00:47:59.686147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:47.265 [2024-12-03 00:47:59.686194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:4352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:47.265 [2024-12-03 00:47:59.686207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:47.265 [2024-12-03 00:47:59.686218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:4480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:47.265 [2024-12-03 00:47:59.686229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:47.265 [2024-12-03 00:47:59.686240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:4608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:47.265 [2024-12-03 00:47:59.686249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:47.265 [2024-12-03 00:47:59.686261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:4736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:47.265 [2024-12-03 00:47:59.686270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:47.265 [2024-12-03 00:47:59.686281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:4864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:47.265 [2024-12-03 00:47:59.686291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:47.265 [2024-12-03 00:47:59.686302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:4992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:47.265 [2024-12-03 00:47:59.686312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:47.265 [2024-12-03 00:47:59.686333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:5120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:47.265 [2024-12-03 00:47:59.686344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:47.265 [2024-12-03 00:47:59.686356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:5248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:47.265 [2024-12-03 00:47:59.686365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:47.265 [2024-12-03 00:47:59.686377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:129408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:47.265 [2024-12-03 00:47:59.686388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:47.265 [2024-12-03 00:47:59.686400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:129664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:47.265 [2024-12-03 00:47:59.686409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:47.265 [2024-12-03 00:47:59.686421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:130176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:47.265 [2024-12-03 00:47:59.686447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:47.265 [2024-12-03 00:47:59.686461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:130560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:47.265 [2024-12-03 00:47:59.686472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:47.265 [2024-12-03 00:47:59.686484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:5376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:47.266 [2024-12-03 00:47:59.686494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:47.266 [2024-12-03 00:47:59.686505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:5504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:47.266 [2024-12-03 00:47:59.686515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:47.266 [2024-12-03 00:47:59.686527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:5632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:47.266 [2024-12-03 00:47:59.686536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:47.266 [2024-12-03 00:47:59.686548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:5760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:47.266 [2024-12-03 00:47:59.686557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:47.266 [2024-12-03 00:47:59.686579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:5888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:47.266 [2024-12-03 00:47:59.686590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:47.266 [2024-12-03 00:47:59.686601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:6016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:47.266 [2024-12-03 00:47:59.686612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 
dnr:0 00:13:47.266 [2024-12-03 00:47:59.686624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:6144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:47.266 [2024-12-03 00:47:59.686634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:47.266 [2024-12-03 00:47:59.686645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:47.266 [2024-12-03 00:47:59.686655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:47.266 [2024-12-03 00:47:59.686667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:6272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:47.266 [2024-12-03 00:47:59.686677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:47.266 [2024-12-03 00:47:59.686688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:6400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:47.266 [2024-12-03 00:47:59.686697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:47.266 [2024-12-03 00:47:59.686728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:6528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:47.266 [2024-12-03 00:47:59.686738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:47.266 [2024-12-03 00:47:59.686757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:6656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:47.266 [2024-12-03 00:47:59.686766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:47.266 [2024-12-03 00:47:59.686777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:6784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:47.266 [2024-12-03 00:47:59.686787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:47.266 [2024-12-03 00:47:59.686798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:47.266 [2024-12-03 00:47:59.686807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:47.266 [2024-12-03 00:47:59.686818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:47.266 [2024-12-03 00:47:59.686827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:47.266 [2024-12-03 00:47:59.686837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:47.266 [2024-12-03 00:47:59.686846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:47.266 [2024-12-03 
00:47:59.686856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:47.266 [2024-12-03 00:47:59.686865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:47.266 [2024-12-03 00:47:59.686876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:1024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:47.266 [2024-12-03 00:47:59.686885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:47.266 [2024-12-03 00:47:59.686896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:6912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:47.266 [2024-12-03 00:47:59.686905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:47.266 [2024-12-03 00:47:59.686917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:7040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:47.266 [2024-12-03 00:47:59.686926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:47.266 [2024-12-03 00:47:59.686937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:7168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:47.266 [2024-12-03 00:47:59.686946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:47.266 [2024-12-03 00:47:59.686958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:7296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:47.266 [2024-12-03 00:47:59.686971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:47.266 [2024-12-03 00:47:59.686983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:7424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:47.266 [2024-12-03 00:47:59.686992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:47.266 [2024-12-03 00:47:59.687003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:7552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:47.266 [2024-12-03 00:47:59.687012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:47.266 [2024-12-03 00:47:59.687022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:47.266 [2024-12-03 00:47:59.687031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:47.266 [2024-12-03 00:47:59.687042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:7808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:47.266 [2024-12-03 00:47:59.687051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:47.266 [2024-12-03 00:47:59.687064] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:7936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:47.266 [2024-12-03 00:47:59.687073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:47.266 [2024-12-03 00:47:59.687084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:8064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:47.266 [2024-12-03 00:47:59.687094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:47.266 [2024-12-03 00:47:59.687105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:1536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:47.266 [2024-12-03 00:47:59.687114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:47.266 [2024-12-03 00:47:59.687125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:1664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:47.266 [2024-12-03 00:47:59.687133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:47.266 [2024-12-03 00:47:59.687144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:1792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:47.266 [2024-12-03 00:47:59.687153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:47.266 [2024-12-03 00:47:59.687163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:2048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:47.266 [2024-12-03 00:47:59.687172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:47.266 [2024-12-03 00:47:59.687183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:8192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:47.266 [2024-12-03 00:47:59.687193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:47.266 [2024-12-03 00:47:59.687204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:8320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:47.266 [2024-12-03 00:47:59.687213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:47.266 [2024-12-03 00:47:59.687224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:8448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:47.266 [2024-12-03 00:47:59.687233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:47.266 [2024-12-03 00:47:59.687244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:8576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:47.266 [2024-12-03 00:47:59.687253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:47.266 [2024-12-03 00:47:59.687263] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:8704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:47.266 [2024-12-03 00:47:59.687272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:47.266 [2024-12-03 00:47:59.687284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:8832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:47.266 [2024-12-03 00:47:59.687295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:47.266 [2024-12-03 00:47:59.687306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:8960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:47.266 [2024-12-03 00:47:59.687316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:47.266 [2024-12-03 00:47:59.687327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:2304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:47.266 [2024-12-03 00:47:59.687336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:47.267 [2024-12-03 00:47:59.687347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:9088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:47.267 [2024-12-03 00:47:59.687356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:47.267 [2024-12-03 00:47:59.687367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:9216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:47.267 [2024-12-03 00:47:59.687376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:47.267 [2024-12-03 00:47:59.687390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:9344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:47.267 [2024-12-03 00:47:59.687399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:47.267 [2024-12-03 00:47:59.687409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:9472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:47.267 [2024-12-03 00:47:59.687419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:47.267 [2024-12-03 00:47:59.687474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:2560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:47.267 [2024-12-03 00:47:59.687485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:47.267 [2024-12-03 00:47:59.687597] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0xa9adc0 was disconnected and freed. reset controller. 
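The block above is the qpair drain that accompanies the controller reset: every command still outstanding on submission queue 1 is completed with ABORTED - SQ DELETION before the qpair (0xa9adc0) is disconnected and freed. A minimal sketch for summarizing such a dump offline, assuming the console output has been saved to a file named build.log (the file name is hypothetical; only grep, awk, sort, uniq and wc are used):

# total number of completions aborted by the SQ deletion
grep -o 'ABORTED - SQ DELETION' build.log | wc -l
# breakdown of the aborted commands by opcode (READ vs WRITE) on qpair 1
grep -Eo '(READ|WRITE) sqid:1' build.log | awk '{print $1}' | sort | uniq -c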
00:13:47.267 [2024-12-03 00:47:59.688630] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:13:47.267 task offset: 2816 on job bdev=Nvme0n1 fails 00:13:47.267 00:13:47.267 Latency(us) 00:13:47.267 [2024-12-03T00:47:59.782Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:47.267 [2024-12-03T00:47:59.782Z] Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:13:47.267 [2024-12-03T00:47:59.782Z] Job: Nvme0n1 ended in about 0.58 seconds with error 00:13:47.267 Verification LBA range: start 0x0 length 0x400 00:13:47.267 Nvme0n1 : 0.58 3579.23 223.70 111.25 0.00 16994.83 2025.66 29908.25 00:13:47.267 [2024-12-03T00:47:59.782Z] =================================================================================================================== 00:13:47.267 [2024-12-03T00:47:59.782Z] Total : 3579.23 223.70 111.25 0.00 16994.83 2025.66 29908.25 00:13:47.267 [2024-12-03 00:47:59.690471] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:13:47.267 [2024-12-03 00:47:59.690500] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9f6a70 (9): Bad file descriptor 00:13:47.267 00:47:59 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:47.267 00:47:59 -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:13:47.267 00:47:59 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:47.267 00:47:59 -- common/autotest_common.sh@10 -- # set +x 00:13:47.267 00:47:59 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:47.267 00:47:59 -- target/host_management.sh@87 -- # sleep 1 00:13:47.267 [2024-12-03 00:47:59.700856] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:13:48.202 00:48:00 -- target/host_management.sh@91 -- # kill -9 82951 00:13:48.202 /home/vagrant/spdk_repo/spdk/test/nvmf/target/host_management.sh: line 91: kill: (82951) - No such process 00:13:48.202 00:48:00 -- target/host_management.sh@91 -- # true 00:13:48.202 00:48:00 -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:13:48.202 00:48:00 -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:13:48.202 00:48:00 -- target/host_management.sh@100 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:13:48.202 00:48:00 -- nvmf/common.sh@520 -- # config=() 00:13:48.202 00:48:00 -- nvmf/common.sh@520 -- # local subsystem config 00:13:48.202 00:48:00 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:13:48.202 00:48:00 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:13:48.202 { 00:13:48.202 "params": { 00:13:48.202 "name": "Nvme$subsystem", 00:13:48.202 "trtype": "$TEST_TRANSPORT", 00:13:48.202 "traddr": "$NVMF_FIRST_TARGET_IP", 00:13:48.202 "adrfam": "ipv4", 00:13:48.202 "trsvcid": "$NVMF_PORT", 00:13:48.202 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:13:48.202 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:13:48.202 "hdgst": ${hdgst:-false}, 00:13:48.202 "ddgst": ${ddgst:-false} 00:13:48.202 }, 00:13:48.202 "method": "bdev_nvme_attach_controller" 00:13:48.202 } 00:13:48.202 EOF 00:13:48.202 )") 00:13:48.202 00:48:00 -- nvmf/common.sh@542 -- # cat 00:13:48.462 00:48:00 -- nvmf/common.sh@544 -- # jq . 
00:13:48.462 00:48:00 -- nvmf/common.sh@545 -- # IFS=, 00:13:48.462 00:48:00 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:13:48.462 "params": { 00:13:48.462 "name": "Nvme0", 00:13:48.462 "trtype": "tcp", 00:13:48.462 "traddr": "10.0.0.2", 00:13:48.462 "adrfam": "ipv4", 00:13:48.462 "trsvcid": "4420", 00:13:48.462 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:13:48.462 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:13:48.462 "hdgst": false, 00:13:48.462 "ddgst": false 00:13:48.462 }, 00:13:48.462 "method": "bdev_nvme_attach_controller" 00:13:48.462 }' 00:13:48.462 [2024-12-03 00:48:00.764194] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:13:48.462 [2024-12-03 00:48:00.764487] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83002 ] 00:13:48.462 [2024-12-03 00:48:00.908142] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:48.721 [2024-12-03 00:48:00.982264] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:48.721 Running I/O for 1 seconds... 00:13:49.657 00:13:49.657 Latency(us) 00:13:49.657 [2024-12-03T00:48:02.172Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:49.657 [2024-12-03T00:48:02.172Z] Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:13:49.657 Verification LBA range: start 0x0 length 0x400 00:13:49.657 Nvme0n1 : 1.01 3668.01 229.25 0.00 0.00 17166.93 1020.28 22401.40 00:13:49.657 [2024-12-03T00:48:02.172Z] =================================================================================================================== 00:13:49.657 [2024-12-03T00:48:02.172Z] Total : 3668.01 229.25 0.00 0.00 17166.93 1020.28 22401.40 00:13:49.916 00:48:02 -- target/host_management.sh@101 -- # stoptarget 00:13:49.916 00:48:02 -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state 00:13:49.916 00:48:02 -- target/host_management.sh@37 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevperf.conf 00:13:49.916 00:48:02 -- target/host_management.sh@38 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpcs.txt 00:13:49.916 00:48:02 -- target/host_management.sh@40 -- # nvmftestfini 00:13:49.916 00:48:02 -- nvmf/common.sh@476 -- # nvmfcleanup 00:13:49.916 00:48:02 -- nvmf/common.sh@116 -- # sync 00:13:49.916 00:48:02 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:13:49.916 00:48:02 -- nvmf/common.sh@119 -- # set +e 00:13:49.916 00:48:02 -- nvmf/common.sh@120 -- # for i in {1..20} 00:13:49.916 00:48:02 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:13:49.916 rmmod nvme_tcp 00:13:50.176 rmmod nvme_fabrics 00:13:50.176 rmmod nvme_keyring 00:13:50.176 00:48:02 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:13:50.176 00:48:02 -- nvmf/common.sh@123 -- # set -e 00:13:50.176 00:48:02 -- nvmf/common.sh@124 -- # return 0 00:13:50.176 00:48:02 -- nvmf/common.sh@477 -- # '[' -n 82879 ']' 00:13:50.176 00:48:02 -- nvmf/common.sh@478 -- # killprocess 82879 00:13:50.176 00:48:02 -- common/autotest_common.sh@936 -- # '[' -z 82879 ']' 00:13:50.176 00:48:02 -- common/autotest_common.sh@940 -- # kill -0 82879 00:13:50.176 00:48:02 -- common/autotest_common.sh@941 -- # uname 00:13:50.176 00:48:02 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:13:50.176 00:48:02 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 82879 00:13:50.176 
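For reference, the --json /dev/fd/62 argument above hands bdevperf the bdev configuration assembled by gen_nvmf_target_json. A standalone sketch of an equivalent run, writing the config to a regular file instead; the addresses, NQNs and I/O options are the ones from this run, the path /tmp/nvme0.json is hypothetical, and the outer "subsystems"/"bdev" wrapper is assumed from SPDK's JSON config layout rather than shown in the trace:

cat > /tmp/nvme0.json <<'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "method": "bdev_nvme_attach_controller",
          "params": {
            "name": "Nvme0",
            "trtype": "tcp",
            "traddr": "10.0.0.2",
            "adrfam": "ipv4",
            "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode0",
            "hostnqn": "nqn.2016-06.io.spdk:host0",
            "hdgst": false,
            "ddgst": false
          }
        }
      ]
    }
  ]
}
EOF
/home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /tmp/nvme0.json -q 64 -o 65536 -w verify -t 1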
killing process with pid 82879 00:13:50.176 00:48:02 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:13:50.176 00:48:02 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:13:50.176 00:48:02 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 82879' 00:13:50.176 00:48:02 -- common/autotest_common.sh@955 -- # kill 82879 00:13:50.176 00:48:02 -- common/autotest_common.sh@960 -- # wait 82879 00:13:50.436 [2024-12-03 00:48:02.786090] app.c: 605:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:13:50.436 00:48:02 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:13:50.436 00:48:02 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:13:50.436 00:48:02 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:13:50.436 00:48:02 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:13:50.436 00:48:02 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:13:50.436 00:48:02 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:50.436 00:48:02 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:50.436 00:48:02 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:50.436 00:48:02 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:13:50.436 00:13:50.436 real 0m5.468s 00:13:50.436 user 0m22.547s 00:13:50.436 sys 0m1.442s 00:13:50.436 00:48:02 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:13:50.436 ************************************ 00:13:50.436 00:48:02 -- common/autotest_common.sh@10 -- # set +x 00:13:50.436 END TEST nvmf_host_management 00:13:50.436 ************************************ 00:13:50.436 00:48:02 -- target/host_management.sh@108 -- # trap - SIGINT SIGTERM EXIT 00:13:50.436 00:13:50.436 real 0m6.116s 00:13:50.436 user 0m22.759s 00:13:50.436 sys 0m1.716s 00:13:50.436 ************************************ 00:13:50.436 END TEST nvmf_host_management 00:13:50.436 ************************************ 00:13:50.436 00:48:02 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:13:50.436 00:48:02 -- common/autotest_common.sh@10 -- # set +x 00:13:50.436 00:48:02 -- nvmf/nvmf.sh@47 -- # run_test nvmf_lvol /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:13:50.436 00:48:02 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:13:50.436 00:48:02 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:13:50.436 00:48:02 -- common/autotest_common.sh@10 -- # set +x 00:13:50.436 ************************************ 00:13:50.436 START TEST nvmf_lvol 00:13:50.436 ************************************ 00:13:50.436 00:48:02 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:13:50.695 * Looking for test storage... 
00:13:50.695 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:13:50.695 00:48:03 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:13:50.695 00:48:03 -- common/autotest_common.sh@1690 -- # lcov --version 00:13:50.695 00:48:03 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:13:50.695 00:48:03 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:13:50.695 00:48:03 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:13:50.695 00:48:03 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:13:50.695 00:48:03 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:13:50.695 00:48:03 -- scripts/common.sh@335 -- # IFS=.-: 00:13:50.695 00:48:03 -- scripts/common.sh@335 -- # read -ra ver1 00:13:50.695 00:48:03 -- scripts/common.sh@336 -- # IFS=.-: 00:13:50.695 00:48:03 -- scripts/common.sh@336 -- # read -ra ver2 00:13:50.695 00:48:03 -- scripts/common.sh@337 -- # local 'op=<' 00:13:50.695 00:48:03 -- scripts/common.sh@339 -- # ver1_l=2 00:13:50.695 00:48:03 -- scripts/common.sh@340 -- # ver2_l=1 00:13:50.695 00:48:03 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:13:50.695 00:48:03 -- scripts/common.sh@343 -- # case "$op" in 00:13:50.695 00:48:03 -- scripts/common.sh@344 -- # : 1 00:13:50.695 00:48:03 -- scripts/common.sh@363 -- # (( v = 0 )) 00:13:50.695 00:48:03 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:13:50.695 00:48:03 -- scripts/common.sh@364 -- # decimal 1 00:13:50.695 00:48:03 -- scripts/common.sh@352 -- # local d=1 00:13:50.695 00:48:03 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:50.695 00:48:03 -- scripts/common.sh@354 -- # echo 1 00:13:50.695 00:48:03 -- scripts/common.sh@364 -- # ver1[v]=1 00:13:50.695 00:48:03 -- scripts/common.sh@365 -- # decimal 2 00:13:50.695 00:48:03 -- scripts/common.sh@352 -- # local d=2 00:13:50.695 00:48:03 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:50.695 00:48:03 -- scripts/common.sh@354 -- # echo 2 00:13:50.695 00:48:03 -- scripts/common.sh@365 -- # ver2[v]=2 00:13:50.695 00:48:03 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:13:50.695 00:48:03 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:13:50.695 00:48:03 -- scripts/common.sh@367 -- # return 0 00:13:50.695 00:48:03 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:50.695 00:48:03 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:13:50.695 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:50.695 --rc genhtml_branch_coverage=1 00:13:50.695 --rc genhtml_function_coverage=1 00:13:50.695 --rc genhtml_legend=1 00:13:50.695 --rc geninfo_all_blocks=1 00:13:50.695 --rc geninfo_unexecuted_blocks=1 00:13:50.695 00:13:50.695 ' 00:13:50.695 00:48:03 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:13:50.695 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:50.695 --rc genhtml_branch_coverage=1 00:13:50.695 --rc genhtml_function_coverage=1 00:13:50.695 --rc genhtml_legend=1 00:13:50.695 --rc geninfo_all_blocks=1 00:13:50.695 --rc geninfo_unexecuted_blocks=1 00:13:50.695 00:13:50.695 ' 00:13:50.695 00:48:03 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:13:50.695 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:50.695 --rc genhtml_branch_coverage=1 00:13:50.695 --rc genhtml_function_coverage=1 00:13:50.695 --rc genhtml_legend=1 00:13:50.695 --rc geninfo_all_blocks=1 00:13:50.695 --rc geninfo_unexecuted_blocks=1 00:13:50.695 00:13:50.695 ' 00:13:50.695 
00:48:03 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:13:50.695 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:50.696 --rc genhtml_branch_coverage=1 00:13:50.696 --rc genhtml_function_coverage=1 00:13:50.696 --rc genhtml_legend=1 00:13:50.696 --rc geninfo_all_blocks=1 00:13:50.696 --rc geninfo_unexecuted_blocks=1 00:13:50.696 00:13:50.696 ' 00:13:50.696 00:48:03 -- target/nvmf_lvol.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:13:50.696 00:48:03 -- nvmf/common.sh@7 -- # uname -s 00:13:50.696 00:48:03 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:50.696 00:48:03 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:50.696 00:48:03 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:50.696 00:48:03 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:50.696 00:48:03 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:50.696 00:48:03 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:50.696 00:48:03 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:50.696 00:48:03 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:50.696 00:48:03 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:50.696 00:48:03 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:50.696 00:48:03 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:15939434-fa82-47b6-ae7c-b62ba203cee8 00:13:50.696 00:48:03 -- nvmf/common.sh@18 -- # NVME_HOSTID=15939434-fa82-47b6-ae7c-b62ba203cee8 00:13:50.696 00:48:03 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:50.696 00:48:03 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:50.696 00:48:03 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:13:50.696 00:48:03 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:13:50.696 00:48:03 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:50.696 00:48:03 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:50.696 00:48:03 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:50.696 00:48:03 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:50.696 00:48:03 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:50.696 00:48:03 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:50.696 00:48:03 -- paths/export.sh@5 -- # export PATH 00:13:50.696 00:48:03 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:50.696 00:48:03 -- nvmf/common.sh@46 -- # : 0 00:13:50.696 00:48:03 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:13:50.696 00:48:03 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:13:50.696 00:48:03 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:13:50.696 00:48:03 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:50.696 00:48:03 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:50.696 00:48:03 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:13:50.696 00:48:03 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:13:50.696 00:48:03 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:13:50.696 00:48:03 -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:13:50.696 00:48:03 -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:13:50.696 00:48:03 -- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20 00:13:50.696 00:48:03 -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:13:50.696 00:48:03 -- target/nvmf_lvol.sh@16 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:13:50.696 00:48:03 -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:13:50.696 00:48:03 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:13:50.696 00:48:03 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:50.696 00:48:03 -- nvmf/common.sh@436 -- # prepare_net_devs 00:13:50.696 00:48:03 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:13:50.696 00:48:03 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:13:50.696 00:48:03 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:50.696 00:48:03 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:50.696 00:48:03 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:50.696 00:48:03 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:13:50.696 00:48:03 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:13:50.696 00:48:03 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:13:50.696 00:48:03 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:13:50.696 00:48:03 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:13:50.696 00:48:03 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:13:50.696 00:48:03 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:50.696 00:48:03 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:50.696 00:48:03 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:13:50.696 00:48:03 -- 
nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:13:50.696 00:48:03 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:13:50.696 00:48:03 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:13:50.696 00:48:03 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:13:50.696 00:48:03 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:50.696 00:48:03 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:13:50.696 00:48:03 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:13:50.696 00:48:03 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:13:50.696 00:48:03 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:13:50.696 00:48:03 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:13:50.696 00:48:03 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:13:50.696 Cannot find device "nvmf_tgt_br" 00:13:50.696 00:48:03 -- nvmf/common.sh@154 -- # true 00:13:50.696 00:48:03 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:13:50.696 Cannot find device "nvmf_tgt_br2" 00:13:50.696 00:48:03 -- nvmf/common.sh@155 -- # true 00:13:50.696 00:48:03 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:13:50.696 00:48:03 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:13:50.696 Cannot find device "nvmf_tgt_br" 00:13:50.696 00:48:03 -- nvmf/common.sh@157 -- # true 00:13:50.696 00:48:03 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:13:50.696 Cannot find device "nvmf_tgt_br2" 00:13:50.696 00:48:03 -- nvmf/common.sh@158 -- # true 00:13:50.696 00:48:03 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:13:50.955 00:48:03 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:13:50.955 00:48:03 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:13:50.955 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:13:50.955 00:48:03 -- nvmf/common.sh@161 -- # true 00:13:50.955 00:48:03 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:13:50.955 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:13:50.955 00:48:03 -- nvmf/common.sh@162 -- # true 00:13:50.955 00:48:03 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:13:50.955 00:48:03 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:13:50.955 00:48:03 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:13:50.955 00:48:03 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:13:50.955 00:48:03 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:13:50.955 00:48:03 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:13:50.955 00:48:03 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:13:50.955 00:48:03 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:13:50.955 00:48:03 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:13:50.955 00:48:03 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:13:50.955 00:48:03 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:13:50.955 00:48:03 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:13:50.955 00:48:03 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:13:50.955 00:48:03 -- 
nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:13:50.955 00:48:03 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:13:50.955 00:48:03 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:13:50.955 00:48:03 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:13:50.955 00:48:03 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:13:50.955 00:48:03 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:13:50.955 00:48:03 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:13:50.955 00:48:03 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:13:50.955 00:48:03 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:13:50.955 00:48:03 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:13:50.955 00:48:03 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:13:50.955 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:50.955 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.053 ms 00:13:50.955 00:13:50.955 --- 10.0.0.2 ping statistics --- 00:13:50.955 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:50.955 rtt min/avg/max/mdev = 0.053/0.053/0.053/0.000 ms 00:13:50.955 00:48:03 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:13:50.955 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:13:50.955 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.029 ms 00:13:50.955 00:13:50.955 --- 10.0.0.3 ping statistics --- 00:13:50.955 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:50.955 rtt min/avg/max/mdev = 0.029/0.029/0.029/0.000 ms 00:13:50.955 00:48:03 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:13:50.955 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:13:50.955 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.078 ms 00:13:50.955 00:13:50.955 --- 10.0.0.1 ping statistics --- 00:13:50.955 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:50.955 rtt min/avg/max/mdev = 0.078/0.078/0.078/0.000 ms 00:13:50.955 00:48:03 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:50.955 00:48:03 -- nvmf/common.sh@421 -- # return 0 00:13:50.955 00:48:03 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:13:50.955 00:48:03 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:50.955 00:48:03 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:13:50.955 00:48:03 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:13:50.955 00:48:03 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:50.955 00:48:03 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:13:50.955 00:48:03 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:13:50.955 00:48:03 -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:13:50.955 00:48:03 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:13:50.955 00:48:03 -- common/autotest_common.sh@722 -- # xtrace_disable 00:13:50.955 00:48:03 -- common/autotest_common.sh@10 -- # set +x 00:13:50.955 00:48:03 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:13:50.955 00:48:03 -- nvmf/common.sh@469 -- # nvmfpid=83245 00:13:50.955 00:48:03 -- nvmf/common.sh@470 -- # waitforlisten 83245 00:13:50.955 00:48:03 -- common/autotest_common.sh@829 -- # '[' -z 83245 ']' 00:13:50.955 00:48:03 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:50.955 00:48:03 -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:50.955 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:50.955 00:48:03 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:50.955 00:48:03 -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:50.955 00:48:03 -- common/autotest_common.sh@10 -- # set +x 00:13:51.214 [2024-12-03 00:48:03.517547] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:13:51.214 [2024-12-03 00:48:03.517841] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:51.214 [2024-12-03 00:48:03.662378] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:13:51.474 [2024-12-03 00:48:03.737232] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:13:51.474 [2024-12-03 00:48:03.737662] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:51.474 [2024-12-03 00:48:03.737786] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:51.474 [2024-12-03 00:48:03.738210] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
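Behind the nvmftestinit/nvmf_veth_init calls above, the target process (nvmf_tgt, pid 83245) runs inside the nvmf_tgt_ns_spdk network namespace and is reached from the host over a veth pair hung off the nvmf_br bridge, which is what the 10.0.0.1 -> 10.0.0.2 pings verify. A simplified single-path sketch of that topology, using the interface names and addresses from the trace (run as root; the real helper also sets up the second target interface nvmf_tgt_if2 / 10.0.0.3 and removes stale devices first, as seen above):

ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
ip link add nvmf_br type bridge
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br master nvmf_br
for dev in nvmf_init_if nvmf_init_br nvmf_tgt_br nvmf_br; do ip link set "$dev" up; done
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2   # host side should now reach the target namespace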
00:13:51.474 [2024-12-03 00:48:03.738525] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:13:51.474 [2024-12-03 00:48:03.738663] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:13:51.474 [2024-12-03 00:48:03.738676] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:52.041 00:48:04 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:52.041 00:48:04 -- common/autotest_common.sh@862 -- # return 0 00:13:52.041 00:48:04 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:13:52.041 00:48:04 -- common/autotest_common.sh@728 -- # xtrace_disable 00:13:52.041 00:48:04 -- common/autotest_common.sh@10 -- # set +x 00:13:52.300 00:48:04 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:52.300 00:48:04 -- target/nvmf_lvol.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:13:52.558 [2024-12-03 00:48:04.849711] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:52.558 00:48:04 -- target/nvmf_lvol.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:13:52.816 00:48:05 -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:13:52.816 00:48:05 -- target/nvmf_lvol.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:13:53.073 00:48:05 -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:13:53.073 00:48:05 -- target/nvmf_lvol.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:13:53.331 00:48:05 -- target/nvmf_lvol.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:13:53.589 00:48:05 -- target/nvmf_lvol.sh@29 -- # lvs=42cd36cf-b5d9-4f1f-8e01-cea5e2ab0011 00:13:53.589 00:48:05 -- target/nvmf_lvol.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 42cd36cf-b5d9-4f1f-8e01-cea5e2ab0011 lvol 20 00:13:53.846 00:48:06 -- target/nvmf_lvol.sh@32 -- # lvol=f7572aa0-c510-4113-bdb8-dbc73931fe34 00:13:53.846 00:48:06 -- target/nvmf_lvol.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:13:54.104 00:48:06 -- target/nvmf_lvol.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 f7572aa0-c510-4113-bdb8-dbc73931fe34 00:13:54.362 00:48:06 -- target/nvmf_lvol.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:13:54.621 [2024-12-03 00:48:06.914926] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:54.621 00:48:06 -- target/nvmf_lvol.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:13:54.880 00:48:07 -- target/nvmf_lvol.sh@42 -- # perf_pid=83393 00:13:54.880 00:48:07 -- target/nvmf_lvol.sh@41 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:13:54.880 00:48:07 -- target/nvmf_lvol.sh@44 -- # sleep 1 00:13:55.818 00:48:08 -- target/nvmf_lvol.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_snapshot f7572aa0-c510-4113-bdb8-dbc73931fe34 MY_SNAPSHOT 00:13:56.077 00:48:08 -- target/nvmf_lvol.sh@47 -- # snapshot=61f1e55e-4c2c-4f89-8d7c-d081dfe2e4b0 00:13:56.077 00:48:08 -- target/nvmf_lvol.sh@48 
-- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_resize f7572aa0-c510-4113-bdb8-dbc73931fe34 30 00:13:56.335 00:48:08 -- target/nvmf_lvol.sh@49 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_clone 61f1e55e-4c2c-4f89-8d7c-d081dfe2e4b0 MY_CLONE 00:13:56.594 00:48:09 -- target/nvmf_lvol.sh@49 -- # clone=61eb4e6d-cd78-48a0-be35-cfee1b8cfd8c 00:13:56.594 00:48:09 -- target/nvmf_lvol.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_inflate 61eb4e6d-cd78-48a0-be35-cfee1b8cfd8c 00:13:57.161 00:48:09 -- target/nvmf_lvol.sh@53 -- # wait 83393 00:14:05.295 Initializing NVMe Controllers 00:14:05.295 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:14:05.295 Controller IO queue size 128, less than required. 00:14:05.295 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:14:05.295 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:14:05.295 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:14:05.295 Initialization complete. Launching workers. 00:14:05.295 ======================================================== 00:14:05.295 Latency(us) 00:14:05.295 Device Information : IOPS MiB/s Average min max 00:14:05.295 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 11494.07 44.90 11140.17 1957.85 68159.28 00:14:05.295 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 11953.46 46.69 10707.33 3001.88 71498.74 00:14:05.295 ======================================================== 00:14:05.295 Total : 23447.53 91.59 10919.51 1957.85 71498.74 00:14:05.295 00:14:05.295 00:48:17 -- target/nvmf_lvol.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:14:05.295 00:48:17 -- target/nvmf_lvol.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete f7572aa0-c510-4113-bdb8-dbc73931fe34 00:14:05.554 00:48:17 -- target/nvmf_lvol.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 42cd36cf-b5d9-4f1f-8e01-cea5e2ab0011 00:14:05.813 00:48:18 -- target/nvmf_lvol.sh@60 -- # rm -f 00:14:05.813 00:48:18 -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:14:05.813 00:48:18 -- target/nvmf_lvol.sh@64 -- # nvmftestfini 00:14:05.813 00:48:18 -- nvmf/common.sh@476 -- # nvmfcleanup 00:14:05.813 00:48:18 -- nvmf/common.sh@116 -- # sync 00:14:05.813 00:48:18 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:14:05.813 00:48:18 -- nvmf/common.sh@119 -- # set +e 00:14:05.813 00:48:18 -- nvmf/common.sh@120 -- # for i in {1..20} 00:14:05.813 00:48:18 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:14:05.813 rmmod nvme_tcp 00:14:05.813 rmmod nvme_fabrics 00:14:05.813 rmmod nvme_keyring 00:14:05.813 00:48:18 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:14:05.813 00:48:18 -- nvmf/common.sh@123 -- # set -e 00:14:05.813 00:48:18 -- nvmf/common.sh@124 -- # return 0 00:14:05.813 00:48:18 -- nvmf/common.sh@477 -- # '[' -n 83245 ']' 00:14:05.813 00:48:18 -- nvmf/common.sh@478 -- # killprocess 83245 00:14:05.813 00:48:18 -- common/autotest_common.sh@936 -- # '[' -z 83245 ']' 00:14:05.813 00:48:18 -- common/autotest_common.sh@940 -- # kill -0 83245 00:14:05.813 00:48:18 -- common/autotest_common.sh@941 -- # uname 00:14:05.813 00:48:18 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:14:05.813 00:48:18 -- common/autotest_common.sh@942 -- # ps --no-headers -o 
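The lvol part of the test reduces to the rpc.py sequence below. This is a condensed sketch rather than a copy of nvmf_lvol.sh: the names, sizes and NQN are the ones reported in this run, and capturing each new bdev name with command substitution is an assumption about how one would script it by hand:

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
$rpc bdev_malloc_create 64 512                                   # Malloc0
$rpc bdev_malloc_create 64 512                                   # Malloc1
$rpc bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1'   # stripe the two malloc bdevs
lvs=$($rpc bdev_lvol_create_lvstore raid0 lvs)                   # 42cd36cf-... in this run
lvol=$($rpc bdev_lvol_create -u "$lvs" lvol 20)                  # 20 = LVOL_BDEV_INIT_SIZE
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 "$lvol"
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
snap=$($rpc bdev_lvol_snapshot "$lvol" MY_SNAPSHOT)              # taken while spdk_nvme_perf runs
$rpc bdev_lvol_resize "$lvol" 30                                 # 30 = LVOL_BDEV_FINAL_SIZE
clone=$($rpc bdev_lvol_clone "$snap" MY_CLONE)
$rpc bdev_lvol_inflate "$clone"                                  # detach the clone from its snapshot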
comm= 83245 00:14:06.072 killing process with pid 83245 00:14:06.072 00:48:18 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:14:06.072 00:48:18 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:14:06.072 00:48:18 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 83245' 00:14:06.072 00:48:18 -- common/autotest_common.sh@955 -- # kill 83245 00:14:06.072 00:48:18 -- common/autotest_common.sh@960 -- # wait 83245 00:14:06.331 00:48:18 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:14:06.331 00:48:18 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:14:06.331 00:48:18 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:14:06.331 00:48:18 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:14:06.331 00:48:18 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:14:06.331 00:48:18 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:06.331 00:48:18 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:06.331 00:48:18 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:06.331 00:48:18 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:14:06.331 00:14:06.331 real 0m15.738s 00:14:06.331 user 1m5.923s 00:14:06.331 sys 0m3.499s 00:14:06.331 00:48:18 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:14:06.331 00:48:18 -- common/autotest_common.sh@10 -- # set +x 00:14:06.331 ************************************ 00:14:06.331 END TEST nvmf_lvol 00:14:06.331 ************************************ 00:14:06.331 00:48:18 -- nvmf/nvmf.sh@48 -- # run_test nvmf_lvs_grow /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:14:06.331 00:48:18 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:14:06.331 00:48:18 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:14:06.331 00:48:18 -- common/autotest_common.sh@10 -- # set +x 00:14:06.331 ************************************ 00:14:06.331 START TEST nvmf_lvs_grow 00:14:06.331 ************************************ 00:14:06.331 00:48:18 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:14:06.331 * Looking for test storage... 
00:14:06.331 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:14:06.331 00:48:18 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:14:06.331 00:48:18 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:14:06.331 00:48:18 -- common/autotest_common.sh@1690 -- # lcov --version 00:14:06.624 00:48:18 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:14:06.624 00:48:18 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:14:06.624 00:48:18 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:14:06.624 00:48:18 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:14:06.624 00:48:18 -- scripts/common.sh@335 -- # IFS=.-: 00:14:06.624 00:48:18 -- scripts/common.sh@335 -- # read -ra ver1 00:14:06.624 00:48:18 -- scripts/common.sh@336 -- # IFS=.-: 00:14:06.624 00:48:18 -- scripts/common.sh@336 -- # read -ra ver2 00:14:06.624 00:48:18 -- scripts/common.sh@337 -- # local 'op=<' 00:14:06.624 00:48:18 -- scripts/common.sh@339 -- # ver1_l=2 00:14:06.624 00:48:18 -- scripts/common.sh@340 -- # ver2_l=1 00:14:06.624 00:48:18 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:14:06.624 00:48:18 -- scripts/common.sh@343 -- # case "$op" in 00:14:06.624 00:48:18 -- scripts/common.sh@344 -- # : 1 00:14:06.624 00:48:18 -- scripts/common.sh@363 -- # (( v = 0 )) 00:14:06.624 00:48:18 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:14:06.624 00:48:18 -- scripts/common.sh@364 -- # decimal 1 00:14:06.624 00:48:18 -- scripts/common.sh@352 -- # local d=1 00:14:06.624 00:48:18 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:06.624 00:48:18 -- scripts/common.sh@354 -- # echo 1 00:14:06.624 00:48:18 -- scripts/common.sh@364 -- # ver1[v]=1 00:14:06.624 00:48:18 -- scripts/common.sh@365 -- # decimal 2 00:14:06.624 00:48:18 -- scripts/common.sh@352 -- # local d=2 00:14:06.624 00:48:18 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:06.624 00:48:18 -- scripts/common.sh@354 -- # echo 2 00:14:06.624 00:48:18 -- scripts/common.sh@365 -- # ver2[v]=2 00:14:06.625 00:48:18 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:14:06.625 00:48:18 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:14:06.625 00:48:18 -- scripts/common.sh@367 -- # return 0 00:14:06.625 00:48:18 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:06.625 00:48:18 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:14:06.625 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:06.625 --rc genhtml_branch_coverage=1 00:14:06.625 --rc genhtml_function_coverage=1 00:14:06.625 --rc genhtml_legend=1 00:14:06.625 --rc geninfo_all_blocks=1 00:14:06.625 --rc geninfo_unexecuted_blocks=1 00:14:06.625 00:14:06.625 ' 00:14:06.625 00:48:18 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:14:06.625 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:06.625 --rc genhtml_branch_coverage=1 00:14:06.625 --rc genhtml_function_coverage=1 00:14:06.625 --rc genhtml_legend=1 00:14:06.625 --rc geninfo_all_blocks=1 00:14:06.625 --rc geninfo_unexecuted_blocks=1 00:14:06.625 00:14:06.625 ' 00:14:06.625 00:48:18 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:14:06.625 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:06.625 --rc genhtml_branch_coverage=1 00:14:06.625 --rc genhtml_function_coverage=1 00:14:06.625 --rc genhtml_legend=1 00:14:06.625 --rc geninfo_all_blocks=1 00:14:06.625 --rc geninfo_unexecuted_blocks=1 00:14:06.625 00:14:06.625 ' 00:14:06.625 
00:48:18 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:14:06.625 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:06.625 --rc genhtml_branch_coverage=1 00:14:06.625 --rc genhtml_function_coverage=1 00:14:06.625 --rc genhtml_legend=1 00:14:06.625 --rc geninfo_all_blocks=1 00:14:06.625 --rc geninfo_unexecuted_blocks=1 00:14:06.625 00:14:06.625 ' 00:14:06.625 00:48:18 -- target/nvmf_lvs_grow.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:14:06.625 00:48:18 -- nvmf/common.sh@7 -- # uname -s 00:14:06.625 00:48:18 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:06.625 00:48:18 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:06.625 00:48:18 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:06.625 00:48:18 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:06.625 00:48:18 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:06.625 00:48:18 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:06.625 00:48:18 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:06.625 00:48:18 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:06.625 00:48:18 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:06.625 00:48:18 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:06.625 00:48:18 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:15939434-fa82-47b6-ae7c-b62ba203cee8 00:14:06.625 00:48:18 -- nvmf/common.sh@18 -- # NVME_HOSTID=15939434-fa82-47b6-ae7c-b62ba203cee8 00:14:06.625 00:48:18 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:06.625 00:48:18 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:06.625 00:48:18 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:14:06.625 00:48:18 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:14:06.625 00:48:18 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:06.625 00:48:18 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:06.625 00:48:18 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:06.625 00:48:18 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:06.625 00:48:18 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:06.625 00:48:18 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:06.625 00:48:18 -- paths/export.sh@5 -- # export PATH 00:14:06.625 00:48:18 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:06.625 00:48:18 -- nvmf/common.sh@46 -- # : 0 00:14:06.625 00:48:18 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:14:06.625 00:48:18 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:14:06.625 00:48:18 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:14:06.625 00:48:18 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:06.625 00:48:18 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:06.625 00:48:18 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:14:06.625 00:48:18 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:14:06.625 00:48:18 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:14:06.625 00:48:18 -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:14:06.625 00:48:18 -- target/nvmf_lvs_grow.sh@12 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:14:06.625 00:48:18 -- target/nvmf_lvs_grow.sh@97 -- # nvmftestinit 00:14:06.625 00:48:18 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:14:06.625 00:48:18 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:06.625 00:48:18 -- nvmf/common.sh@436 -- # prepare_net_devs 00:14:06.625 00:48:18 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:14:06.625 00:48:18 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:14:06.625 00:48:18 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:06.625 00:48:18 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:06.625 00:48:18 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:06.625 00:48:18 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:14:06.625 00:48:18 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:14:06.625 00:48:18 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:14:06.625 00:48:18 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:14:06.625 00:48:18 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:14:06.625 00:48:18 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:14:06.625 00:48:18 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:06.625 00:48:18 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:06.625 00:48:18 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:14:06.625 00:48:18 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:14:06.625 00:48:18 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:14:06.625 00:48:18 -- nvmf/common.sh@145 -- # 
NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:14:06.625 00:48:18 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:14:06.625 00:48:18 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:06.625 00:48:18 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:14:06.625 00:48:18 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:14:06.625 00:48:18 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:14:06.625 00:48:18 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:14:06.625 00:48:18 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:14:06.625 00:48:18 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:14:06.625 Cannot find device "nvmf_tgt_br" 00:14:06.625 00:48:18 -- nvmf/common.sh@154 -- # true 00:14:06.625 00:48:18 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:14:06.625 Cannot find device "nvmf_tgt_br2" 00:14:06.625 00:48:19 -- nvmf/common.sh@155 -- # true 00:14:06.625 00:48:19 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:14:06.625 00:48:19 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:14:06.625 Cannot find device "nvmf_tgt_br" 00:14:06.625 00:48:19 -- nvmf/common.sh@157 -- # true 00:14:06.625 00:48:19 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:14:06.625 Cannot find device "nvmf_tgt_br2" 00:14:06.625 00:48:19 -- nvmf/common.sh@158 -- # true 00:14:06.625 00:48:19 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:14:06.625 00:48:19 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:14:06.625 00:48:19 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:14:06.625 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:06.625 00:48:19 -- nvmf/common.sh@161 -- # true 00:14:06.625 00:48:19 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:14:06.625 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:06.625 00:48:19 -- nvmf/common.sh@162 -- # true 00:14:06.625 00:48:19 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:14:06.625 00:48:19 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:14:06.625 00:48:19 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:14:06.625 00:48:19 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:14:06.625 00:48:19 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:14:06.625 00:48:19 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:14:06.885 00:48:19 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:14:06.885 00:48:19 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:14:06.885 00:48:19 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:14:06.885 00:48:19 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:14:06.885 00:48:19 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:14:06.885 00:48:19 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:14:06.885 00:48:19 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:14:06.885 00:48:19 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:14:06.885 00:48:19 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 
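The nvmf_veth_init steps above, together with the bridge, iptables and ping steps that follow just below, build the virtual test network every TCP transport test in this log runs on. Condensed into plain ip/iptables commands (a sketch using the names and addresses from the trace; teardown of stale devices and error handling omitted):

    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
    ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
    ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
    ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk

    ip addr add 10.0.0.1/24 dev nvmf_init_if                                 # initiator side
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if   # first target IP
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2  # second target IP

    ip link set nvmf_init_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up

    ip link add nvmf_br type bridge && ip link set nvmf_br up
    for port in nvmf_init_br nvmf_tgt_br nvmf_tgt_br2; do
        ip link set "$port" up && ip link set "$port" master nvmf_br
    done

    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT

The three pings that follow (10.0.0.2 and 10.0.0.3 from the host, 10.0.0.1 from inside the namespace) simply confirm the bridge forwards in both directions before the target is started.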
00:14:06.885 00:48:19 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:14:06.885 00:48:19 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:14:06.885 00:48:19 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:14:06.885 00:48:19 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:14:06.885 00:48:19 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:14:06.885 00:48:19 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:14:06.885 00:48:19 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:14:06.885 00:48:19 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:14:06.885 00:48:19 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:14:06.885 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:06.885 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.109 ms 00:14:06.885 00:14:06.885 --- 10.0.0.2 ping statistics --- 00:14:06.885 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:06.885 rtt min/avg/max/mdev = 0.109/0.109/0.109/0.000 ms 00:14:06.885 00:48:19 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:14:06.885 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:14:06.885 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.033 ms 00:14:06.885 00:14:06.885 --- 10.0.0.3 ping statistics --- 00:14:06.885 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:06.885 rtt min/avg/max/mdev = 0.033/0.033/0.033/0.000 ms 00:14:06.885 00:48:19 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:14:06.885 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:14:06.885 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.018 ms 00:14:06.885 00:14:06.885 --- 10.0.0.1 ping statistics --- 00:14:06.885 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:06.885 rtt min/avg/max/mdev = 0.018/0.018/0.018/0.000 ms 00:14:06.885 00:48:19 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:06.885 00:48:19 -- nvmf/common.sh@421 -- # return 0 00:14:06.885 00:48:19 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:14:06.885 00:48:19 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:06.885 00:48:19 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:14:06.885 00:48:19 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:14:06.885 00:48:19 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:06.885 00:48:19 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:14:06.885 00:48:19 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:14:06.885 00:48:19 -- target/nvmf_lvs_grow.sh@98 -- # nvmfappstart -m 0x1 00:14:06.885 00:48:19 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:14:06.885 00:48:19 -- common/autotest_common.sh@722 -- # xtrace_disable 00:14:06.885 00:48:19 -- common/autotest_common.sh@10 -- # set +x 00:14:06.885 00:48:19 -- nvmf/common.sh@469 -- # nvmfpid=83768 00:14:06.885 00:48:19 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:14:06.885 00:48:19 -- nvmf/common.sh@470 -- # waitforlisten 83768 00:14:06.885 00:48:19 -- common/autotest_common.sh@829 -- # '[' -z 83768 ']' 00:14:06.885 00:48:19 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:06.885 00:48:19 -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:06.885 00:48:19 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX 
domain socket /var/tmp/spdk.sock...' 00:14:06.885 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:06.885 00:48:19 -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:06.885 00:48:19 -- common/autotest_common.sh@10 -- # set +x 00:14:06.885 [2024-12-03 00:48:19.353593] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:14:06.885 [2024-12-03 00:48:19.353679] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:07.144 [2024-12-03 00:48:19.492498] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:07.144 [2024-12-03 00:48:19.558445] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:14:07.144 [2024-12-03 00:48:19.558594] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:07.144 [2024-12-03 00:48:19.558607] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:07.144 [2024-12-03 00:48:19.558616] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:07.144 [2024-12-03 00:48:19.558655] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:08.079 00:48:20 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:08.079 00:48:20 -- common/autotest_common.sh@862 -- # return 0 00:14:08.079 00:48:20 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:14:08.079 00:48:20 -- common/autotest_common.sh@728 -- # xtrace_disable 00:14:08.079 00:48:20 -- common/autotest_common.sh@10 -- # set +x 00:14:08.079 00:48:20 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:08.079 00:48:20 -- target/nvmf_lvs_grow.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:14:08.079 [2024-12-03 00:48:20.509784] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:08.079 00:48:20 -- target/nvmf_lvs_grow.sh@101 -- # run_test lvs_grow_clean lvs_grow 00:14:08.079 00:48:20 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:14:08.079 00:48:20 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:14:08.079 00:48:20 -- common/autotest_common.sh@10 -- # set +x 00:14:08.079 ************************************ 00:14:08.079 START TEST lvs_grow_clean 00:14:08.079 ************************************ 00:14:08.079 00:48:20 -- common/autotest_common.sh@1114 -- # lvs_grow 00:14:08.079 00:48:20 -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:14:08.079 00:48:20 -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:14:08.079 00:48:20 -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:14:08.079 00:48:20 -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:14:08.079 00:48:20 -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:14:08.079 00:48:20 -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:14:08.079 00:48:20 -- target/nvmf_lvs_grow.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:14:08.079 00:48:20 -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:14:08.079 00:48:20 -- target/nvmf_lvs_grow.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:14:08.338 00:48:20 -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:14:08.338 00:48:20 -- target/nvmf_lvs_grow.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:14:08.905 00:48:21 -- target/nvmf_lvs_grow.sh@28 -- # lvs=e32490db-cadf-498d-b94b-059e824f812b 00:14:08.905 00:48:21 -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:14:08.905 00:48:21 -- target/nvmf_lvs_grow.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u e32490db-cadf-498d-b94b-059e824f812b 00:14:08.905 00:48:21 -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:14:08.905 00:48:21 -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:14:08.905 00:48:21 -- target/nvmf_lvs_grow.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u e32490db-cadf-498d-b94b-059e824f812b lvol 150 00:14:09.163 00:48:21 -- target/nvmf_lvs_grow.sh@33 -- # lvol=06b2f4db-0389-4b56-9c65-d445be3e2572 00:14:09.163 00:48:21 -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:14:09.163 00:48:21 -- target/nvmf_lvs_grow.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:14:09.421 [2024-12-03 00:48:21.836338] bdev_aio.c: 959:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:14:09.421 [2024-12-03 00:48:21.836394] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:14:09.421 true 00:14:09.421 00:48:21 -- target/nvmf_lvs_grow.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u e32490db-cadf-498d-b94b-059e824f812b 00:14:09.421 00:48:21 -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:14:09.680 00:48:22 -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:14:09.680 00:48:22 -- target/nvmf_lvs_grow.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:14:09.939 00:48:22 -- target/nvmf_lvs_grow.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 06b2f4db-0389-4b56-9c65-d445be3e2572 00:14:10.197 00:48:22 -- target/nvmf_lvs_grow.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:14:10.456 [2024-12-03 00:48:22.776879] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:10.456 00:48:22 -- target/nvmf_lvs_grow.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:14:10.715 00:48:23 -- target/nvmf_lvs_grow.sh@47 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:14:10.715 00:48:23 -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=83924 00:14:10.715 00:48:23 -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:14:10.715 00:48:23 -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 83924 /var/tmp/bdevperf.sock 00:14:10.715 00:48:23 -- common/autotest_common.sh@829 -- # '[' -z 83924 ']' 00:14:10.715 00:48:23 -- common/autotest_common.sh@833 -- # 
local rpc_addr=/var/tmp/bdevperf.sock 00:14:10.715 00:48:23 -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:10.715 00:48:23 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:14:10.715 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:14:10.715 00:48:23 -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:10.715 00:48:23 -- common/autotest_common.sh@10 -- # set +x 00:14:10.715 [2024-12-03 00:48:23.080048] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:14:10.716 [2024-12-03 00:48:23.080109] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83924 ] 00:14:10.716 [2024-12-03 00:48:23.221391] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:10.975 [2024-12-03 00:48:23.295920] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:14:11.912 00:48:24 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:11.912 00:48:24 -- common/autotest_common.sh@862 -- # return 0 00:14:11.912 00:48:24 -- target/nvmf_lvs_grow.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:14:11.912 Nvme0n1 00:14:11.912 00:48:24 -- target/nvmf_lvs_grow.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:14:12.171 [ 00:14:12.171 { 00:14:12.171 "aliases": [ 00:14:12.171 "06b2f4db-0389-4b56-9c65-d445be3e2572" 00:14:12.171 ], 00:14:12.171 "assigned_rate_limits": { 00:14:12.171 "r_mbytes_per_sec": 0, 00:14:12.171 "rw_ios_per_sec": 0, 00:14:12.171 "rw_mbytes_per_sec": 0, 00:14:12.171 "w_mbytes_per_sec": 0 00:14:12.171 }, 00:14:12.171 "block_size": 4096, 00:14:12.171 "claimed": false, 00:14:12.171 "driver_specific": { 00:14:12.171 "mp_policy": "active_passive", 00:14:12.171 "nvme": [ 00:14:12.171 { 00:14:12.171 "ctrlr_data": { 00:14:12.171 "ana_reporting": false, 00:14:12.171 "cntlid": 1, 00:14:12.171 "firmware_revision": "24.01.1", 00:14:12.171 "model_number": "SPDK bdev Controller", 00:14:12.171 "multi_ctrlr": true, 00:14:12.171 "oacs": { 00:14:12.171 "firmware": 0, 00:14:12.171 "format": 0, 00:14:12.171 "ns_manage": 0, 00:14:12.171 "security": 0 00:14:12.171 }, 00:14:12.171 "serial_number": "SPDK0", 00:14:12.171 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:14:12.171 "vendor_id": "0x8086" 00:14:12.171 }, 00:14:12.171 "ns_data": { 00:14:12.171 "can_share": true, 00:14:12.171 "id": 1 00:14:12.171 }, 00:14:12.171 "trid": { 00:14:12.171 "adrfam": "IPv4", 00:14:12.171 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:14:12.171 "traddr": "10.0.0.2", 00:14:12.171 "trsvcid": "4420", 00:14:12.171 "trtype": "TCP" 00:14:12.171 }, 00:14:12.171 "vs": { 00:14:12.171 "nvme_version": "1.3" 00:14:12.171 } 00:14:12.171 } 00:14:12.171 ] 00:14:12.171 }, 00:14:12.171 "name": "Nvme0n1", 00:14:12.171 "num_blocks": 38912, 00:14:12.171 "product_name": "NVMe disk", 00:14:12.171 "supported_io_types": { 00:14:12.171 "abort": true, 00:14:12.171 "compare": true, 00:14:12.171 "compare_and_write": true, 00:14:12.171 "flush": true, 00:14:12.171 "nvme_admin": true, 00:14:12.171 "nvme_io": true, 00:14:12.171 "read": true, 00:14:12.171 "reset": true, 00:14:12.171 "unmap": 
true, 00:14:12.171 "write": true, 00:14:12.171 "write_zeroes": true 00:14:12.171 }, 00:14:12.171 "uuid": "06b2f4db-0389-4b56-9c65-d445be3e2572", 00:14:12.171 "zoned": false 00:14:12.171 } 00:14:12.171 ] 00:14:12.171 00:48:24 -- target/nvmf_lvs_grow.sh@55 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:14:12.171 00:48:24 -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=83977 00:14:12.171 00:48:24 -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:14:12.171 Running I/O for 10 seconds... 00:14:13.546 Latency(us) 00:14:13.546 [2024-12-03T00:48:26.061Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:13.546 [2024-12-03T00:48:26.061Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:13.546 Nvme0n1 : 1.00 9043.00 35.32 0.00 0.00 0.00 0.00 0.00 00:14:13.546 [2024-12-03T00:48:26.061Z] =================================================================================================================== 00:14:13.546 [2024-12-03T00:48:26.061Z] Total : 9043.00 35.32 0.00 0.00 0.00 0.00 0.00 00:14:13.546 00:14:14.113 00:48:26 -- target/nvmf_lvs_grow.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u e32490db-cadf-498d-b94b-059e824f812b 00:14:14.371 [2024-12-03T00:48:26.886Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:14.371 Nvme0n1 : 2.00 9059.00 35.39 0.00 0.00 0.00 0.00 0.00 00:14:14.371 [2024-12-03T00:48:26.886Z] =================================================================================================================== 00:14:14.371 [2024-12-03T00:48:26.886Z] Total : 9059.00 35.39 0.00 0.00 0.00 0.00 0.00 00:14:14.371 00:14:14.630 true 00:14:14.630 00:48:26 -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:14:14.630 00:48:26 -- target/nvmf_lvs_grow.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u e32490db-cadf-498d-b94b-059e824f812b 00:14:14.888 00:48:27 -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:14:14.888 00:48:27 -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:14:14.888 00:48:27 -- target/nvmf_lvs_grow.sh@65 -- # wait 83977 00:14:15.147 [2024-12-03T00:48:27.662Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:15.147 Nvme0n1 : 3.00 9017.67 35.23 0.00 0.00 0.00 0.00 0.00 00:14:15.147 [2024-12-03T00:48:27.662Z] =================================================================================================================== 00:14:15.147 [2024-12-03T00:48:27.662Z] Total : 9017.67 35.23 0.00 0.00 0.00 0.00 0.00 00:14:15.147 00:14:16.525 [2024-12-03T00:48:29.040Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:16.525 Nvme0n1 : 4.00 8965.50 35.02 0.00 0.00 0.00 0.00 0.00 00:14:16.525 [2024-12-03T00:48:29.040Z] =================================================================================================================== 00:14:16.525 [2024-12-03T00:48:29.040Z] Total : 8965.50 35.02 0.00 0.00 0.00 0.00 0.00 00:14:16.525 00:14:17.461 [2024-12-03T00:48:29.976Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:17.462 Nvme0n1 : 5.00 8931.20 34.89 0.00 0.00 0.00 0.00 0.00 00:14:17.462 [2024-12-03T00:48:29.977Z] =================================================================================================================== 00:14:17.462 [2024-12-03T00:48:29.977Z] Total : 8931.20 34.89 0.00 0.00 0.00 0.00 0.00 00:14:17.462 
00:14:18.442 [2024-12-03T00:48:30.957Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:18.442 Nvme0n1 : 6.00 8792.50 34.35 0.00 0.00 0.00 0.00 0.00 00:14:18.442 [2024-12-03T00:48:30.957Z] =================================================================================================================== 00:14:18.442 [2024-12-03T00:48:30.957Z] Total : 8792.50 34.35 0.00 0.00 0.00 0.00 0.00 00:14:18.442 00:14:19.454 [2024-12-03T00:48:31.969Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:19.454 Nvme0n1 : 7.00 8739.71 34.14 0.00 0.00 0.00 0.00 0.00 00:14:19.454 [2024-12-03T00:48:31.969Z] =================================================================================================================== 00:14:19.454 [2024-12-03T00:48:31.969Z] Total : 8739.71 34.14 0.00 0.00 0.00 0.00 0.00 00:14:19.454 00:14:20.389 [2024-12-03T00:48:32.904Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:20.389 Nvme0n1 : 8.00 8734.88 34.12 0.00 0.00 0.00 0.00 0.00 00:14:20.389 [2024-12-03T00:48:32.904Z] =================================================================================================================== 00:14:20.389 [2024-12-03T00:48:32.904Z] Total : 8734.88 34.12 0.00 0.00 0.00 0.00 0.00 00:14:20.389 00:14:21.324 [2024-12-03T00:48:33.839Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:21.324 Nvme0n1 : 9.00 8735.89 34.12 0.00 0.00 0.00 0.00 0.00 00:14:21.324 [2024-12-03T00:48:33.839Z] =================================================================================================================== 00:14:21.324 [2024-12-03T00:48:33.839Z] Total : 8735.89 34.12 0.00 0.00 0.00 0.00 0.00 00:14:21.324 00:14:22.261 [2024-12-03T00:48:34.776Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:22.261 Nvme0n1 : 10.00 8720.70 34.07 0.00 0.00 0.00 0.00 0.00 00:14:22.261 [2024-12-03T00:48:34.776Z] =================================================================================================================== 00:14:22.261 [2024-12-03T00:48:34.776Z] Total : 8720.70 34.07 0.00 0.00 0.00 0.00 0.00 00:14:22.261 00:14:22.261 00:14:22.261 Latency(us) 00:14:22.261 [2024-12-03T00:48:34.776Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:22.261 [2024-12-03T00:48:34.776Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:22.261 Nvme0n1 : 10.01 8727.68 34.09 0.00 0.00 14661.62 4766.25 87222.46 00:14:22.261 [2024-12-03T00:48:34.776Z] =================================================================================================================== 00:14:22.261 [2024-12-03T00:48:34.776Z] Total : 8727.68 34.09 0.00 0.00 14661.62 4766.25 87222.46 00:14:22.261 0 00:14:22.261 00:48:34 -- target/nvmf_lvs_grow.sh@66 -- # killprocess 83924 00:14:22.261 00:48:34 -- common/autotest_common.sh@936 -- # '[' -z 83924 ']' 00:14:22.261 00:48:34 -- common/autotest_common.sh@940 -- # kill -0 83924 00:14:22.261 00:48:34 -- common/autotest_common.sh@941 -- # uname 00:14:22.261 00:48:34 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:14:22.261 00:48:34 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 83924 00:14:22.261 00:48:34 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:14:22.261 00:48:34 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:14:22.261 killing process with pid 83924 00:14:22.261 00:48:34 -- common/autotest_common.sh@954 -- # 
echo 'killing process with pid 83924' 00:14:22.261 Received shutdown signal, test time was about 10.000000 seconds 00:14:22.261 00:14:22.261 Latency(us) 00:14:22.261 [2024-12-03T00:48:34.776Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:22.261 [2024-12-03T00:48:34.776Z] =================================================================================================================== 00:14:22.261 [2024-12-03T00:48:34.776Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:14:22.261 00:48:34 -- common/autotest_common.sh@955 -- # kill 83924 00:14:22.261 00:48:34 -- common/autotest_common.sh@960 -- # wait 83924 00:14:22.520 00:48:34 -- target/nvmf_lvs_grow.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:14:22.779 00:48:35 -- target/nvmf_lvs_grow.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u e32490db-cadf-498d-b94b-059e824f812b 00:14:22.779 00:48:35 -- target/nvmf_lvs_grow.sh@69 -- # jq -r '.[0].free_clusters' 00:14:23.036 00:48:35 -- target/nvmf_lvs_grow.sh@69 -- # free_clusters=61 00:14:23.036 00:48:35 -- target/nvmf_lvs_grow.sh@71 -- # [[ '' == \d\i\r\t\y ]] 00:14:23.036 00:48:35 -- target/nvmf_lvs_grow.sh@83 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:14:23.294 [2024-12-03 00:48:35.644116] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:14:23.294 00:48:35 -- target/nvmf_lvs_grow.sh@84 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u e32490db-cadf-498d-b94b-059e824f812b 00:14:23.294 00:48:35 -- common/autotest_common.sh@650 -- # local es=0 00:14:23.294 00:48:35 -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u e32490db-cadf-498d-b94b-059e824f812b 00:14:23.294 00:48:35 -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:14:23.294 00:48:35 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:23.294 00:48:35 -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:14:23.294 00:48:35 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:23.294 00:48:35 -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:14:23.294 00:48:35 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:23.294 00:48:35 -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:14:23.294 00:48:35 -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:14:23.294 00:48:35 -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u e32490db-cadf-498d-b94b-059e824f812b 00:14:23.553 2024/12/03 00:48:35 error on JSON-RPC call, method: bdev_lvol_get_lvstores, params: map[uuid:e32490db-cadf-498d-b94b-059e824f812b], err: error received for bdev_lvol_get_lvstores method, err: Code=-19 Msg=No such device 00:14:23.553 request: 00:14:23.553 { 00:14:23.553 "method": "bdev_lvol_get_lvstores", 00:14:23.553 "params": { 00:14:23.553 "uuid": "e32490db-cadf-498d-b94b-059e824f812b" 00:14:23.553 } 00:14:23.553 } 00:14:23.553 Got JSON-RPC error response 00:14:23.553 GoRPCClient: error on JSON-RPC call 00:14:23.553 00:48:35 -- common/autotest_common.sh@653 -- # es=1 00:14:23.553 00:48:35 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:14:23.553 
00:48:35 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:14:23.553 00:48:35 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:14:23.553 00:48:35 -- target/nvmf_lvs_grow.sh@85 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:14:23.812 aio_bdev 00:14:23.812 00:48:36 -- target/nvmf_lvs_grow.sh@86 -- # waitforbdev 06b2f4db-0389-4b56-9c65-d445be3e2572 00:14:23.812 00:48:36 -- common/autotest_common.sh@897 -- # local bdev_name=06b2f4db-0389-4b56-9c65-d445be3e2572 00:14:23.812 00:48:36 -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:14:23.812 00:48:36 -- common/autotest_common.sh@899 -- # local i 00:14:23.812 00:48:36 -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:14:23.812 00:48:36 -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:14:23.812 00:48:36 -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:14:24.071 00:48:36 -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 06b2f4db-0389-4b56-9c65-d445be3e2572 -t 2000 00:14:24.330 [ 00:14:24.330 { 00:14:24.330 "aliases": [ 00:14:24.330 "lvs/lvol" 00:14:24.330 ], 00:14:24.330 "assigned_rate_limits": { 00:14:24.330 "r_mbytes_per_sec": 0, 00:14:24.330 "rw_ios_per_sec": 0, 00:14:24.330 "rw_mbytes_per_sec": 0, 00:14:24.330 "w_mbytes_per_sec": 0 00:14:24.330 }, 00:14:24.330 "block_size": 4096, 00:14:24.330 "claimed": false, 00:14:24.330 "driver_specific": { 00:14:24.330 "lvol": { 00:14:24.330 "base_bdev": "aio_bdev", 00:14:24.330 "clone": false, 00:14:24.330 "esnap_clone": false, 00:14:24.330 "lvol_store_uuid": "e32490db-cadf-498d-b94b-059e824f812b", 00:14:24.330 "snapshot": false, 00:14:24.330 "thin_provision": false 00:14:24.330 } 00:14:24.330 }, 00:14:24.330 "name": "06b2f4db-0389-4b56-9c65-d445be3e2572", 00:14:24.330 "num_blocks": 38912, 00:14:24.330 "product_name": "Logical Volume", 00:14:24.330 "supported_io_types": { 00:14:24.330 "abort": false, 00:14:24.330 "compare": false, 00:14:24.330 "compare_and_write": false, 00:14:24.330 "flush": false, 00:14:24.330 "nvme_admin": false, 00:14:24.330 "nvme_io": false, 00:14:24.330 "read": true, 00:14:24.330 "reset": true, 00:14:24.330 "unmap": true, 00:14:24.330 "write": true, 00:14:24.330 "write_zeroes": true 00:14:24.330 }, 00:14:24.330 "uuid": "06b2f4db-0389-4b56-9c65-d445be3e2572", 00:14:24.330 "zoned": false 00:14:24.330 } 00:14:24.330 ] 00:14:24.330 00:48:36 -- common/autotest_common.sh@905 -- # return 0 00:14:24.330 00:48:36 -- target/nvmf_lvs_grow.sh@87 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u e32490db-cadf-498d-b94b-059e824f812b 00:14:24.330 00:48:36 -- target/nvmf_lvs_grow.sh@87 -- # jq -r '.[0].free_clusters' 00:14:24.589 00:48:36 -- target/nvmf_lvs_grow.sh@87 -- # (( free_clusters == 61 )) 00:14:24.589 00:48:36 -- target/nvmf_lvs_grow.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u e32490db-cadf-498d-b94b-059e824f812b 00:14:24.589 00:48:36 -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].total_data_clusters' 00:14:24.848 00:48:37 -- target/nvmf_lvs_grow.sh@88 -- # (( data_clusters == 99 )) 00:14:24.848 00:48:37 -- target/nvmf_lvs_grow.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete 06b2f4db-0389-4b56-9c65-d445be3e2572 00:14:25.106 00:48:37 -- target/nvmf_lvs_grow.sh@92 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u e32490db-cadf-498d-b94b-059e824f812b 
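That completes the clean variant: a 200 MiB file-backed aio bdev holds the lvstore (49 data clusters of 4 MiB), the file is grown to 400 MiB while bdevperf is writing, and bdev_lvol_grow_lvstore makes the extra space visible (99 clusters, 61 of them free once the 150 MiB lvol is accounted for). Stripped of the xtrace plumbing, the grow path boils down to roughly this rpc.py sequence (a sketch using the paths and sizes from the trace, not the verbatim nvmf_lvs_grow.sh):

    RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    AIO=/home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev

    truncate -s 200M "$AIO"
    $RPC bdev_aio_create "$AIO" aio_bdev 4096                   # file -> aio_bdev, 4 KiB blocks
    lvs=$($RPC bdev_lvol_create_lvstore --cluster-sz 4194304 \
              --md-pages-per-cluster-ratio 300 aio_bdev lvs)    # 49 x 4 MiB data clusters
    lvol=$($RPC bdev_lvol_create -u "$lvs" lvol 150)            # 150 MiB lvol on the store

    truncate -s 400M "$AIO"                                     # grow the backing file
    $RPC bdev_aio_rescan aio_bdev                               # aio bdev re-reads its size
    $RPC bdev_lvol_grow_lvstore -u "$lvs"                       # lvstore now spans 99 clusters
    $RPC bdev_lvol_get_lvstores -u "$lvs" | jq -r '.[0].total_data_clusters'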
00:14:25.365 00:48:37 -- target/nvmf_lvs_grow.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:14:25.624 00:48:37 -- target/nvmf_lvs_grow.sh@94 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:14:25.883 00:14:25.883 real 0m17.691s 00:14:25.883 user 0m17.070s 00:14:25.883 sys 0m2.042s 00:14:25.883 00:48:38 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:14:25.883 00:48:38 -- common/autotest_common.sh@10 -- # set +x 00:14:25.883 ************************************ 00:14:25.883 END TEST lvs_grow_clean 00:14:25.883 ************************************ 00:14:25.883 00:48:38 -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_dirty lvs_grow dirty 00:14:25.883 00:48:38 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:14:25.883 00:48:38 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:14:25.883 00:48:38 -- common/autotest_common.sh@10 -- # set +x 00:14:25.883 ************************************ 00:14:25.883 START TEST lvs_grow_dirty 00:14:25.883 ************************************ 00:14:25.883 00:48:38 -- common/autotest_common.sh@1114 -- # lvs_grow dirty 00:14:25.883 00:48:38 -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:14:25.883 00:48:38 -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:14:25.883 00:48:38 -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:14:25.883 00:48:38 -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:14:25.883 00:48:38 -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:14:25.883 00:48:38 -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:14:25.883 00:48:38 -- target/nvmf_lvs_grow.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:14:25.883 00:48:38 -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:14:25.883 00:48:38 -- target/nvmf_lvs_grow.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:14:26.142 00:48:38 -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:14:26.142 00:48:38 -- target/nvmf_lvs_grow.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:14:26.401 00:48:38 -- target/nvmf_lvs_grow.sh@28 -- # lvs=f33a608e-0948-4000-b050-30b691dda0e7 00:14:26.401 00:48:38 -- target/nvmf_lvs_grow.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u f33a608e-0948-4000-b050-30b691dda0e7 00:14:26.401 00:48:38 -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:14:26.659 00:48:39 -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:14:26.659 00:48:39 -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:14:26.659 00:48:39 -- target/nvmf_lvs_grow.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u f33a608e-0948-4000-b050-30b691dda0e7 lvol 150 00:14:26.918 00:48:39 -- target/nvmf_lvs_grow.sh@33 -- # lvol=765b4283-708b-42d1-9ded-d4993081d62e 00:14:26.918 00:48:39 -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:14:26.918 00:48:39 -- target/nvmf_lvs_grow.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:14:27.177 [2024-12-03 00:48:39.541176] bdev_aio.c: 959:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name 
/home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:14:27.177 [2024-12-03 00:48:39.541237] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:14:27.177 true 00:14:27.177 00:48:39 -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:14:27.177 00:48:39 -- target/nvmf_lvs_grow.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u f33a608e-0948-4000-b050-30b691dda0e7 00:14:27.437 00:48:39 -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:14:27.437 00:48:39 -- target/nvmf_lvs_grow.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:14:27.696 00:48:40 -- target/nvmf_lvs_grow.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 765b4283-708b-42d1-9ded-d4993081d62e 00:14:27.955 00:48:40 -- target/nvmf_lvs_grow.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:14:28.214 00:48:40 -- target/nvmf_lvs_grow.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:14:28.214 00:48:40 -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=84357 00:14:28.214 00:48:40 -- target/nvmf_lvs_grow.sh@47 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:14:28.214 00:48:40 -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:14:28.214 00:48:40 -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 84357 /var/tmp/bdevperf.sock 00:14:28.214 00:48:40 -- common/autotest_common.sh@829 -- # '[' -z 84357 ']' 00:14:28.214 00:48:40 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:14:28.214 00:48:40 -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:28.214 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:14:28.214 00:48:40 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:14:28.214 00:48:40 -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:28.214 00:48:40 -- common/autotest_common.sh@10 -- # set +x 00:14:28.473 [2024-12-03 00:48:40.757196] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
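As in the clean variant, the 150 MiB lvol is served to bdevperf over NVMe/TCP rather than consumed in-process. Condensed, the export-and-attach path above amounts to roughly the following (a sketch; $lvol stands for the lvol UUID created above, and the waitforlisten/trap handling around bdevperf is omitted):

    RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

    # Target side (nvmf_tgt inside the nvmf_tgt_ns_spdk namespace): export the lvol.
    $RPC nvmf_create_transport -t tcp -o -u 8192
    $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 "$lvol"
    $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420

    # Initiator side: bdevperf exposes its own RPC socket; the controller is attached
    # over TCP and the workload is then kicked off via perform_tests.
    /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
        -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z &
    $RPC -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 \
        -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0
    /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py \
        -s /var/tmp/bdevperf.sock perform_tests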
00:14:28.473 [2024-12-03 00:48:40.757300] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84357 ] 00:14:28.473 [2024-12-03 00:48:40.885389] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:28.473 [2024-12-03 00:48:40.948836] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:14:29.410 00:48:41 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:29.410 00:48:41 -- common/autotest_common.sh@862 -- # return 0 00:14:29.410 00:48:41 -- target/nvmf_lvs_grow.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:14:29.669 Nvme0n1 00:14:29.669 00:48:41 -- target/nvmf_lvs_grow.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:14:29.669 [ 00:14:29.669 { 00:14:29.669 "aliases": [ 00:14:29.669 "765b4283-708b-42d1-9ded-d4993081d62e" 00:14:29.669 ], 00:14:29.669 "assigned_rate_limits": { 00:14:29.669 "r_mbytes_per_sec": 0, 00:14:29.669 "rw_ios_per_sec": 0, 00:14:29.669 "rw_mbytes_per_sec": 0, 00:14:29.669 "w_mbytes_per_sec": 0 00:14:29.669 }, 00:14:29.669 "block_size": 4096, 00:14:29.669 "claimed": false, 00:14:29.669 "driver_specific": { 00:14:29.669 "mp_policy": "active_passive", 00:14:29.669 "nvme": [ 00:14:29.669 { 00:14:29.669 "ctrlr_data": { 00:14:29.669 "ana_reporting": false, 00:14:29.669 "cntlid": 1, 00:14:29.669 "firmware_revision": "24.01.1", 00:14:29.669 "model_number": "SPDK bdev Controller", 00:14:29.669 "multi_ctrlr": true, 00:14:29.669 "oacs": { 00:14:29.669 "firmware": 0, 00:14:29.669 "format": 0, 00:14:29.669 "ns_manage": 0, 00:14:29.669 "security": 0 00:14:29.669 }, 00:14:29.669 "serial_number": "SPDK0", 00:14:29.669 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:14:29.669 "vendor_id": "0x8086" 00:14:29.669 }, 00:14:29.669 "ns_data": { 00:14:29.669 "can_share": true, 00:14:29.669 "id": 1 00:14:29.669 }, 00:14:29.669 "trid": { 00:14:29.669 "adrfam": "IPv4", 00:14:29.669 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:14:29.669 "traddr": "10.0.0.2", 00:14:29.669 "trsvcid": "4420", 00:14:29.669 "trtype": "TCP" 00:14:29.669 }, 00:14:29.669 "vs": { 00:14:29.669 "nvme_version": "1.3" 00:14:29.669 } 00:14:29.669 } 00:14:29.669 ] 00:14:29.669 }, 00:14:29.669 "name": "Nvme0n1", 00:14:29.669 "num_blocks": 38912, 00:14:29.669 "product_name": "NVMe disk", 00:14:29.669 "supported_io_types": { 00:14:29.669 "abort": true, 00:14:29.669 "compare": true, 00:14:29.669 "compare_and_write": true, 00:14:29.669 "flush": true, 00:14:29.669 "nvme_admin": true, 00:14:29.669 "nvme_io": true, 00:14:29.669 "read": true, 00:14:29.669 "reset": true, 00:14:29.669 "unmap": true, 00:14:29.669 "write": true, 00:14:29.669 "write_zeroes": true 00:14:29.669 }, 00:14:29.669 "uuid": "765b4283-708b-42d1-9ded-d4993081d62e", 00:14:29.669 "zoned": false 00:14:29.669 } 00:14:29.669 ] 00:14:29.669 00:48:42 -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=84399 00:14:29.669 00:48:42 -- target/nvmf_lvs_grow.sh@55 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:14:29.669 00:48:42 -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:14:29.928 Running I/O for 10 seconds... 
00:14:30.863 Latency(us) 00:14:30.863 [2024-12-03T00:48:43.378Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:30.863 [2024-12-03T00:48:43.378Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:30.863 Nvme0n1 : 1.00 9147.00 35.73 0.00 0.00 0.00 0.00 0.00 00:14:30.863 [2024-12-03T00:48:43.378Z] =================================================================================================================== 00:14:30.863 [2024-12-03T00:48:43.378Z] Total : 9147.00 35.73 0.00 0.00 0.00 0.00 0.00 00:14:30.863 00:14:31.800 00:48:44 -- target/nvmf_lvs_grow.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u f33a608e-0948-4000-b050-30b691dda0e7 00:14:31.800 [2024-12-03T00:48:44.315Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:31.800 Nvme0n1 : 2.00 8951.00 34.96 0.00 0.00 0.00 0.00 0.00 00:14:31.800 [2024-12-03T00:48:44.315Z] =================================================================================================================== 00:14:31.800 [2024-12-03T00:48:44.315Z] Total : 8951.00 34.96 0.00 0.00 0.00 0.00 0.00 00:14:31.800 00:14:32.059 true 00:14:32.059 00:48:44 -- target/nvmf_lvs_grow.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u f33a608e-0948-4000-b050-30b691dda0e7 00:14:32.059 00:48:44 -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:14:32.318 00:48:44 -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:14:32.318 00:48:44 -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:14:32.318 00:48:44 -- target/nvmf_lvs_grow.sh@65 -- # wait 84399 00:14:32.884 [2024-12-03T00:48:45.399Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:32.884 Nvme0n1 : 3.00 8850.33 34.57 0.00 0.00 0.00 0.00 0.00 00:14:32.884 [2024-12-03T00:48:45.399Z] =================================================================================================================== 00:14:32.884 [2024-12-03T00:48:45.399Z] Total : 8850.33 34.57 0.00 0.00 0.00 0.00 0.00 00:14:32.884 00:14:33.820 [2024-12-03T00:48:46.335Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:33.820 Nvme0n1 : 4.00 8478.75 33.12 0.00 0.00 0.00 0.00 0.00 00:14:33.820 [2024-12-03T00:48:46.335Z] =================================================================================================================== 00:14:33.820 [2024-12-03T00:48:46.335Z] Total : 8478.75 33.12 0.00 0.00 0.00 0.00 0.00 00:14:33.820 00:14:34.756 [2024-12-03T00:48:47.271Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:34.756 Nvme0n1 : 5.00 8503.00 33.21 0.00 0.00 0.00 0.00 0.00 00:14:34.756 [2024-12-03T00:48:47.271Z] =================================================================================================================== 00:14:34.756 [2024-12-03T00:48:47.271Z] Total : 8503.00 33.21 0.00 0.00 0.00 0.00 0.00 00:14:34.756 00:14:36.134 [2024-12-03T00:48:48.649Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:36.134 Nvme0n1 : 6.00 8523.00 33.29 0.00 0.00 0.00 0.00 0.00 00:14:36.134 [2024-12-03T00:48:48.649Z] =================================================================================================================== 00:14:36.134 [2024-12-03T00:48:48.649Z] Total : 8523.00 33.29 0.00 0.00 0.00 0.00 0.00 00:14:36.134 00:14:37.071 [2024-12-03T00:48:49.586Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 
00:14:37.071 Nvme0n1 : 7.00 8534.57 33.34 0.00 0.00 0.00 0.00 0.00 00:14:37.071 [2024-12-03T00:48:49.586Z] =================================================================================================================== 00:14:37.071 [2024-12-03T00:48:49.586Z] Total : 8534.57 33.34 0.00 0.00 0.00 0.00 0.00 00:14:37.071 00:14:38.008 [2024-12-03T00:48:50.523Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:38.008 Nvme0n1 : 8.00 8476.00 33.11 0.00 0.00 0.00 0.00 0.00 00:14:38.008 [2024-12-03T00:48:50.524Z] =================================================================================================================== 00:14:38.009 [2024-12-03T00:48:50.524Z] Total : 8476.00 33.11 0.00 0.00 0.00 0.00 0.00 00:14:38.009 00:14:38.945 [2024-12-03T00:48:51.460Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:38.945 Nvme0n1 : 9.00 8482.22 33.13 0.00 0.00 0.00 0.00 0.00 00:14:38.945 [2024-12-03T00:48:51.460Z] =================================================================================================================== 00:14:38.945 [2024-12-03T00:48:51.460Z] Total : 8482.22 33.13 0.00 0.00 0.00 0.00 0.00 00:14:38.945 00:14:39.880 [2024-12-03T00:48:52.395Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:39.880 Nvme0n1 : 10.00 8477.20 33.11 0.00 0.00 0.00 0.00 0.00 00:14:39.880 [2024-12-03T00:48:52.395Z] =================================================================================================================== 00:14:39.880 [2024-12-03T00:48:52.395Z] Total : 8477.20 33.11 0.00 0.00 0.00 0.00 0.00 00:14:39.880 00:14:39.880 00:14:39.880 Latency(us) 00:14:39.880 [2024-12-03T00:48:52.395Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:39.880 [2024-12-03T00:48:52.396Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:39.881 Nvme0n1 : 10.01 8481.08 33.13 0.00 0.00 15083.15 3708.74 189696.93 00:14:39.881 [2024-12-03T00:48:52.396Z] =================================================================================================================== 00:14:39.881 [2024-12-03T00:48:52.396Z] Total : 8481.08 33.13 0.00 0.00 15083.15 3708.74 189696.93 00:14:39.881 0 00:14:39.881 00:48:52 -- target/nvmf_lvs_grow.sh@66 -- # killprocess 84357 00:14:39.881 00:48:52 -- common/autotest_common.sh@936 -- # '[' -z 84357 ']' 00:14:39.881 00:48:52 -- common/autotest_common.sh@940 -- # kill -0 84357 00:14:39.881 00:48:52 -- common/autotest_common.sh@941 -- # uname 00:14:39.881 00:48:52 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:14:39.881 00:48:52 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 84357 00:14:39.881 00:48:52 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:14:39.881 00:48:52 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:14:39.881 killing process with pid 84357 00:14:39.881 00:48:52 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 84357' 00:14:39.881 00:48:52 -- common/autotest_common.sh@955 -- # kill 84357 00:14:39.881 Received shutdown signal, test time was about 10.000000 seconds 00:14:39.881 00:14:39.881 Latency(us) 00:14:39.881 [2024-12-03T00:48:52.396Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:39.881 [2024-12-03T00:48:52.396Z] =================================================================================================================== 00:14:39.881 [2024-12-03T00:48:52.396Z] Total : 0.00 0.00 0.00 
0.00 0.00 0.00 0.00 00:14:39.881 00:48:52 -- common/autotest_common.sh@960 -- # wait 84357 00:14:40.140 00:48:52 -- target/nvmf_lvs_grow.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:14:40.398 00:48:52 -- target/nvmf_lvs_grow.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u f33a608e-0948-4000-b050-30b691dda0e7 00:14:40.398 00:48:52 -- target/nvmf_lvs_grow.sh@69 -- # jq -r '.[0].free_clusters' 00:14:40.657 00:48:53 -- target/nvmf_lvs_grow.sh@69 -- # free_clusters=61 00:14:40.657 00:48:53 -- target/nvmf_lvs_grow.sh@71 -- # [[ dirty == \d\i\r\t\y ]] 00:14:40.657 00:48:53 -- target/nvmf_lvs_grow.sh@73 -- # kill -9 83768 00:14:40.657 00:48:53 -- target/nvmf_lvs_grow.sh@74 -- # wait 83768 00:14:40.657 /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 74: 83768 Killed "${NVMF_APP[@]}" "$@" 00:14:40.657 00:48:53 -- target/nvmf_lvs_grow.sh@74 -- # true 00:14:40.657 00:48:53 -- target/nvmf_lvs_grow.sh@75 -- # nvmfappstart -m 0x1 00:14:40.657 00:48:53 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:14:40.657 00:48:53 -- common/autotest_common.sh@722 -- # xtrace_disable 00:14:40.657 00:48:53 -- common/autotest_common.sh@10 -- # set +x 00:14:40.657 00:48:53 -- nvmf/common.sh@469 -- # nvmfpid=84555 00:14:40.658 00:48:53 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:14:40.658 00:48:53 -- nvmf/common.sh@470 -- # waitforlisten 84555 00:14:40.658 00:48:53 -- common/autotest_common.sh@829 -- # '[' -z 84555 ']' 00:14:40.658 00:48:53 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:40.658 00:48:53 -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:40.658 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:40.658 00:48:53 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:40.658 00:48:53 -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:40.658 00:48:53 -- common/autotest_common.sh@10 -- # set +x 00:14:40.658 [2024-12-03 00:48:53.089166] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:14:40.658 [2024-12-03 00:48:53.089266] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:40.916 [2024-12-03 00:48:53.223488] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:40.916 [2024-12-03 00:48:53.292100] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:14:40.916 [2024-12-03 00:48:53.292258] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:40.916 [2024-12-03 00:48:53.292273] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:40.916 [2024-12-03 00:48:53.292283] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
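This restart is what makes the dirty variant different: the original target (pid 83768) was killed with SIGKILL so the freshly grown lvstore is left dirty on disk, and a new target (pid 84555) is started in the same namespace. When the aio bdev is re-created just below, blobstore replays its metadata ("Performing recovery on blobstore") and the lvol comes back with the grown geometry intact. A minimal sketch of that sequence, reusing the RPC, AIO and lvs names from the earlier sketches (illustrative variables, not the real script's):

    # Simulate a crash while the lvstore metadata is dirty, then recover.
    kill -9 "$nvmfpid"                                           # old nvmf_tgt (83768 above)
    ip netns exec nvmf_tgt_ns_spdk \
        /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 &
    nvmfpid=$!

    # Re-creating the aio bdev triggers blobstore recovery and re-registers the lvol.
    $RPC bdev_aio_create "$AIO" aio_bdev 4096
    $RPC bdev_lvol_get_lvstores -u "$lvs" | jq -r '.[0].total_data_clusters'   # still 99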
00:14:40.916 [2024-12-03 00:48:53.292315] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:41.524 00:48:54 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:41.524 00:48:54 -- common/autotest_common.sh@862 -- # return 0 00:14:41.524 00:48:54 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:14:41.524 00:48:54 -- common/autotest_common.sh@728 -- # xtrace_disable 00:14:41.524 00:48:54 -- common/autotest_common.sh@10 -- # set +x 00:14:41.785 00:48:54 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:41.785 00:48:54 -- target/nvmf_lvs_grow.sh@76 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:14:41.785 [2024-12-03 00:48:54.258146] blobstore.c:4642:bs_recover: *NOTICE*: Performing recovery on blobstore 00:14:41.785 [2024-12-03 00:48:54.258540] blobstore.c:4589:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:14:41.785 [2024-12-03 00:48:54.258702] blobstore.c:4589:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:14:42.043 00:48:54 -- target/nvmf_lvs_grow.sh@76 -- # aio_bdev=aio_bdev 00:14:42.043 00:48:54 -- target/nvmf_lvs_grow.sh@77 -- # waitforbdev 765b4283-708b-42d1-9ded-d4993081d62e 00:14:42.043 00:48:54 -- common/autotest_common.sh@897 -- # local bdev_name=765b4283-708b-42d1-9ded-d4993081d62e 00:14:42.043 00:48:54 -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:14:42.043 00:48:54 -- common/autotest_common.sh@899 -- # local i 00:14:42.043 00:48:54 -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:14:42.043 00:48:54 -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:14:42.043 00:48:54 -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:14:42.043 00:48:54 -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 765b4283-708b-42d1-9ded-d4993081d62e -t 2000 00:14:42.301 [ 00:14:42.301 { 00:14:42.301 "aliases": [ 00:14:42.301 "lvs/lvol" 00:14:42.301 ], 00:14:42.301 "assigned_rate_limits": { 00:14:42.301 "r_mbytes_per_sec": 0, 00:14:42.301 "rw_ios_per_sec": 0, 00:14:42.301 "rw_mbytes_per_sec": 0, 00:14:42.301 "w_mbytes_per_sec": 0 00:14:42.301 }, 00:14:42.301 "block_size": 4096, 00:14:42.301 "claimed": false, 00:14:42.301 "driver_specific": { 00:14:42.301 "lvol": { 00:14:42.301 "base_bdev": "aio_bdev", 00:14:42.301 "clone": false, 00:14:42.301 "esnap_clone": false, 00:14:42.301 "lvol_store_uuid": "f33a608e-0948-4000-b050-30b691dda0e7", 00:14:42.301 "snapshot": false, 00:14:42.301 "thin_provision": false 00:14:42.301 } 00:14:42.301 }, 00:14:42.301 "name": "765b4283-708b-42d1-9ded-d4993081d62e", 00:14:42.301 "num_blocks": 38912, 00:14:42.301 "product_name": "Logical Volume", 00:14:42.301 "supported_io_types": { 00:14:42.301 "abort": false, 00:14:42.301 "compare": false, 00:14:42.301 "compare_and_write": false, 00:14:42.301 "flush": false, 00:14:42.301 "nvme_admin": false, 00:14:42.301 "nvme_io": false, 00:14:42.301 "read": true, 00:14:42.301 "reset": true, 00:14:42.302 "unmap": true, 00:14:42.302 "write": true, 00:14:42.302 "write_zeroes": true 00:14:42.302 }, 00:14:42.302 "uuid": "765b4283-708b-42d1-9ded-d4993081d62e", 00:14:42.302 "zoned": false 00:14:42.302 } 00:14:42.302 ] 00:14:42.302 00:48:54 -- common/autotest_common.sh@905 -- # return 0 00:14:42.302 00:48:54 -- target/nvmf_lvs_grow.sh@78 -- # jq -r '.[0].free_clusters' 00:14:42.302 00:48:54 -- target/nvmf_lvs_grow.sh@78 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u f33a608e-0948-4000-b050-30b691dda0e7 00:14:42.559 00:48:54 -- target/nvmf_lvs_grow.sh@78 -- # (( free_clusters == 61 )) 00:14:42.559 00:48:54 -- target/nvmf_lvs_grow.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u f33a608e-0948-4000-b050-30b691dda0e7 00:14:42.559 00:48:54 -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].total_data_clusters' 00:14:42.816 00:48:55 -- target/nvmf_lvs_grow.sh@79 -- # (( data_clusters == 99 )) 00:14:42.816 00:48:55 -- target/nvmf_lvs_grow.sh@83 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:14:43.073 [2024-12-03 00:48:55.439498] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:14:43.073 00:48:55 -- target/nvmf_lvs_grow.sh@84 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u f33a608e-0948-4000-b050-30b691dda0e7 00:14:43.073 00:48:55 -- common/autotest_common.sh@650 -- # local es=0 00:14:43.073 00:48:55 -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u f33a608e-0948-4000-b050-30b691dda0e7 00:14:43.073 00:48:55 -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:14:43.073 00:48:55 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:43.073 00:48:55 -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:14:43.073 00:48:55 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:43.073 00:48:55 -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:14:43.073 00:48:55 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:43.073 00:48:55 -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:14:43.073 00:48:55 -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:14:43.073 00:48:55 -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u f33a608e-0948-4000-b050-30b691dda0e7 00:14:43.330 2024/12/03 00:48:55 error on JSON-RPC call, method: bdev_lvol_get_lvstores, params: map[uuid:f33a608e-0948-4000-b050-30b691dda0e7], err: error received for bdev_lvol_get_lvstores method, err: Code=-19 Msg=No such device 00:14:43.330 request: 00:14:43.330 { 00:14:43.330 "method": "bdev_lvol_get_lvstores", 00:14:43.330 "params": { 00:14:43.330 "uuid": "f33a608e-0948-4000-b050-30b691dda0e7" 00:14:43.330 } 00:14:43.330 } 00:14:43.330 Got JSON-RPC error response 00:14:43.330 GoRPCClient: error on JSON-RPC call 00:14:43.330 00:48:55 -- common/autotest_common.sh@653 -- # es=1 00:14:43.330 00:48:55 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:14:43.330 00:48:55 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:14:43.330 00:48:55 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:14:43.330 00:48:55 -- target/nvmf_lvs_grow.sh@85 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:14:43.587 aio_bdev 00:14:43.587 00:48:55 -- target/nvmf_lvs_grow.sh@86 -- # waitforbdev 765b4283-708b-42d1-9ded-d4993081d62e 00:14:43.587 00:48:55 -- common/autotest_common.sh@897 -- # local bdev_name=765b4283-708b-42d1-9ded-d4993081d62e 00:14:43.587 00:48:55 -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:14:43.587 00:48:55 -- 
common/autotest_common.sh@899 -- # local i 00:14:43.587 00:48:55 -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:14:43.587 00:48:55 -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:14:43.587 00:48:55 -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:14:43.844 00:48:56 -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 765b4283-708b-42d1-9ded-d4993081d62e -t 2000 00:14:43.844 [ 00:14:43.844 { 00:14:43.844 "aliases": [ 00:14:43.844 "lvs/lvol" 00:14:43.844 ], 00:14:43.844 "assigned_rate_limits": { 00:14:43.844 "r_mbytes_per_sec": 0, 00:14:43.844 "rw_ios_per_sec": 0, 00:14:43.844 "rw_mbytes_per_sec": 0, 00:14:43.844 "w_mbytes_per_sec": 0 00:14:43.844 }, 00:14:43.844 "block_size": 4096, 00:14:43.844 "claimed": false, 00:14:43.844 "driver_specific": { 00:14:43.844 "lvol": { 00:14:43.844 "base_bdev": "aio_bdev", 00:14:43.844 "clone": false, 00:14:43.844 "esnap_clone": false, 00:14:43.844 "lvol_store_uuid": "f33a608e-0948-4000-b050-30b691dda0e7", 00:14:43.844 "snapshot": false, 00:14:43.844 "thin_provision": false 00:14:43.844 } 00:14:43.844 }, 00:14:43.844 "name": "765b4283-708b-42d1-9ded-d4993081d62e", 00:14:43.844 "num_blocks": 38912, 00:14:43.844 "product_name": "Logical Volume", 00:14:43.844 "supported_io_types": { 00:14:43.844 "abort": false, 00:14:43.844 "compare": false, 00:14:43.845 "compare_and_write": false, 00:14:43.845 "flush": false, 00:14:43.845 "nvme_admin": false, 00:14:43.845 "nvme_io": false, 00:14:43.845 "read": true, 00:14:43.845 "reset": true, 00:14:43.845 "unmap": true, 00:14:43.845 "write": true, 00:14:43.845 "write_zeroes": true 00:14:43.845 }, 00:14:43.845 "uuid": "765b4283-708b-42d1-9ded-d4993081d62e", 00:14:43.845 "zoned": false 00:14:43.845 } 00:14:43.845 ] 00:14:43.845 00:48:56 -- common/autotest_common.sh@905 -- # return 0 00:14:43.845 00:48:56 -- target/nvmf_lvs_grow.sh@87 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u f33a608e-0948-4000-b050-30b691dda0e7 00:14:43.845 00:48:56 -- target/nvmf_lvs_grow.sh@87 -- # jq -r '.[0].free_clusters' 00:14:44.101 00:48:56 -- target/nvmf_lvs_grow.sh@87 -- # (( free_clusters == 61 )) 00:14:44.101 00:48:56 -- target/nvmf_lvs_grow.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u f33a608e-0948-4000-b050-30b691dda0e7 00:14:44.101 00:48:56 -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].total_data_clusters' 00:14:44.358 00:48:56 -- target/nvmf_lvs_grow.sh@88 -- # (( data_clusters == 99 )) 00:14:44.358 00:48:56 -- target/nvmf_lvs_grow.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete 765b4283-708b-42d1-9ded-d4993081d62e 00:14:44.615 00:48:57 -- target/nvmf_lvs_grow.sh@92 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u f33a608e-0948-4000-b050-30b691dda0e7 00:14:44.872 00:48:57 -- target/nvmf_lvs_grow.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:14:45.129 00:48:57 -- target/nvmf_lvs_grow.sh@94 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:14:45.694 00:14:45.694 real 0m19.633s 00:14:45.694 user 0m39.875s 00:14:45.694 sys 0m8.335s 00:14:45.694 00:48:57 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:14:45.694 00:48:57 -- common/autotest_common.sh@10 -- # set +x 00:14:45.694 ************************************ 00:14:45.694 END TEST lvs_grow_dirty 00:14:45.694 ************************************ 00:14:45.694 00:48:57 -- target/nvmf_lvs_grow.sh@1 
-- # process_shm --id 0 00:14:45.694 00:48:57 -- common/autotest_common.sh@806 -- # type=--id 00:14:45.694 00:48:57 -- common/autotest_common.sh@807 -- # id=0 00:14:45.694 00:48:57 -- common/autotest_common.sh@808 -- # '[' --id = --pid ']' 00:14:45.694 00:48:57 -- common/autotest_common.sh@812 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:14:45.694 00:48:57 -- common/autotest_common.sh@812 -- # shm_files=nvmf_trace.0 00:14:45.694 00:48:57 -- common/autotest_common.sh@814 -- # [[ -z nvmf_trace.0 ]] 00:14:45.694 00:48:57 -- common/autotest_common.sh@818 -- # for n in $shm_files 00:14:45.694 00:48:57 -- common/autotest_common.sh@819 -- # tar -C /dev/shm/ -cvzf /home/vagrant/spdk_repo/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:14:45.694 nvmf_trace.0 00:14:45.694 00:48:57 -- common/autotest_common.sh@821 -- # return 0 00:14:45.694 00:48:57 -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:14:45.694 00:48:57 -- nvmf/common.sh@476 -- # nvmfcleanup 00:14:45.694 00:48:57 -- nvmf/common.sh@116 -- # sync 00:14:46.630 00:48:58 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:14:46.630 00:48:58 -- nvmf/common.sh@119 -- # set +e 00:14:46.630 00:48:58 -- nvmf/common.sh@120 -- # for i in {1..20} 00:14:46.630 00:48:58 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:14:46.630 rmmod nvme_tcp 00:14:46.630 rmmod nvme_fabrics 00:14:46.630 rmmod nvme_keyring 00:14:46.630 00:48:58 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:14:46.630 00:48:58 -- nvmf/common.sh@123 -- # set -e 00:14:46.630 00:48:58 -- nvmf/common.sh@124 -- # return 0 00:14:46.630 00:48:58 -- nvmf/common.sh@477 -- # '[' -n 84555 ']' 00:14:46.630 00:48:58 -- nvmf/common.sh@478 -- # killprocess 84555 00:14:46.630 00:48:58 -- common/autotest_common.sh@936 -- # '[' -z 84555 ']' 00:14:46.630 00:48:58 -- common/autotest_common.sh@940 -- # kill -0 84555 00:14:46.630 00:48:58 -- common/autotest_common.sh@941 -- # uname 00:14:46.630 00:48:58 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:14:46.630 00:48:58 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 84555 00:14:46.630 00:48:59 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:14:46.630 00:48:59 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:14:46.630 killing process with pid 84555 00:14:46.630 00:48:59 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 84555' 00:14:46.630 00:48:59 -- common/autotest_common.sh@955 -- # kill 84555 00:14:46.630 00:48:59 -- common/autotest_common.sh@960 -- # wait 84555 00:14:46.889 00:48:59 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:14:46.889 00:48:59 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:14:46.889 00:48:59 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:14:46.889 00:48:59 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:14:46.889 00:48:59 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:14:46.889 00:48:59 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:46.889 00:48:59 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:46.889 00:48:59 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:46.889 00:48:59 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:14:46.889 00:14:46.889 real 0m40.574s 00:14:46.889 user 1m3.686s 00:14:46.889 sys 0m11.932s 00:14:46.889 00:48:59 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:14:46.889 00:48:59 -- common/autotest_common.sh@10 -- # set +x 00:14:46.889 ************************************ 00:14:46.889 END TEST 
nvmf_lvs_grow 00:14:46.889 ************************************ 00:14:46.889 00:48:59 -- nvmf/nvmf.sh@49 -- # run_test nvmf_bdev_io_wait /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:14:46.889 00:48:59 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:14:46.889 00:48:59 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:14:46.889 00:48:59 -- common/autotest_common.sh@10 -- # set +x 00:14:46.889 ************************************ 00:14:46.889 START TEST nvmf_bdev_io_wait 00:14:46.889 ************************************ 00:14:46.889 00:48:59 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:14:47.149 * Looking for test storage... 00:14:47.149 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:14:47.149 00:48:59 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:14:47.149 00:48:59 -- common/autotest_common.sh@1690 -- # lcov --version 00:14:47.149 00:48:59 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:14:47.149 00:48:59 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:14:47.149 00:48:59 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:14:47.149 00:48:59 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:14:47.149 00:48:59 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:14:47.149 00:48:59 -- scripts/common.sh@335 -- # IFS=.-: 00:14:47.149 00:48:59 -- scripts/common.sh@335 -- # read -ra ver1 00:14:47.149 00:48:59 -- scripts/common.sh@336 -- # IFS=.-: 00:14:47.149 00:48:59 -- scripts/common.sh@336 -- # read -ra ver2 00:14:47.149 00:48:59 -- scripts/common.sh@337 -- # local 'op=<' 00:14:47.149 00:48:59 -- scripts/common.sh@339 -- # ver1_l=2 00:14:47.149 00:48:59 -- scripts/common.sh@340 -- # ver2_l=1 00:14:47.149 00:48:59 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:14:47.149 00:48:59 -- scripts/common.sh@343 -- # case "$op" in 00:14:47.149 00:48:59 -- scripts/common.sh@344 -- # : 1 00:14:47.149 00:48:59 -- scripts/common.sh@363 -- # (( v = 0 )) 00:14:47.149 00:48:59 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:14:47.149 00:48:59 -- scripts/common.sh@364 -- # decimal 1 00:14:47.149 00:48:59 -- scripts/common.sh@352 -- # local d=1 00:14:47.149 00:48:59 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:47.149 00:48:59 -- scripts/common.sh@354 -- # echo 1 00:14:47.149 00:48:59 -- scripts/common.sh@364 -- # ver1[v]=1 00:14:47.149 00:48:59 -- scripts/common.sh@365 -- # decimal 2 00:14:47.149 00:48:59 -- scripts/common.sh@352 -- # local d=2 00:14:47.149 00:48:59 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:47.149 00:48:59 -- scripts/common.sh@354 -- # echo 2 00:14:47.149 00:48:59 -- scripts/common.sh@365 -- # ver2[v]=2 00:14:47.149 00:48:59 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:14:47.149 00:48:59 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:14:47.149 00:48:59 -- scripts/common.sh@367 -- # return 0 00:14:47.149 00:48:59 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:47.149 00:48:59 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:14:47.149 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:47.149 --rc genhtml_branch_coverage=1 00:14:47.149 --rc genhtml_function_coverage=1 00:14:47.149 --rc genhtml_legend=1 00:14:47.149 --rc geninfo_all_blocks=1 00:14:47.149 --rc geninfo_unexecuted_blocks=1 00:14:47.149 00:14:47.149 ' 00:14:47.149 00:48:59 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:14:47.149 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:47.149 --rc genhtml_branch_coverage=1 00:14:47.149 --rc genhtml_function_coverage=1 00:14:47.149 --rc genhtml_legend=1 00:14:47.149 --rc geninfo_all_blocks=1 00:14:47.149 --rc geninfo_unexecuted_blocks=1 00:14:47.149 00:14:47.149 ' 00:14:47.149 00:48:59 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:14:47.149 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:47.149 --rc genhtml_branch_coverage=1 00:14:47.149 --rc genhtml_function_coverage=1 00:14:47.149 --rc genhtml_legend=1 00:14:47.149 --rc geninfo_all_blocks=1 00:14:47.149 --rc geninfo_unexecuted_blocks=1 00:14:47.149 00:14:47.149 ' 00:14:47.149 00:48:59 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:14:47.149 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:47.149 --rc genhtml_branch_coverage=1 00:14:47.149 --rc genhtml_function_coverage=1 00:14:47.149 --rc genhtml_legend=1 00:14:47.149 --rc geninfo_all_blocks=1 00:14:47.149 --rc geninfo_unexecuted_blocks=1 00:14:47.149 00:14:47.149 ' 00:14:47.149 00:48:59 -- target/bdev_io_wait.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:14:47.149 00:48:59 -- nvmf/common.sh@7 -- # uname -s 00:14:47.149 00:48:59 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:47.149 00:48:59 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:47.149 00:48:59 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:47.149 00:48:59 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:47.149 00:48:59 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:47.149 00:48:59 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:47.149 00:48:59 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:47.150 00:48:59 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:47.150 00:48:59 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:47.150 00:48:59 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:47.150 00:48:59 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:15939434-fa82-47b6-ae7c-b62ba203cee8 
00:14:47.150 00:48:59 -- nvmf/common.sh@18 -- # NVME_HOSTID=15939434-fa82-47b6-ae7c-b62ba203cee8 00:14:47.150 00:48:59 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:47.150 00:48:59 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:47.150 00:48:59 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:14:47.150 00:48:59 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:14:47.150 00:48:59 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:47.150 00:48:59 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:47.150 00:48:59 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:47.150 00:48:59 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:47.150 00:48:59 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:47.150 00:48:59 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:47.150 00:48:59 -- paths/export.sh@5 -- # export PATH 00:14:47.150 00:48:59 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:47.150 00:48:59 -- nvmf/common.sh@46 -- # : 0 00:14:47.150 00:48:59 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:14:47.150 00:48:59 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:14:47.150 00:48:59 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:14:47.150 00:48:59 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:47.150 00:48:59 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:47.150 00:48:59 -- nvmf/common.sh@32 -- # 
'[' -n '' ']' 00:14:47.150 00:48:59 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:14:47.150 00:48:59 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:14:47.150 00:48:59 -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:14:47.150 00:48:59 -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:14:47.150 00:48:59 -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:14:47.150 00:48:59 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:14:47.150 00:48:59 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:47.150 00:48:59 -- nvmf/common.sh@436 -- # prepare_net_devs 00:14:47.150 00:48:59 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:14:47.150 00:48:59 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:14:47.150 00:48:59 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:47.150 00:48:59 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:47.150 00:48:59 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:47.150 00:48:59 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:14:47.150 00:48:59 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:14:47.150 00:48:59 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:14:47.150 00:48:59 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:14:47.150 00:48:59 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:14:47.150 00:48:59 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:14:47.150 00:48:59 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:47.150 00:48:59 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:47.150 00:48:59 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:14:47.150 00:48:59 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:14:47.150 00:48:59 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:14:47.150 00:48:59 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:14:47.150 00:48:59 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:14:47.150 00:48:59 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:47.150 00:48:59 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:14:47.150 00:48:59 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:14:47.150 00:48:59 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:14:47.150 00:48:59 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:14:47.150 00:48:59 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:14:47.150 00:48:59 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:14:47.150 Cannot find device "nvmf_tgt_br" 00:14:47.150 00:48:59 -- nvmf/common.sh@154 -- # true 00:14:47.150 00:48:59 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:14:47.150 Cannot find device "nvmf_tgt_br2" 00:14:47.150 00:48:59 -- nvmf/common.sh@155 -- # true 00:14:47.150 00:48:59 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:14:47.150 00:48:59 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:14:47.150 Cannot find device "nvmf_tgt_br" 00:14:47.150 00:48:59 -- nvmf/common.sh@157 -- # true 00:14:47.150 00:48:59 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:14:47.150 Cannot find device "nvmf_tgt_br2" 00:14:47.150 00:48:59 -- nvmf/common.sh@158 -- # true 00:14:47.150 00:48:59 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:14:47.409 00:48:59 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:14:47.409 00:48:59 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:14:47.409 Cannot 
open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:47.409 00:48:59 -- nvmf/common.sh@161 -- # true 00:14:47.409 00:48:59 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:14:47.409 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:47.409 00:48:59 -- nvmf/common.sh@162 -- # true 00:14:47.409 00:48:59 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:14:47.409 00:48:59 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:14:47.409 00:48:59 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:14:47.409 00:48:59 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:14:47.409 00:48:59 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:14:47.409 00:48:59 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:14:47.409 00:48:59 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:14:47.409 00:48:59 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:14:47.409 00:48:59 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:14:47.409 00:48:59 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:14:47.409 00:48:59 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:14:47.409 00:48:59 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:14:47.409 00:48:59 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:14:47.409 00:48:59 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:14:47.409 00:48:59 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:14:47.409 00:48:59 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:14:47.409 00:48:59 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:14:47.409 00:48:59 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:14:47.409 00:48:59 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:14:47.409 00:48:59 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:14:47.409 00:48:59 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:14:47.409 00:48:59 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:14:47.409 00:48:59 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:14:47.409 00:48:59 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:14:47.409 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:47.409 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.056 ms 00:14:47.409 00:14:47.409 --- 10.0.0.2 ping statistics --- 00:14:47.409 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:47.409 rtt min/avg/max/mdev = 0.056/0.056/0.056/0.000 ms 00:14:47.409 00:48:59 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:14:47.669 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:14:47.669 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.061 ms 00:14:47.669 00:14:47.669 --- 10.0.0.3 ping statistics --- 00:14:47.669 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:47.669 rtt min/avg/max/mdev = 0.061/0.061/0.061/0.000 ms 00:14:47.669 00:48:59 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:14:47.669 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:14:47.669 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.034 ms 00:14:47.669 00:14:47.669 --- 10.0.0.1 ping statistics --- 00:14:47.669 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:47.669 rtt min/avg/max/mdev = 0.034/0.034/0.034/0.000 ms 00:14:47.669 00:48:59 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:47.669 00:48:59 -- nvmf/common.sh@421 -- # return 0 00:14:47.669 00:48:59 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:14:47.669 00:48:59 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:47.669 00:48:59 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:14:47.669 00:48:59 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:14:47.669 00:48:59 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:47.669 00:48:59 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:14:47.669 00:48:59 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:14:47.669 00:48:59 -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:14:47.669 00:48:59 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:14:47.669 00:48:59 -- common/autotest_common.sh@722 -- # xtrace_disable 00:14:47.669 00:48:59 -- common/autotest_common.sh@10 -- # set +x 00:14:47.669 00:48:59 -- nvmf/common.sh@469 -- # nvmfpid=84979 00:14:47.669 00:48:59 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:14:47.669 00:48:59 -- nvmf/common.sh@470 -- # waitforlisten 84979 00:14:47.669 00:48:59 -- common/autotest_common.sh@829 -- # '[' -z 84979 ']' 00:14:47.669 00:48:59 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:47.669 00:48:59 -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:47.669 00:48:59 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:47.669 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:47.669 00:48:59 -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:47.669 00:48:59 -- common/autotest_common.sh@10 -- # set +x 00:14:47.669 [2024-12-03 00:49:00.016918] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:14:47.669 [2024-12-03 00:49:00.017005] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:47.669 [2024-12-03 00:49:00.160035] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:14:47.928 [2024-12-03 00:49:00.254882] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:14:47.928 [2024-12-03 00:49:00.255098] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:47.928 [2024-12-03 00:49:00.255117] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:47.928 [2024-12-03 00:49:00.255130] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:14:47.928 [2024-12-03 00:49:00.255325] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:14:47.928 [2024-12-03 00:49:00.255396] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:14:47.928 [2024-12-03 00:49:00.255670] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:14:47.928 [2024-12-03 00:49:00.255684] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:48.495 00:49:00 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:48.495 00:49:00 -- common/autotest_common.sh@862 -- # return 0 00:14:48.495 00:49:00 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:14:48.495 00:49:00 -- common/autotest_common.sh@728 -- # xtrace_disable 00:14:48.495 00:49:00 -- common/autotest_common.sh@10 -- # set +x 00:14:48.753 00:49:01 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:48.753 00:49:01 -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:14:48.753 00:49:01 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:48.753 00:49:01 -- common/autotest_common.sh@10 -- # set +x 00:14:48.753 00:49:01 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:48.753 00:49:01 -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:14:48.753 00:49:01 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:48.753 00:49:01 -- common/autotest_common.sh@10 -- # set +x 00:14:48.753 00:49:01 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:48.753 00:49:01 -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:14:48.753 00:49:01 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:48.753 00:49:01 -- common/autotest_common.sh@10 -- # set +x 00:14:48.753 [2024-12-03 00:49:01.113725] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:48.753 00:49:01 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:48.753 00:49:01 -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:14:48.753 00:49:01 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:48.753 00:49:01 -- common/autotest_common.sh@10 -- # set +x 00:14:48.753 Malloc0 00:14:48.753 00:49:01 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:48.753 00:49:01 -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:14:48.753 00:49:01 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:48.753 00:49:01 -- common/autotest_common.sh@10 -- # set +x 00:14:48.753 00:49:01 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:48.753 00:49:01 -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:14:48.753 00:49:01 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:48.753 00:49:01 -- common/autotest_common.sh@10 -- # set +x 00:14:48.753 00:49:01 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:48.753 00:49:01 -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:48.754 00:49:01 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:48.754 00:49:01 -- common/autotest_common.sh@10 -- # set +x 00:14:48.754 [2024-12-03 00:49:01.182780] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:48.754 00:49:01 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:48.754 00:49:01 -- target/bdev_io_wait.sh@28 -- # WRITE_PID=85032 00:14:48.754 00:49:01 
-- target/bdev_io_wait.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:14:48.754 00:49:01 -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:14:48.754 00:49:01 -- target/bdev_io_wait.sh@30 -- # READ_PID=85034 00:14:48.754 00:49:01 -- nvmf/common.sh@520 -- # config=() 00:14:48.754 00:49:01 -- nvmf/common.sh@520 -- # local subsystem config 00:14:48.754 00:49:01 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:14:48.754 00:49:01 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:14:48.754 { 00:14:48.754 "params": { 00:14:48.754 "name": "Nvme$subsystem", 00:14:48.754 "trtype": "$TEST_TRANSPORT", 00:14:48.754 "traddr": "$NVMF_FIRST_TARGET_IP", 00:14:48.754 "adrfam": "ipv4", 00:14:48.754 "trsvcid": "$NVMF_PORT", 00:14:48.754 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:14:48.754 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:14:48.754 "hdgst": ${hdgst:-false}, 00:14:48.754 "ddgst": ${ddgst:-false} 00:14:48.754 }, 00:14:48.754 "method": "bdev_nvme_attach_controller" 00:14:48.754 } 00:14:48.754 EOF 00:14:48.754 )") 00:14:48.754 00:49:01 -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=85036 00:14:48.754 00:49:01 -- target/bdev_io_wait.sh@29 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:14:48.754 00:49:01 -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:14:48.754 00:49:01 -- nvmf/common.sh@520 -- # config=() 00:14:48.754 00:49:01 -- nvmf/common.sh@520 -- # local subsystem config 00:14:48.754 00:49:01 -- nvmf/common.sh@542 -- # cat 00:14:48.754 00:49:01 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:14:48.754 00:49:01 -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=85038 00:14:48.754 00:49:01 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:14:48.754 { 00:14:48.754 "params": { 00:14:48.754 "name": "Nvme$subsystem", 00:14:48.754 "trtype": "$TEST_TRANSPORT", 00:14:48.754 "traddr": "$NVMF_FIRST_TARGET_IP", 00:14:48.754 "adrfam": "ipv4", 00:14:48.754 "trsvcid": "$NVMF_PORT", 00:14:48.754 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:14:48.754 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:14:48.754 "hdgst": ${hdgst:-false}, 00:14:48.754 "ddgst": ${ddgst:-false} 00:14:48.754 }, 00:14:48.754 "method": "bdev_nvme_attach_controller" 00:14:48.754 } 00:14:48.754 EOF 00:14:48.754 )") 00:14:48.754 00:49:01 -- target/bdev_io_wait.sh@35 -- # sync 00:14:48.754 00:49:01 -- target/bdev_io_wait.sh@33 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:14:48.754 00:49:01 -- nvmf/common.sh@542 -- # cat 00:14:48.754 00:49:01 -- target/bdev_io_wait.sh@31 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:14:48.754 00:49:01 -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:14:48.754 00:49:01 -- nvmf/common.sh@520 -- # config=() 00:14:48.754 00:49:01 -- nvmf/common.sh@520 -- # local subsystem config 00:14:48.754 00:49:01 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:14:48.754 00:49:01 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:14:48.754 { 00:14:48.754 "params": { 00:14:48.754 "name": "Nvme$subsystem", 00:14:48.754 "trtype": "$TEST_TRANSPORT", 00:14:48.754 "traddr": "$NVMF_FIRST_TARGET_IP", 00:14:48.754 "adrfam": "ipv4", 00:14:48.754 "trsvcid": "$NVMF_PORT", 00:14:48.754 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 
00:14:48.754 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:14:48.754 "hdgst": ${hdgst:-false}, 00:14:48.754 "ddgst": ${ddgst:-false} 00:14:48.754 }, 00:14:48.754 "method": "bdev_nvme_attach_controller" 00:14:48.754 } 00:14:48.754 EOF 00:14:48.754 )") 00:14:48.754 00:49:01 -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:14:48.754 00:49:01 -- nvmf/common.sh@520 -- # config=() 00:14:48.754 00:49:01 -- nvmf/common.sh@520 -- # local subsystem config 00:14:48.754 00:49:01 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:14:48.754 00:49:01 -- nvmf/common.sh@544 -- # jq . 00:14:48.754 00:49:01 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:14:48.754 { 00:14:48.754 "params": { 00:14:48.754 "name": "Nvme$subsystem", 00:14:48.754 "trtype": "$TEST_TRANSPORT", 00:14:48.754 "traddr": "$NVMF_FIRST_TARGET_IP", 00:14:48.754 "adrfam": "ipv4", 00:14:48.754 "trsvcid": "$NVMF_PORT", 00:14:48.754 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:14:48.754 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:14:48.754 "hdgst": ${hdgst:-false}, 00:14:48.754 "ddgst": ${ddgst:-false} 00:14:48.754 }, 00:14:48.754 "method": "bdev_nvme_attach_controller" 00:14:48.754 } 00:14:48.754 EOF 00:14:48.754 )") 00:14:48.754 00:49:01 -- nvmf/common.sh@544 -- # jq . 00:14:48.754 00:49:01 -- nvmf/common.sh@542 -- # cat 00:14:48.754 00:49:01 -- nvmf/common.sh@542 -- # cat 00:14:48.754 00:49:01 -- nvmf/common.sh@545 -- # IFS=, 00:14:48.754 00:49:01 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:14:48.754 "params": { 00:14:48.754 "name": "Nvme1", 00:14:48.754 "trtype": "tcp", 00:14:48.754 "traddr": "10.0.0.2", 00:14:48.754 "adrfam": "ipv4", 00:14:48.754 "trsvcid": "4420", 00:14:48.754 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:14:48.754 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:14:48.754 "hdgst": false, 00:14:48.754 "ddgst": false 00:14:48.754 }, 00:14:48.754 "method": "bdev_nvme_attach_controller" 00:14:48.754 }' 00:14:48.754 00:49:01 -- nvmf/common.sh@545 -- # IFS=, 00:14:48.754 00:49:01 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:14:48.754 "params": { 00:14:48.754 "name": "Nvme1", 00:14:48.754 "trtype": "tcp", 00:14:48.754 "traddr": "10.0.0.2", 00:14:48.754 "adrfam": "ipv4", 00:14:48.754 "trsvcid": "4420", 00:14:48.754 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:14:48.754 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:14:48.754 "hdgst": false, 00:14:48.754 "ddgst": false 00:14:48.754 }, 00:14:48.754 "method": "bdev_nvme_attach_controller" 00:14:48.754 }' 00:14:48.754 00:49:01 -- nvmf/common.sh@544 -- # jq . 00:14:48.754 00:49:01 -- nvmf/common.sh@545 -- # IFS=, 00:14:48.754 00:49:01 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:14:48.754 "params": { 00:14:48.754 "name": "Nvme1", 00:14:48.754 "trtype": "tcp", 00:14:48.754 "traddr": "10.0.0.2", 00:14:48.754 "adrfam": "ipv4", 00:14:48.754 "trsvcid": "4420", 00:14:48.754 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:14:48.754 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:14:48.754 "hdgst": false, 00:14:48.754 "ddgst": false 00:14:48.754 }, 00:14:48.754 "method": "bdev_nvme_attach_controller" 00:14:48.754 }' 00:14:48.754 00:49:01 -- nvmf/common.sh@544 -- # jq . 
00:14:48.754 00:49:01 -- nvmf/common.sh@545 -- # IFS=, 00:14:48.754 00:49:01 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:14:48.754 "params": { 00:14:48.754 "name": "Nvme1", 00:14:48.754 "trtype": "tcp", 00:14:48.754 "traddr": "10.0.0.2", 00:14:48.754 "adrfam": "ipv4", 00:14:48.754 "trsvcid": "4420", 00:14:48.754 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:14:48.754 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:14:48.754 "hdgst": false, 00:14:48.754 "ddgst": false 00:14:48.754 }, 00:14:48.754 "method": "bdev_nvme_attach_controller" 00:14:48.754 }' 00:14:48.754 [2024-12-03 00:49:01.243325] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:14:48.754 [2024-12-03 00:49:01.243424] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:14:48.754 [2024-12-03 00:49:01.261004] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:14:48.754 [2024-12-03 00:49:01.261502] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:14:48.754 [2024-12-03 00:49:01.262491] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:14:48.754 [2024-12-03 00:49:01.262576] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:14:49.013 00:49:01 -- target/bdev_io_wait.sh@37 -- # wait 85032 00:14:49.013 [2024-12-03 00:49:01.280801] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:14:49.013 [2024-12-03 00:49:01.280908] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:14:49.013 [2024-12-03 00:49:01.454523] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:49.013 [2024-12-03 00:49:01.524810] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:14:49.271 [2024-12-03 00:49:01.563277] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:49.271 [2024-12-03 00:49:01.629101] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:49.271 [2024-12-03 00:49:01.641944] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5 00:14:49.271 [2024-12-03 00:49:01.705141] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 7 00:14:49.271 [2024-12-03 00:49:01.708784] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:49.271 Running I/O for 1 seconds... 00:14:49.271 Running I/O for 1 seconds... 00:14:49.530 [2024-12-03 00:49:01.803531] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6 00:14:49.530 Running I/O for 1 seconds... 00:14:49.530 Running I/O for 1 seconds... 
00:14:50.466 00:14:50.466 Latency(us) 00:14:50.466 [2024-12-03T00:49:02.981Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:50.466 [2024-12-03T00:49:02.981Z] Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096) 00:14:50.466 Nvme1n1 : 1.01 8910.39 34.81 0.00 0.00 14305.23 7626.01 20852.36 00:14:50.466 [2024-12-03T00:49:02.981Z] =================================================================================================================== 00:14:50.466 [2024-12-03T00:49:02.981Z] Total : 8910.39 34.81 0.00 0.00 14305.23 7626.01 20852.36 00:14:50.466 00:14:50.466 Latency(us) 00:14:50.466 [2024-12-03T00:49:02.981Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:50.466 [2024-12-03T00:49:02.981Z] Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096) 00:14:50.466 Nvme1n1 : 1.01 5579.75 21.80 0.00 0.00 22774.81 6136.55 26810.18 00:14:50.466 [2024-12-03T00:49:02.981Z] =================================================================================================================== 00:14:50.466 [2024-12-03T00:49:02.981Z] Total : 5579.75 21.80 0.00 0.00 22774.81 6136.55 26810.18 00:14:50.466 00:14:50.466 Latency(us) 00:14:50.466 [2024-12-03T00:49:02.981Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:50.466 [2024-12-03T00:49:02.981Z] Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096) 00:14:50.466 Nvme1n1 : 1.01 7223.77 28.22 0.00 0.00 17652.75 2949.12 26095.24 00:14:50.466 [2024-12-03T00:49:02.981Z] =================================================================================================================== 00:14:50.466 [2024-12-03T00:49:02.982Z] Total : 7223.77 28.22 0.00 0.00 17652.75 2949.12 26095.24 00:14:50.467 00:49:02 -- target/bdev_io_wait.sh@38 -- # wait 85034 00:14:50.726 00:14:50.726 Latency(us) 00:14:50.726 [2024-12-03T00:49:03.241Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:50.726 [2024-12-03T00:49:03.241Z] Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096) 00:14:50.726 Nvme1n1 : 1.00 236775.36 924.90 0.00 0.00 538.28 218.76 845.27 00:14:50.726 [2024-12-03T00:49:03.241Z] =================================================================================================================== 00:14:50.726 [2024-12-03T00:49:03.241Z] Total : 236775.36 924.90 0.00 0.00 538.28 218.76 845.27 00:14:50.726 00:49:03 -- target/bdev_io_wait.sh@39 -- # wait 85036 00:14:50.985 00:49:03 -- target/bdev_io_wait.sh@40 -- # wait 85038 00:14:50.985 00:49:03 -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:50.985 00:49:03 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:50.985 00:49:03 -- common/autotest_common.sh@10 -- # set +x 00:14:50.985 00:49:03 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:50.985 00:49:03 -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:14:50.985 00:49:03 -- target/bdev_io_wait.sh@46 -- # nvmftestfini 00:14:50.985 00:49:03 -- nvmf/common.sh@476 -- # nvmfcleanup 00:14:50.985 00:49:03 -- nvmf/common.sh@116 -- # sync 00:14:50.985 00:49:03 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:14:50.985 00:49:03 -- nvmf/common.sh@119 -- # set +e 00:14:50.985 00:49:03 -- nvmf/common.sh@120 -- # for i in {1..20} 00:14:50.985 00:49:03 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:14:50.985 rmmod nvme_tcp 00:14:50.985 rmmod nvme_fabrics 00:14:50.985 rmmod nvme_keyring 00:14:50.985 00:49:03 -- 
nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:14:50.985 00:49:03 -- nvmf/common.sh@123 -- # set -e 00:14:50.985 00:49:03 -- nvmf/common.sh@124 -- # return 0 00:14:50.985 00:49:03 -- nvmf/common.sh@477 -- # '[' -n 84979 ']' 00:14:50.985 00:49:03 -- nvmf/common.sh@478 -- # killprocess 84979 00:14:50.985 00:49:03 -- common/autotest_common.sh@936 -- # '[' -z 84979 ']' 00:14:50.985 00:49:03 -- common/autotest_common.sh@940 -- # kill -0 84979 00:14:50.985 00:49:03 -- common/autotest_common.sh@941 -- # uname 00:14:50.985 00:49:03 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:14:50.985 00:49:03 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 84979 00:14:50.985 00:49:03 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:14:50.985 00:49:03 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:14:50.985 killing process with pid 84979 00:14:50.985 00:49:03 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 84979' 00:14:50.985 00:49:03 -- common/autotest_common.sh@955 -- # kill 84979 00:14:50.985 00:49:03 -- common/autotest_common.sh@960 -- # wait 84979 00:14:51.244 00:49:03 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:14:51.244 00:49:03 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:14:51.244 00:49:03 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:14:51.244 00:49:03 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:14:51.244 00:49:03 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:14:51.244 00:49:03 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:51.244 00:49:03 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:51.244 00:49:03 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:51.244 00:49:03 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:14:51.244 00:14:51.244 real 0m4.349s 00:14:51.244 user 0m18.549s 00:14:51.244 sys 0m2.169s 00:14:51.244 00:49:03 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:14:51.244 00:49:03 -- common/autotest_common.sh@10 -- # set +x 00:14:51.244 ************************************ 00:14:51.244 END TEST nvmf_bdev_io_wait 00:14:51.244 ************************************ 00:14:51.244 00:49:03 -- nvmf/nvmf.sh@50 -- # run_test nvmf_queue_depth /home/vagrant/spdk_repo/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:14:51.244 00:49:03 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:14:51.244 00:49:03 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:14:51.244 00:49:03 -- common/autotest_common.sh@10 -- # set +x 00:14:51.503 ************************************ 00:14:51.503 START TEST nvmf_queue_depth 00:14:51.503 ************************************ 00:14:51.503 00:49:03 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:14:51.503 * Looking for test storage... 
00:14:51.503 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:14:51.503 00:49:03 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:14:51.503 00:49:03 -- common/autotest_common.sh@1690 -- # lcov --version 00:14:51.503 00:49:03 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:14:51.503 00:49:03 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:14:51.503 00:49:03 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:14:51.503 00:49:03 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:14:51.503 00:49:03 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:14:51.503 00:49:03 -- scripts/common.sh@335 -- # IFS=.-: 00:14:51.503 00:49:03 -- scripts/common.sh@335 -- # read -ra ver1 00:14:51.503 00:49:03 -- scripts/common.sh@336 -- # IFS=.-: 00:14:51.503 00:49:03 -- scripts/common.sh@336 -- # read -ra ver2 00:14:51.503 00:49:03 -- scripts/common.sh@337 -- # local 'op=<' 00:14:51.503 00:49:03 -- scripts/common.sh@339 -- # ver1_l=2 00:14:51.503 00:49:03 -- scripts/common.sh@340 -- # ver2_l=1 00:14:51.503 00:49:03 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:14:51.503 00:49:03 -- scripts/common.sh@343 -- # case "$op" in 00:14:51.503 00:49:03 -- scripts/common.sh@344 -- # : 1 00:14:51.503 00:49:03 -- scripts/common.sh@363 -- # (( v = 0 )) 00:14:51.503 00:49:03 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:14:51.503 00:49:03 -- scripts/common.sh@364 -- # decimal 1 00:14:51.503 00:49:03 -- scripts/common.sh@352 -- # local d=1 00:14:51.503 00:49:03 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:51.503 00:49:03 -- scripts/common.sh@354 -- # echo 1 00:14:51.503 00:49:03 -- scripts/common.sh@364 -- # ver1[v]=1 00:14:51.503 00:49:03 -- scripts/common.sh@365 -- # decimal 2 00:14:51.503 00:49:03 -- scripts/common.sh@352 -- # local d=2 00:14:51.503 00:49:03 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:51.503 00:49:03 -- scripts/common.sh@354 -- # echo 2 00:14:51.503 00:49:03 -- scripts/common.sh@365 -- # ver2[v]=2 00:14:51.503 00:49:03 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:14:51.503 00:49:03 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:14:51.503 00:49:03 -- scripts/common.sh@367 -- # return 0 00:14:51.503 00:49:03 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:51.504 00:49:03 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:14:51.504 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:51.504 --rc genhtml_branch_coverage=1 00:14:51.504 --rc genhtml_function_coverage=1 00:14:51.504 --rc genhtml_legend=1 00:14:51.504 --rc geninfo_all_blocks=1 00:14:51.504 --rc geninfo_unexecuted_blocks=1 00:14:51.504 00:14:51.504 ' 00:14:51.504 00:49:03 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:14:51.504 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:51.504 --rc genhtml_branch_coverage=1 00:14:51.504 --rc genhtml_function_coverage=1 00:14:51.504 --rc genhtml_legend=1 00:14:51.504 --rc geninfo_all_blocks=1 00:14:51.504 --rc geninfo_unexecuted_blocks=1 00:14:51.504 00:14:51.504 ' 00:14:51.504 00:49:03 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:14:51.504 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:51.504 --rc genhtml_branch_coverage=1 00:14:51.504 --rc genhtml_function_coverage=1 00:14:51.504 --rc genhtml_legend=1 00:14:51.504 --rc geninfo_all_blocks=1 00:14:51.504 --rc geninfo_unexecuted_blocks=1 00:14:51.504 00:14:51.504 ' 00:14:51.504 
00:49:03 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:14:51.504 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:51.504 --rc genhtml_branch_coverage=1 00:14:51.504 --rc genhtml_function_coverage=1 00:14:51.504 --rc genhtml_legend=1 00:14:51.504 --rc geninfo_all_blocks=1 00:14:51.504 --rc geninfo_unexecuted_blocks=1 00:14:51.504 00:14:51.504 ' 00:14:51.504 00:49:03 -- target/queue_depth.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:14:51.504 00:49:03 -- nvmf/common.sh@7 -- # uname -s 00:14:51.504 00:49:03 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:51.504 00:49:03 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:51.504 00:49:03 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:51.504 00:49:03 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:51.504 00:49:03 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:51.504 00:49:03 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:51.504 00:49:03 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:51.504 00:49:03 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:51.504 00:49:03 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:51.504 00:49:03 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:51.504 00:49:03 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:15939434-fa82-47b6-ae7c-b62ba203cee8 00:14:51.504 00:49:03 -- nvmf/common.sh@18 -- # NVME_HOSTID=15939434-fa82-47b6-ae7c-b62ba203cee8 00:14:51.504 00:49:03 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:51.504 00:49:03 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:51.504 00:49:03 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:14:51.504 00:49:03 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:14:51.504 00:49:03 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:51.504 00:49:03 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:51.504 00:49:03 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:51.504 00:49:03 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:51.504 00:49:03 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:51.504 00:49:03 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:51.504 00:49:03 -- paths/export.sh@5 -- # export PATH 00:14:51.504 00:49:03 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:51.504 00:49:03 -- nvmf/common.sh@46 -- # : 0 00:14:51.504 00:49:03 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:14:51.504 00:49:03 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:14:51.504 00:49:03 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:14:51.504 00:49:03 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:51.504 00:49:03 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:51.504 00:49:03 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:14:51.504 00:49:03 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:14:51.504 00:49:03 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:14:51.504 00:49:03 -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:14:51.504 00:49:03 -- target/queue_depth.sh@15 -- # MALLOC_BLOCK_SIZE=512 00:14:51.504 00:49:03 -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:14:51.504 00:49:03 -- target/queue_depth.sh@19 -- # nvmftestinit 00:14:51.504 00:49:03 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:14:51.504 00:49:03 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:51.504 00:49:03 -- nvmf/common.sh@436 -- # prepare_net_devs 00:14:51.504 00:49:03 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:14:51.504 00:49:03 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:14:51.504 00:49:03 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:51.504 00:49:03 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:51.504 00:49:03 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:51.504 00:49:03 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:14:51.504 00:49:03 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:14:51.504 00:49:03 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:14:51.504 00:49:03 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:14:51.504 00:49:03 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:14:51.504 00:49:03 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:14:51.504 00:49:03 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:51.504 00:49:03 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:51.504 00:49:03 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:14:51.504 00:49:03 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:14:51.504 00:49:03 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:14:51.504 00:49:03 -- 
nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:14:51.504 00:49:03 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:14:51.504 00:49:03 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:51.504 00:49:03 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:14:51.504 00:49:03 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:14:51.504 00:49:03 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:14:51.504 00:49:03 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:14:51.504 00:49:03 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:14:51.504 00:49:03 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:14:51.504 Cannot find device "nvmf_tgt_br" 00:14:51.504 00:49:03 -- nvmf/common.sh@154 -- # true 00:14:51.504 00:49:03 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:14:51.504 Cannot find device "nvmf_tgt_br2" 00:14:51.504 00:49:03 -- nvmf/common.sh@155 -- # true 00:14:51.504 00:49:03 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:14:51.504 00:49:03 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:14:51.504 Cannot find device "nvmf_tgt_br" 00:14:51.504 00:49:04 -- nvmf/common.sh@157 -- # true 00:14:51.504 00:49:04 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:14:51.504 Cannot find device "nvmf_tgt_br2" 00:14:51.504 00:49:04 -- nvmf/common.sh@158 -- # true 00:14:51.504 00:49:04 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:14:51.768 00:49:04 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:14:51.768 00:49:04 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:14:51.768 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:51.768 00:49:04 -- nvmf/common.sh@161 -- # true 00:14:51.768 00:49:04 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:14:51.768 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:51.768 00:49:04 -- nvmf/common.sh@162 -- # true 00:14:51.768 00:49:04 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:14:51.768 00:49:04 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:14:51.768 00:49:04 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:14:51.768 00:49:04 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:14:51.768 00:49:04 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:14:51.768 00:49:04 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:14:51.768 00:49:04 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:14:51.768 00:49:04 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:14:51.768 00:49:04 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:14:51.768 00:49:04 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:14:51.768 00:49:04 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:14:51.768 00:49:04 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:14:51.768 00:49:04 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:14:51.768 00:49:04 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:14:51.768 00:49:04 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip 
link set nvmf_tgt_if2 up 00:14:51.768 00:49:04 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:14:51.768 00:49:04 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:14:51.768 00:49:04 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:14:51.768 00:49:04 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:14:51.768 00:49:04 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:14:51.768 00:49:04 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:14:51.768 00:49:04 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:14:51.768 00:49:04 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:14:51.768 00:49:04 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:14:51.768 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:51.768 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.068 ms 00:14:51.768 00:14:51.768 --- 10.0.0.2 ping statistics --- 00:14:51.768 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:51.768 rtt min/avg/max/mdev = 0.068/0.068/0.068/0.000 ms 00:14:51.768 00:49:04 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:14:51.768 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:14:51.768 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.063 ms 00:14:51.768 00:14:51.768 --- 10.0.0.3 ping statistics --- 00:14:51.768 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:51.768 rtt min/avg/max/mdev = 0.063/0.063/0.063/0.000 ms 00:14:51.768 00:49:04 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:14:51.768 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:14:51.768 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.019 ms 00:14:51.768 00:14:51.768 --- 10.0.0.1 ping statistics --- 00:14:51.768 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:51.768 rtt min/avg/max/mdev = 0.019/0.019/0.019/0.000 ms 00:14:51.768 00:49:04 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:51.768 00:49:04 -- nvmf/common.sh@421 -- # return 0 00:14:51.768 00:49:04 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:14:51.768 00:49:04 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:51.768 00:49:04 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:14:51.768 00:49:04 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:14:51.768 00:49:04 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:51.768 00:49:04 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:14:51.768 00:49:04 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:14:52.027 00:49:04 -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:14:52.027 00:49:04 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:14:52.027 00:49:04 -- common/autotest_common.sh@722 -- # xtrace_disable 00:14:52.027 00:49:04 -- common/autotest_common.sh@10 -- # set +x 00:14:52.027 00:49:04 -- nvmf/common.sh@469 -- # nvmfpid=85279 00:14:52.027 00:49:04 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:14:52.027 00:49:04 -- nvmf/common.sh@470 -- # waitforlisten 85279 00:14:52.027 00:49:04 -- common/autotest_common.sh@829 -- # '[' -z 85279 ']' 00:14:52.027 00:49:04 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:52.027 00:49:04 -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:52.027 00:49:04 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up 
and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:52.027 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:52.027 00:49:04 -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:52.027 00:49:04 -- common/autotest_common.sh@10 -- # set +x 00:14:52.027 [2024-12-03 00:49:04.353906] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:14:52.027 [2024-12-03 00:49:04.353991] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:52.027 [2024-12-03 00:49:04.497112] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:52.285 [2024-12-03 00:49:04.568454] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:14:52.285 [2024-12-03 00:49:04.568629] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:52.285 [2024-12-03 00:49:04.568646] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:52.285 [2024-12-03 00:49:04.568658] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:52.285 [2024-12-03 00:49:04.568692] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:14:53.218 00:49:05 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:53.218 00:49:05 -- common/autotest_common.sh@862 -- # return 0 00:14:53.218 00:49:05 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:14:53.218 00:49:05 -- common/autotest_common.sh@728 -- # xtrace_disable 00:14:53.218 00:49:05 -- common/autotest_common.sh@10 -- # set +x 00:14:53.218 00:49:05 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:53.218 00:49:05 -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:14:53.218 00:49:05 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:53.218 00:49:05 -- common/autotest_common.sh@10 -- # set +x 00:14:53.218 [2024-12-03 00:49:05.451399] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:53.218 00:49:05 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:53.218 00:49:05 -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:14:53.218 00:49:05 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:53.218 00:49:05 -- common/autotest_common.sh@10 -- # set +x 00:14:53.218 Malloc0 00:14:53.218 00:49:05 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:53.218 00:49:05 -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:14:53.218 00:49:05 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:53.218 00:49:05 -- common/autotest_common.sh@10 -- # set +x 00:14:53.218 00:49:05 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:53.218 00:49:05 -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:14:53.218 00:49:05 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:53.218 00:49:05 -- common/autotest_common.sh@10 -- # set +x 00:14:53.218 00:49:05 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:53.218 00:49:05 -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:53.218 00:49:05 -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:14:53.218 00:49:05 -- common/autotest_common.sh@10 -- # set +x 00:14:53.218 [2024-12-03 00:49:05.513716] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:53.218 00:49:05 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:53.218 00:49:05 -- target/queue_depth.sh@30 -- # bdevperf_pid=85329 00:14:53.218 00:49:05 -- target/queue_depth.sh@29 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:14:53.218 00:49:05 -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:14:53.218 00:49:05 -- target/queue_depth.sh@33 -- # waitforlisten 85329 /var/tmp/bdevperf.sock 00:14:53.218 00:49:05 -- common/autotest_common.sh@829 -- # '[' -z 85329 ']' 00:14:53.218 00:49:05 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:14:53.218 00:49:05 -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:53.218 00:49:05 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:14:53.218 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:14:53.218 00:49:05 -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:53.218 00:49:05 -- common/autotest_common.sh@10 -- # set +x 00:14:53.218 [2024-12-03 00:49:05.573939] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:14:53.218 [2024-12-03 00:49:05.574030] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85329 ] 00:14:53.218 [2024-12-03 00:49:05.713885] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:53.475 [2024-12-03 00:49:05.789918] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:54.408 00:49:06 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:54.408 00:49:06 -- common/autotest_common.sh@862 -- # return 0 00:14:54.408 00:49:06 -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:14:54.408 00:49:06 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:54.408 00:49:06 -- common/autotest_common.sh@10 -- # set +x 00:14:54.408 NVMe0n1 00:14:54.408 00:49:06 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:54.408 00:49:06 -- target/queue_depth.sh@35 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:14:54.408 Running I/O for 10 seconds... 
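The ten-second bdevperf pass launched above is the core of the queue_depth test: a Malloc-backed namespace exported over NVMe/TCP and driven at queue depth 1024. A minimal standalone sketch of that sequence, condensed from the commands visible in this trace, is shown here for reference only; it is not part of the captured output, it assumes a repo-root working directory with a finished ./build, and it omits the veth/namespace plumbing (nvmf_tgt_ns_spdk) that the harness wraps around the target:

  # illustrative sketch only -- paths relative to an SPDK checkout are an assumption,
  # and the netns/veth setup performed by nvmf_veth_init is intentionally left out
  ./build/bin/nvmf_tgt -m 0x2 &
  ./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
  ./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
  ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  # bdevperf acts as the initiator: attach the remote controller through its own RPC socket,
  # then trigger the -q 1024 / -o 4096 / -w verify workload for 10 seconds
  ./build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 &
  ./scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
  ./examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests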
00:15:04.374 00:15:04.374 Latency(us) 00:15:04.374 [2024-12-03T00:49:16.889Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:04.374 [2024-12-03T00:49:16.889Z] Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:15:04.374 Verification LBA range: start 0x0 length 0x4000 00:15:04.374 NVMe0n1 : 10.05 17258.37 67.42 0.00 0.00 59153.56 11558.17 46947.61 00:15:04.374 [2024-12-03T00:49:16.889Z] =================================================================================================================== 00:15:04.374 [2024-12-03T00:49:16.889Z] Total : 17258.37 67.42 0.00 0.00 59153.56 11558.17 46947.61 00:15:04.374 0 00:15:04.374 00:49:16 -- target/queue_depth.sh@39 -- # killprocess 85329 00:15:04.374 00:49:16 -- common/autotest_common.sh@936 -- # '[' -z 85329 ']' 00:15:04.374 00:49:16 -- common/autotest_common.sh@940 -- # kill -0 85329 00:15:04.374 00:49:16 -- common/autotest_common.sh@941 -- # uname 00:15:04.374 00:49:16 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:15:04.374 00:49:16 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 85329 00:15:04.374 00:49:16 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:15:04.374 00:49:16 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:15:04.374 killing process with pid 85329 00:15:04.374 00:49:16 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 85329' 00:15:04.374 Received shutdown signal, test time was about 10.000000 seconds 00:15:04.374 00:15:04.374 Latency(us) 00:15:04.374 [2024-12-03T00:49:16.889Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:04.374 [2024-12-03T00:49:16.889Z] =================================================================================================================== 00:15:04.374 [2024-12-03T00:49:16.889Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:15:04.374 00:49:16 -- common/autotest_common.sh@955 -- # kill 85329 00:15:04.374 00:49:16 -- common/autotest_common.sh@960 -- # wait 85329 00:15:04.632 00:49:17 -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:15:04.632 00:49:17 -- target/queue_depth.sh@43 -- # nvmftestfini 00:15:04.632 00:49:17 -- nvmf/common.sh@476 -- # nvmfcleanup 00:15:04.632 00:49:17 -- nvmf/common.sh@116 -- # sync 00:15:04.892 00:49:17 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:15:04.892 00:49:17 -- nvmf/common.sh@119 -- # set +e 00:15:04.892 00:49:17 -- nvmf/common.sh@120 -- # for i in {1..20} 00:15:04.892 00:49:17 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:15:04.892 rmmod nvme_tcp 00:15:04.892 rmmod nvme_fabrics 00:15:04.892 rmmod nvme_keyring 00:15:04.892 00:49:17 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:15:04.892 00:49:17 -- nvmf/common.sh@123 -- # set -e 00:15:04.892 00:49:17 -- nvmf/common.sh@124 -- # return 0 00:15:04.892 00:49:17 -- nvmf/common.sh@477 -- # '[' -n 85279 ']' 00:15:04.892 00:49:17 -- nvmf/common.sh@478 -- # killprocess 85279 00:15:04.892 00:49:17 -- common/autotest_common.sh@936 -- # '[' -z 85279 ']' 00:15:04.892 00:49:17 -- common/autotest_common.sh@940 -- # kill -0 85279 00:15:04.892 00:49:17 -- common/autotest_common.sh@941 -- # uname 00:15:04.892 00:49:17 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:15:04.892 00:49:17 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 85279 00:15:04.892 00:49:17 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:15:04.892 00:49:17 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo 
']' 00:15:04.892 killing process with pid 85279 00:15:04.892 00:49:17 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 85279' 00:15:04.892 00:49:17 -- common/autotest_common.sh@955 -- # kill 85279 00:15:04.892 00:49:17 -- common/autotest_common.sh@960 -- # wait 85279 00:15:05.150 00:49:17 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:15:05.150 00:49:17 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:15:05.150 00:49:17 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:15:05.150 00:49:17 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:15:05.150 00:49:17 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:15:05.150 00:49:17 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:05.150 00:49:17 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:05.150 00:49:17 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:05.150 00:49:17 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:15:05.150 ************************************ 00:15:05.150 END TEST nvmf_queue_depth 00:15:05.150 ************************************ 00:15:05.150 00:15:05.150 real 0m13.737s 00:15:05.150 user 0m22.804s 00:15:05.150 sys 0m2.667s 00:15:05.150 00:49:17 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:15:05.150 00:49:17 -- common/autotest_common.sh@10 -- # set +x 00:15:05.150 00:49:17 -- nvmf/nvmf.sh@51 -- # run_test nvmf_multipath /home/vagrant/spdk_repo/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:15:05.150 00:49:17 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:15:05.150 00:49:17 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:15:05.150 00:49:17 -- common/autotest_common.sh@10 -- # set +x 00:15:05.150 ************************************ 00:15:05.150 START TEST nvmf_multipath 00:15:05.150 ************************************ 00:15:05.150 00:49:17 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:15:05.150 * Looking for test storage... 00:15:05.150 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:15:05.150 00:49:17 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:15:05.150 00:49:17 -- common/autotest_common.sh@1690 -- # lcov --version 00:15:05.150 00:49:17 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:15:05.408 00:49:17 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:15:05.408 00:49:17 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:15:05.408 00:49:17 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:15:05.408 00:49:17 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:15:05.408 00:49:17 -- scripts/common.sh@335 -- # IFS=.-: 00:15:05.408 00:49:17 -- scripts/common.sh@335 -- # read -ra ver1 00:15:05.408 00:49:17 -- scripts/common.sh@336 -- # IFS=.-: 00:15:05.408 00:49:17 -- scripts/common.sh@336 -- # read -ra ver2 00:15:05.408 00:49:17 -- scripts/common.sh@337 -- # local 'op=<' 00:15:05.408 00:49:17 -- scripts/common.sh@339 -- # ver1_l=2 00:15:05.408 00:49:17 -- scripts/common.sh@340 -- # ver2_l=1 00:15:05.408 00:49:17 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:15:05.408 00:49:17 -- scripts/common.sh@343 -- # case "$op" in 00:15:05.408 00:49:17 -- scripts/common.sh@344 -- # : 1 00:15:05.408 00:49:17 -- scripts/common.sh@363 -- # (( v = 0 )) 00:15:05.408 00:49:17 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:15:05.408 00:49:17 -- scripts/common.sh@364 -- # decimal 1 00:15:05.408 00:49:17 -- scripts/common.sh@352 -- # local d=1 00:15:05.408 00:49:17 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:05.408 00:49:17 -- scripts/common.sh@354 -- # echo 1 00:15:05.408 00:49:17 -- scripts/common.sh@364 -- # ver1[v]=1 00:15:05.408 00:49:17 -- scripts/common.sh@365 -- # decimal 2 00:15:05.408 00:49:17 -- scripts/common.sh@352 -- # local d=2 00:15:05.408 00:49:17 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:05.408 00:49:17 -- scripts/common.sh@354 -- # echo 2 00:15:05.408 00:49:17 -- scripts/common.sh@365 -- # ver2[v]=2 00:15:05.408 00:49:17 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:15:05.408 00:49:17 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:15:05.408 00:49:17 -- scripts/common.sh@367 -- # return 0 00:15:05.408 00:49:17 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:05.408 00:49:17 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:15:05.408 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:05.408 --rc genhtml_branch_coverage=1 00:15:05.408 --rc genhtml_function_coverage=1 00:15:05.408 --rc genhtml_legend=1 00:15:05.408 --rc geninfo_all_blocks=1 00:15:05.408 --rc geninfo_unexecuted_blocks=1 00:15:05.408 00:15:05.408 ' 00:15:05.408 00:49:17 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:15:05.408 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:05.408 --rc genhtml_branch_coverage=1 00:15:05.408 --rc genhtml_function_coverage=1 00:15:05.408 --rc genhtml_legend=1 00:15:05.408 --rc geninfo_all_blocks=1 00:15:05.408 --rc geninfo_unexecuted_blocks=1 00:15:05.408 00:15:05.408 ' 00:15:05.408 00:49:17 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:15:05.408 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:05.408 --rc genhtml_branch_coverage=1 00:15:05.408 --rc genhtml_function_coverage=1 00:15:05.408 --rc genhtml_legend=1 00:15:05.408 --rc geninfo_all_blocks=1 00:15:05.408 --rc geninfo_unexecuted_blocks=1 00:15:05.408 00:15:05.408 ' 00:15:05.408 00:49:17 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:15:05.408 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:05.408 --rc genhtml_branch_coverage=1 00:15:05.408 --rc genhtml_function_coverage=1 00:15:05.408 --rc genhtml_legend=1 00:15:05.408 --rc geninfo_all_blocks=1 00:15:05.408 --rc geninfo_unexecuted_blocks=1 00:15:05.408 00:15:05.408 ' 00:15:05.408 00:49:17 -- target/multipath.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:15:05.408 00:49:17 -- nvmf/common.sh@7 -- # uname -s 00:15:05.408 00:49:17 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:05.408 00:49:17 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:05.408 00:49:17 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:05.408 00:49:17 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:05.408 00:49:17 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:05.408 00:49:17 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:05.408 00:49:17 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:05.408 00:49:17 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:05.408 00:49:17 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:05.408 00:49:17 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:05.408 00:49:17 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:15939434-fa82-47b6-ae7c-b62ba203cee8 00:15:05.408 
00:49:17 -- nvmf/common.sh@18 -- # NVME_HOSTID=15939434-fa82-47b6-ae7c-b62ba203cee8 00:15:05.408 00:49:17 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:05.408 00:49:17 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:05.408 00:49:17 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:15:05.408 00:49:17 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:15:05.408 00:49:17 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:05.408 00:49:17 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:05.408 00:49:17 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:05.408 00:49:17 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:05.408 00:49:17 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:05.408 00:49:17 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:05.408 00:49:17 -- paths/export.sh@5 -- # export PATH 00:15:05.408 00:49:17 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:05.408 00:49:17 -- nvmf/common.sh@46 -- # : 0 00:15:05.408 00:49:17 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:15:05.408 00:49:17 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:15:05.408 00:49:17 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:15:05.408 00:49:17 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:05.408 00:49:17 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:05.408 00:49:17 -- nvmf/common.sh@32 -- # '[' -n '' ']' 
00:15:05.408 00:49:17 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:15:05.408 00:49:17 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:15:05.408 00:49:17 -- target/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:15:05.408 00:49:17 -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:15:05.408 00:49:17 -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:15:05.409 00:49:17 -- target/multipath.sh@15 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:15:05.409 00:49:17 -- target/multipath.sh@43 -- # nvmftestinit 00:15:05.409 00:49:17 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:15:05.409 00:49:17 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:05.409 00:49:17 -- nvmf/common.sh@436 -- # prepare_net_devs 00:15:05.409 00:49:17 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:15:05.409 00:49:17 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:15:05.409 00:49:17 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:05.409 00:49:17 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:05.409 00:49:17 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:05.409 00:49:17 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:15:05.409 00:49:17 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:15:05.409 00:49:17 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:15:05.409 00:49:17 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:15:05.409 00:49:17 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:15:05.409 00:49:17 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:15:05.409 00:49:17 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:05.409 00:49:17 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:05.409 00:49:17 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:15:05.409 00:49:17 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:15:05.409 00:49:17 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:15:05.409 00:49:17 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:15:05.409 00:49:17 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:15:05.409 00:49:17 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:05.409 00:49:17 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:15:05.409 00:49:17 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:15:05.409 00:49:17 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:15:05.409 00:49:17 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:15:05.409 00:49:17 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:15:05.409 00:49:17 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:15:05.409 Cannot find device "nvmf_tgt_br" 00:15:05.409 00:49:17 -- nvmf/common.sh@154 -- # true 00:15:05.409 00:49:17 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:15:05.409 Cannot find device "nvmf_tgt_br2" 00:15:05.409 00:49:17 -- nvmf/common.sh@155 -- # true 00:15:05.409 00:49:17 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:15:05.409 00:49:17 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:15:05.409 Cannot find device "nvmf_tgt_br" 00:15:05.409 00:49:17 -- nvmf/common.sh@157 -- # true 00:15:05.409 00:49:17 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:15:05.409 Cannot find device "nvmf_tgt_br2" 00:15:05.409 00:49:17 -- nvmf/common.sh@158 -- # true 00:15:05.409 00:49:17 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:15:05.409 00:49:17 -- 
nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:15:05.409 00:49:17 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:15:05.409 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:05.409 00:49:17 -- nvmf/common.sh@161 -- # true 00:15:05.409 00:49:17 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:15:05.409 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:05.667 00:49:17 -- nvmf/common.sh@162 -- # true 00:15:05.667 00:49:17 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:15:05.667 00:49:17 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:15:05.667 00:49:17 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:15:05.667 00:49:17 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:15:05.667 00:49:17 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:15:05.667 00:49:17 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:15:05.667 00:49:17 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:15:05.667 00:49:17 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:15:05.667 00:49:17 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:15:05.667 00:49:17 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:15:05.667 00:49:17 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:15:05.667 00:49:18 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:15:05.667 00:49:18 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:15:05.667 00:49:18 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:15:05.667 00:49:18 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:15:05.667 00:49:18 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:15:05.667 00:49:18 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:15:05.667 00:49:18 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:15:05.667 00:49:18 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:15:05.667 00:49:18 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:15:05.667 00:49:18 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:15:05.667 00:49:18 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:15:05.667 00:49:18 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:15:05.667 00:49:18 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:15:05.667 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:05.667 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.052 ms 00:15:05.667 00:15:05.667 --- 10.0.0.2 ping statistics --- 00:15:05.667 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:05.667 rtt min/avg/max/mdev = 0.052/0.052/0.052/0.000 ms 00:15:05.667 00:49:18 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:15:05.667 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:15:05.667 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.090 ms 00:15:05.667 00:15:05.667 --- 10.0.0.3 ping statistics --- 00:15:05.667 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:05.667 rtt min/avg/max/mdev = 0.090/0.090/0.090/0.000 ms 00:15:05.667 00:49:18 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:15:05.667 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:15:05.667 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.033 ms 00:15:05.667 00:15:05.667 --- 10.0.0.1 ping statistics --- 00:15:05.667 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:05.667 rtt min/avg/max/mdev = 0.033/0.033/0.033/0.000 ms 00:15:05.667 00:49:18 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:05.667 00:49:18 -- nvmf/common.sh@421 -- # return 0 00:15:05.667 00:49:18 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:15:05.667 00:49:18 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:05.667 00:49:18 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:15:05.667 00:49:18 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:15:05.667 00:49:18 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:05.667 00:49:18 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:15:05.667 00:49:18 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:15:05.667 00:49:18 -- target/multipath.sh@45 -- # '[' -z 10.0.0.3 ']' 00:15:05.667 00:49:18 -- target/multipath.sh@51 -- # '[' tcp '!=' tcp ']' 00:15:05.667 00:49:18 -- target/multipath.sh@57 -- # nvmfappstart -m 0xF 00:15:05.667 00:49:18 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:15:05.667 00:49:18 -- common/autotest_common.sh@722 -- # xtrace_disable 00:15:05.667 00:49:18 -- common/autotest_common.sh@10 -- # set +x 00:15:05.667 00:49:18 -- nvmf/common.sh@469 -- # nvmfpid=85666 00:15:05.667 00:49:18 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:15:05.667 00:49:18 -- nvmf/common.sh@470 -- # waitforlisten 85666 00:15:05.667 00:49:18 -- common/autotest_common.sh@829 -- # '[' -z 85666 ']' 00:15:05.667 00:49:18 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:05.667 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:05.667 00:49:18 -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:05.667 00:49:18 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:05.667 00:49:18 -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:05.667 00:49:18 -- common/autotest_common.sh@10 -- # set +x 00:15:05.925 [2024-12-03 00:49:18.190053] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:15:05.925 [2024-12-03 00:49:18.190370] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:05.925 [2024-12-03 00:49:18.334224] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:15:05.925 [2024-12-03 00:49:18.417903] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:15:05.925 [2024-12-03 00:49:18.418114] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:15:05.925 [2024-12-03 00:49:18.418132] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:05.925 [2024-12-03 00:49:18.418144] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:05.925 [2024-12-03 00:49:18.418304] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:15:05.925 [2024-12-03 00:49:18.418718] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:15:05.925 [2024-12-03 00:49:18.419596] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:15:05.925 [2024-12-03 00:49:18.419626] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:06.859 00:49:19 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:06.859 00:49:19 -- common/autotest_common.sh@862 -- # return 0 00:15:06.859 00:49:19 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:15:06.859 00:49:19 -- common/autotest_common.sh@728 -- # xtrace_disable 00:15:06.859 00:49:19 -- common/autotest_common.sh@10 -- # set +x 00:15:06.859 00:49:19 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:06.859 00:49:19 -- target/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:15:07.118 [2024-12-03 00:49:19.426776] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:07.118 00:49:19 -- target/multipath.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:15:07.376 Malloc0 00:15:07.376 00:49:19 -- target/multipath.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -r 00:15:07.634 00:49:19 -- target/multipath.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:15:07.634 00:49:20 -- target/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:07.893 [2024-12-03 00:49:20.322816] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:07.893 00:49:20 -- target/multipath.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:15:08.151 [2024-12-03 00:49:20.531050] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:15:08.151 00:49:20 -- target/multipath.sh@67 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:15939434-fa82-47b6-ae7c-b62ba203cee8 --hostid=15939434-fa82-47b6-ae7c-b62ba203cee8 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 -g -G 00:15:08.410 00:49:20 -- target/multipath.sh@68 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:15939434-fa82-47b6-ae7c-b62ba203cee8 --hostid=15939434-fa82-47b6-ae7c-b62ba203cee8 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 -g -G 00:15:08.669 00:49:20 -- target/multipath.sh@69 -- # waitforserial SPDKISFASTANDAWESOME 00:15:08.669 00:49:20 -- common/autotest_common.sh@1187 -- # local i=0 00:15:08.669 00:49:20 -- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 00:15:08.669 00:49:20 -- common/autotest_common.sh@1189 -- # [[ -n '' ]] 00:15:08.669 00:49:20 -- common/autotest_common.sh@1194 -- # sleep 2 00:15:10.572 00:49:22 -- common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 
00:15:10.572 00:49:23 -- common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL 00:15:10.572 00:49:23 -- common/autotest_common.sh@1196 -- # grep -c SPDKISFASTANDAWESOME 00:15:10.572 00:49:23 -- common/autotest_common.sh@1196 -- # nvme_devices=1 00:15:10.572 00:49:23 -- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:15:10.572 00:49:23 -- common/autotest_common.sh@1197 -- # return 0 00:15:10.572 00:49:23 -- target/multipath.sh@72 -- # get_subsystem nqn.2016-06.io.spdk:cnode1 SPDKISFASTANDAWESOME 00:15:10.572 00:49:23 -- target/multipath.sh@34 -- # local nqn=nqn.2016-06.io.spdk:cnode1 serial=SPDKISFASTANDAWESOME s 00:15:10.572 00:49:23 -- target/multipath.sh@36 -- # for s in /sys/class/nvme-subsystem/* 00:15:10.572 00:49:23 -- target/multipath.sh@37 -- # [[ nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 00:15:10.572 00:49:23 -- target/multipath.sh@37 -- # [[ SPDKISFASTANDAWESOME == \S\P\D\K\I\S\F\A\S\T\A\N\D\A\W\E\S\O\M\E ]] 00:15:10.572 00:49:23 -- target/multipath.sh@38 -- # echo nvme-subsys0 00:15:10.572 00:49:23 -- target/multipath.sh@38 -- # return 0 00:15:10.572 00:49:23 -- target/multipath.sh@72 -- # subsystem=nvme-subsys0 00:15:10.572 00:49:23 -- target/multipath.sh@73 -- # paths=(/sys/class/nvme-subsystem/$subsystem/nvme*/nvme*c*) 00:15:10.572 00:49:23 -- target/multipath.sh@74 -- # paths=("${paths[@]##*/}") 00:15:10.572 00:49:23 -- target/multipath.sh@76 -- # (( 2 == 2 )) 00:15:10.572 00:49:23 -- target/multipath.sh@78 -- # p0=nvme0c0n1 00:15:10.572 00:49:23 -- target/multipath.sh@79 -- # p1=nvme0c1n1 00:15:10.572 00:49:23 -- target/multipath.sh@81 -- # check_ana_state nvme0c0n1 optimized 00:15:10.573 00:49:23 -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=optimized 00:15:10.573 00:49:23 -- target/multipath.sh@22 -- # local timeout=20 00:15:10.573 00:49:23 -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:15:10.573 00:49:23 -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:15:10.573 00:49:23 -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:15:10.573 00:49:23 -- target/multipath.sh@82 -- # check_ana_state nvme0c1n1 optimized 00:15:10.573 00:49:23 -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=optimized 00:15:10.573 00:49:23 -- target/multipath.sh@22 -- # local timeout=20 00:15:10.573 00:49:23 -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:15:10.573 00:49:23 -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c1n1/ana_state ]] 00:15:10.573 00:49:23 -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:15:10.573 00:49:23 -- target/multipath.sh@85 -- # echo numa 00:15:10.573 00:49:23 -- target/multipath.sh@88 -- # fio_pid=85804 00:15:10.573 00:49:23 -- target/multipath.sh@87 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randrw -r 6 -v 00:15:10.573 00:49:23 -- target/multipath.sh@90 -- # sleep 1 00:15:10.573 [global] 00:15:10.573 thread=1 00:15:10.573 invalidate=1 00:15:10.573 rw=randrw 00:15:10.573 time_based=1 00:15:10.573 runtime=6 00:15:10.573 ioengine=libaio 00:15:10.573 direct=1 00:15:10.573 bs=4096 00:15:10.573 iodepth=128 00:15:10.573 norandommap=0 00:15:10.573 numjobs=1 00:15:10.573 00:15:10.573 verify_dump=1 00:15:10.573 verify_backlog=512 00:15:10.573 verify_state_save=0 00:15:10.573 do_verify=1 00:15:10.573 verify=crc32c-intel 00:15:10.573 [job0] 00:15:10.573 filename=/dev/nvme0n1 00:15:10.573 Could not set queue depth (nvme0n1) 00:15:10.832 job0: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:15:10.832 fio-3.35 00:15:10.832 Starting 1 thread 00:15:11.768 00:49:24 -- target/multipath.sh@92 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:15:12.027 00:49:24 -- target/multipath.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:15:12.286 00:49:24 -- target/multipath.sh@95 -- # check_ana_state nvme0c0n1 inaccessible 00:15:12.286 00:49:24 -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=inaccessible 00:15:12.286 00:49:24 -- target/multipath.sh@22 -- # local timeout=20 00:15:12.286 00:49:24 -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:15:12.286 00:49:24 -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:15:12.286 00:49:24 -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:15:12.286 00:49:24 -- target/multipath.sh@96 -- # check_ana_state nvme0c1n1 non-optimized 00:15:12.286 00:49:24 -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=non-optimized 00:15:12.286 00:49:24 -- target/multipath.sh@22 -- # local timeout=20 00:15:12.286 00:49:24 -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:15:12.286 00:49:24 -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:15:12.286 00:49:24 -- target/multipath.sh@25 -- # [[ optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:15:12.286 00:49:24 -- target/multipath.sh@25 -- # sleep 1s 00:15:13.221 00:49:25 -- target/multipath.sh@26 -- # (( timeout-- == 0 )) 00:15:13.221 00:49:25 -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c1n1/ana_state ]] 00:15:13.221 00:49:25 -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:15:13.221 00:49:25 -- target/multipath.sh@98 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:15:13.479 00:49:25 -- target/multipath.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n inaccessible 00:15:13.738 00:49:26 -- target/multipath.sh@101 -- # check_ana_state nvme0c0n1 non-optimized 00:15:13.739 00:49:26 -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=non-optimized 00:15:13.739 00:49:26 -- target/multipath.sh@22 -- # local timeout=20 00:15:13.739 00:49:26 -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:15:13.739 00:49:26 -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:15:13.739 00:49:26 -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:15:13.739 00:49:26 -- target/multipath.sh@102 -- # check_ana_state nvme0c1n1 inaccessible 00:15:13.739 00:49:26 -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=inaccessible 00:15:13.739 00:49:26 -- target/multipath.sh@22 -- # local timeout=20 00:15:13.739 00:49:26 -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:15:13.739 00:49:26 -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:15:13.739 00:49:26 -- target/multipath.sh@25 -- # [[ non-optimized != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:15:13.739 00:49:26 -- target/multipath.sh@25 -- # sleep 1s 00:15:14.675 00:49:27 -- target/multipath.sh@26 -- # (( timeout-- == 0 )) 00:15:14.675 00:49:27 -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c1n1/ana_state ]] 00:15:14.675 00:49:27 -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:15:14.675 00:49:27 -- target/multipath.sh@104 -- # wait 85804 00:15:17.207 00:15:17.207 job0: (groupid=0, jobs=1): err= 0: pid=85830: Tue Dec 3 00:49:29 2024 00:15:17.207 read: IOPS=13.3k, BW=52.0MiB/s (54.5MB/s)(312MiB/6004msec) 00:15:17.207 slat (nsec): min=1808, max=5950.5k, avg=43218.66, stdev=195225.97 00:15:17.207 clat (usec): min=694, max=12488, avg=6577.52, stdev=1029.83 00:15:17.207 lat (usec): min=727, max=12497, avg=6620.74, stdev=1036.63 00:15:17.207 clat percentiles (usec): 00:15:17.207 | 1.00th=[ 4146], 5.00th=[ 5145], 10.00th=[ 5473], 20.00th=[ 5800], 00:15:17.207 | 30.00th=[ 5997], 40.00th=[ 6259], 50.00th=[ 6521], 60.00th=[ 6783], 00:15:17.207 | 70.00th=[ 7046], 80.00th=[ 7308], 90.00th=[ 7767], 95.00th=[ 8356], 00:15:17.207 | 99.00th=[ 9634], 99.50th=[10028], 99.90th=[10945], 99.95th=[11600], 00:15:17.207 | 99.99th=[12256] 00:15:17.207 bw ( KiB/s): min=13416, max=35176, per=53.16%, avg=28290.18, stdev=7368.13, samples=11 00:15:17.207 iops : min= 3354, max= 8794, avg=7072.73, stdev=1841.64, samples=11 00:15:17.207 write: IOPS=8024, BW=31.3MiB/s (32.9MB/s)(160MiB/5106msec); 0 zone resets 00:15:17.207 slat (usec): min=2, max=4937, avg=53.97, stdev=144.75 00:15:17.207 clat (usec): min=634, max=11219, avg=5760.89, stdev=849.22 00:15:17.207 lat (usec): min=689, max=11244, avg=5814.87, stdev=851.32 00:15:17.207 clat percentiles (usec): 00:15:17.207 | 1.00th=[ 3294], 5.00th=[ 4359], 10.00th=[ 4883], 20.00th=[ 5211], 00:15:17.207 | 30.00th=[ 5473], 40.00th=[ 5604], 50.00th=[ 5800], 60.00th=[ 5932], 00:15:17.207 | 70.00th=[ 6063], 80.00th=[ 6325], 90.00th=[ 6587], 95.00th=[ 6980], 00:15:17.207 | 99.00th=[ 8455], 99.50th=[ 9110], 99.90th=[10159], 99.95th=[10683], 00:15:17.208 | 99.99th=[11076] 00:15:17.208 bw ( KiB/s): min=13928, max=34552, per=88.19%, avg=28308.36, stdev=7140.68, samples=11 00:15:17.208 iops : min= 3482, max= 8638, avg=7077.09, stdev=1785.17, samples=11 00:15:17.208 lat (usec) : 750=0.01%, 1000=0.01% 00:15:17.208 lat (msec) : 2=0.03%, 4=1.57%, 10=97.99%, 20=0.40% 00:15:17.208 cpu : usr=5.68%, sys=21.74%, ctx=7311, majf=0, minf=114 00:15:17.208 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.6% 00:15:17.208 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:17.208 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:15:17.208 issued rwts: total=79877,40972,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:17.208 latency : target=0, window=0, percentile=100.00%, depth=128 00:15:17.208 00:15:17.208 Run status group 0 (all jobs): 00:15:17.208 READ: bw=52.0MiB/s (54.5MB/s), 52.0MiB/s-52.0MiB/s (54.5MB/s-54.5MB/s), io=312MiB (327MB), run=6004-6004msec 00:15:17.208 WRITE: bw=31.3MiB/s (32.9MB/s), 31.3MiB/s-31.3MiB/s (32.9MB/s-32.9MB/s), io=160MiB (168MB), run=5106-5106msec 00:15:17.208 00:15:17.208 Disk stats (read/write): 00:15:17.208 nvme0n1: ios=79095/39948, merge=0/0, ticks=486250/214277, in_queue=700527, util=98.61% 00:15:17.208 00:49:29 -- target/multipath.sh@106 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:15:17.208 00:49:29 -- target/multipath.sh@107 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n optimized 00:15:17.466 00:49:29 -- target/multipath.sh@109 -- # 
check_ana_state nvme0c0n1 optimized 00:15:17.466 00:49:29 -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=optimized 00:15:17.466 00:49:29 -- target/multipath.sh@22 -- # local timeout=20 00:15:17.466 00:49:29 -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:15:17.466 00:49:29 -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:15:17.466 00:49:29 -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:15:17.466 00:49:29 -- target/multipath.sh@110 -- # check_ana_state nvme0c1n1 optimized 00:15:17.466 00:49:29 -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=optimized 00:15:17.466 00:49:29 -- target/multipath.sh@22 -- # local timeout=20 00:15:17.466 00:49:29 -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:15:17.466 00:49:29 -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:15:17.467 00:49:29 -- target/multipath.sh@25 -- # [[ inaccessible != \o\p\t\i\m\i\z\e\d ]] 00:15:17.467 00:49:29 -- target/multipath.sh@25 -- # sleep 1s 00:15:18.413 00:49:30 -- target/multipath.sh@26 -- # (( timeout-- == 0 )) 00:15:18.413 00:49:30 -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:15:18.413 00:49:30 -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:15:18.413 00:49:30 -- target/multipath.sh@113 -- # echo round-robin 00:15:18.413 00:49:30 -- target/multipath.sh@116 -- # fio_pid=85957 00:15:18.413 00:49:30 -- target/multipath.sh@115 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randrw -r 6 -v 00:15:18.413 00:49:30 -- target/multipath.sh@118 -- # sleep 1 00:15:18.413 [global] 00:15:18.413 thread=1 00:15:18.413 invalidate=1 00:15:18.413 rw=randrw 00:15:18.413 time_based=1 00:15:18.413 runtime=6 00:15:18.413 ioengine=libaio 00:15:18.413 direct=1 00:15:18.413 bs=4096 00:15:18.413 iodepth=128 00:15:18.413 norandommap=0 00:15:18.413 numjobs=1 00:15:18.413 00:15:18.692 verify_dump=1 00:15:18.692 verify_backlog=512 00:15:18.692 verify_state_save=0 00:15:18.692 do_verify=1 00:15:18.692 verify=crc32c-intel 00:15:18.692 [job0] 00:15:18.692 filename=/dev/nvme0n1 00:15:18.692 Could not set queue depth (nvme0n1) 00:15:18.692 job0: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:15:18.692 fio-3.35 00:15:18.692 Starting 1 thread 00:15:19.643 00:49:31 -- target/multipath.sh@120 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:15:19.901 00:49:32 -- target/multipath.sh@121 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:15:20.160 00:49:32 -- target/multipath.sh@123 -- # check_ana_state nvme0c0n1 inaccessible 00:15:20.160 00:49:32 -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=inaccessible 00:15:20.160 00:49:32 -- target/multipath.sh@22 -- # local timeout=20 00:15:20.160 00:49:32 -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:15:20.160 00:49:32 -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c0n1/ana_state ]] 00:15:20.160 00:49:32 -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:15:20.160 00:49:32 -- target/multipath.sh@124 -- # check_ana_state nvme0c1n1 non-optimized 00:15:20.160 00:49:32 -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=non-optimized 00:15:20.160 00:49:32 -- target/multipath.sh@22 -- # local timeout=20 00:15:20.160 00:49:32 -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:15:20.160 00:49:32 -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:15:20.160 00:49:32 -- target/multipath.sh@25 -- # [[ optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:15:20.160 00:49:32 -- target/multipath.sh@25 -- # sleep 1s 00:15:21.093 00:49:33 -- target/multipath.sh@26 -- # (( timeout-- == 0 )) 00:15:21.093 00:49:33 -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:15:21.093 00:49:33 -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:15:21.093 00:49:33 -- target/multipath.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:15:21.351 00:49:33 -- target/multipath.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n inaccessible 00:15:21.609 00:49:33 -- target/multipath.sh@129 -- # check_ana_state nvme0c0n1 non-optimized 00:15:21.609 00:49:33 -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=non-optimized 00:15:21.610 00:49:33 -- target/multipath.sh@22 -- # local timeout=20 00:15:21.610 00:49:33 -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:15:21.610 00:49:33 -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:15:21.610 00:49:33 -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:15:21.610 00:49:33 -- target/multipath.sh@130 -- # check_ana_state nvme0c1n1 inaccessible 00:15:21.610 00:49:33 -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=inaccessible 00:15:21.610 00:49:33 -- target/multipath.sh@22 -- # local timeout=20 00:15:21.610 00:49:33 -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:15:21.610 00:49:33 -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:15:21.610 00:49:33 -- target/multipath.sh@25 -- # [[ non-optimized != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:15:21.610 00:49:33 -- target/multipath.sh@25 -- # sleep 1s 00:15:22.545 00:49:34 -- target/multipath.sh@26 -- # (( timeout-- == 0 )) 00:15:22.545 00:49:34 -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c1n1/ana_state ]] 00:15:22.545 00:49:34 -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:15:22.545 00:49:34 -- target/multipath.sh@132 -- # wait 85957 00:15:25.075 00:15:25.075 job0: (groupid=0, jobs=1): err= 0: pid=85978: Tue Dec 3 00:49:37 2024 00:15:25.075 read: IOPS=13.2k, BW=51.5MiB/s (54.0MB/s)(309MiB/6005msec) 00:15:25.075 slat (usec): min=2, max=6269, avg=38.06, stdev=180.62 00:15:25.075 clat (usec): min=390, max=17489, avg=6723.51, stdev=1699.96 00:15:25.075 lat (usec): min=405, max=17504, avg=6761.57, stdev=1702.94 00:15:25.075 clat percentiles (usec): 00:15:25.075 | 1.00th=[ 2409], 5.00th=[ 4015], 10.00th=[ 5014], 20.00th=[ 5735], 00:15:25.075 | 30.00th=[ 5997], 40.00th=[ 6259], 50.00th=[ 6587], 60.00th=[ 6915], 00:15:25.075 | 70.00th=[ 7242], 80.00th=[ 7635], 90.00th=[ 8717], 95.00th=[ 9896], 00:15:25.075 | 99.00th=[11994], 99.50th=[12649], 99.90th=[14615], 99.95th=[15139], 00:15:25.075 | 99.99th=[16581] 00:15:25.075 bw ( KiB/s): min= 8992, max=33696, per=53.85%, avg=28383.73, stdev=7729.30, samples=11 00:15:25.075 iops : min= 2248, max= 8424, avg=7095.91, stdev=1932.31, samples=11 00:15:25.075 write: IOPS=7925, BW=31.0MiB/s (32.5MB/s)(156MiB/5045msec); 0 zone resets 00:15:25.075 slat (usec): min=4, max=7374, avg=48.49, stdev=122.30 00:15:25.075 clat (usec): min=934, max=14173, avg=5756.10, stdev=1500.30 00:15:25.075 lat (usec): min=984, max=14198, avg=5804.59, stdev=1502.87 00:15:25.075 clat percentiles (usec): 00:15:25.075 | 1.00th=[ 2147], 5.00th=[ 2933], 10.00th=[ 3589], 20.00th=[ 4948], 00:15:25.075 | 30.00th=[ 5342], 40.00th=[ 5604], 50.00th=[ 5800], 60.00th=[ 6063], 00:15:25.075 | 70.00th=[ 6259], 80.00th=[ 6652], 90.00th=[ 7439], 95.00th=[ 8291], 00:15:25.075 | 99.00th=[ 9765], 99.50th=[10421], 99.90th=[13173], 99.95th=[13435], 00:15:25.075 | 99.99th=[13960] 00:15:25.075 bw ( KiB/s): min= 9392, max=34283, per=89.53%, avg=28382.09, stdev=7484.70, samples=11 00:15:25.075 iops : min= 2348, max= 8570, avg=7095.45, stdev=1871.12, samples=11 00:15:25.075 lat (usec) : 500=0.01%, 750=0.02%, 1000=0.04% 00:15:25.075 lat (msec) : 2=0.53%, 4=7.10%, 10=89.00%, 20=3.31% 00:15:25.075 cpu : usr=6.60%, sys=24.55%, ctx=7605, majf=0, minf=151 00:15:25.075 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.6% 00:15:25.075 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:25.076 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:15:25.076 issued rwts: total=79126,39983,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:25.076 latency : target=0, window=0, percentile=100.00%, depth=128 00:15:25.076 00:15:25.076 Run status group 0 (all jobs): 00:15:25.076 READ: bw=51.5MiB/s (54.0MB/s), 51.5MiB/s-51.5MiB/s (54.0MB/s-54.0MB/s), io=309MiB (324MB), run=6005-6005msec 00:15:25.076 WRITE: bw=31.0MiB/s (32.5MB/s), 31.0MiB/s-31.0MiB/s (32.5MB/s-32.5MB/s), io=156MiB (164MB), run=5045-5045msec 00:15:25.076 00:15:25.076 Disk stats (read/write): 00:15:25.076 nvme0n1: ios=78205/39155, merge=0/0, ticks=488457/208401, in_queue=696858, util=98.65% 00:15:25.076 00:49:37 -- target/multipath.sh@134 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:15:25.076 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:15:25.076 00:49:37 -- target/multipath.sh@135 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:15:25.076 00:49:37 -- common/autotest_common.sh@1208 -- # local i=0 00:15:25.076 00:49:37 -- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:15:25.076 00:49:37 -- 
common/autotest_common.sh@1209 -- # grep -q -w SPDKISFASTANDAWESOME 00:15:25.076 00:49:37 -- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:15:25.076 00:49:37 -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:15:25.076 00:49:37 -- common/autotest_common.sh@1220 -- # return 0 00:15:25.076 00:49:37 -- target/multipath.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:25.334 00:49:37 -- target/multipath.sh@139 -- # rm -f ./local-job0-0-verify.state 00:15:25.334 00:49:37 -- target/multipath.sh@140 -- # rm -f ./local-job1-1-verify.state 00:15:25.334 00:49:37 -- target/multipath.sh@142 -- # trap - SIGINT SIGTERM EXIT 00:15:25.334 00:49:37 -- target/multipath.sh@144 -- # nvmftestfini 00:15:25.334 00:49:37 -- nvmf/common.sh@476 -- # nvmfcleanup 00:15:25.334 00:49:37 -- nvmf/common.sh@116 -- # sync 00:15:25.334 00:49:37 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:15:25.334 00:49:37 -- nvmf/common.sh@119 -- # set +e 00:15:25.334 00:49:37 -- nvmf/common.sh@120 -- # for i in {1..20} 00:15:25.334 00:49:37 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:15:25.334 rmmod nvme_tcp 00:15:25.334 rmmod nvme_fabrics 00:15:25.334 rmmod nvme_keyring 00:15:25.334 00:49:37 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:15:25.335 00:49:37 -- nvmf/common.sh@123 -- # set -e 00:15:25.335 00:49:37 -- nvmf/common.sh@124 -- # return 0 00:15:25.335 00:49:37 -- nvmf/common.sh@477 -- # '[' -n 85666 ']' 00:15:25.335 00:49:37 -- nvmf/common.sh@478 -- # killprocess 85666 00:15:25.335 00:49:37 -- common/autotest_common.sh@936 -- # '[' -z 85666 ']' 00:15:25.335 00:49:37 -- common/autotest_common.sh@940 -- # kill -0 85666 00:15:25.335 00:49:37 -- common/autotest_common.sh@941 -- # uname 00:15:25.335 00:49:37 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:15:25.335 00:49:37 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 85666 00:15:25.335 00:49:37 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:15:25.335 00:49:37 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:15:25.335 00:49:37 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 85666' 00:15:25.335 killing process with pid 85666 00:15:25.335 00:49:37 -- common/autotest_common.sh@955 -- # kill 85666 00:15:25.335 00:49:37 -- common/autotest_common.sh@960 -- # wait 85666 00:15:25.902 00:49:38 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:15:25.902 00:49:38 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:15:25.902 00:49:38 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:15:25.902 00:49:38 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:15:25.902 00:49:38 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:15:25.902 00:49:38 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:25.902 00:49:38 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:25.902 00:49:38 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:25.902 00:49:38 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:15:25.902 00:15:25.902 real 0m20.639s 00:15:25.902 user 1m20.278s 00:15:25.902 sys 0m6.339s 00:15:25.902 00:49:38 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:15:25.902 00:49:38 -- common/autotest_common.sh@10 -- # set +x 00:15:25.902 ************************************ 00:15:25.902 END TEST nvmf_multipath 00:15:25.902 ************************************ 00:15:25.902 00:49:38 -- nvmf/nvmf.sh@52 -- # run_test 
nvmf_zcopy /home/vagrant/spdk_repo/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:15:25.902 00:49:38 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:15:25.902 00:49:38 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:15:25.902 00:49:38 -- common/autotest_common.sh@10 -- # set +x 00:15:25.902 ************************************ 00:15:25.902 START TEST nvmf_zcopy 00:15:25.902 ************************************ 00:15:25.902 00:49:38 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:15:25.902 * Looking for test storage... 00:15:25.902 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:15:25.902 00:49:38 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:15:25.903 00:49:38 -- common/autotest_common.sh@1690 -- # lcov --version 00:15:25.903 00:49:38 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:15:25.903 00:49:38 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:15:25.903 00:49:38 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:15:25.903 00:49:38 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:15:25.903 00:49:38 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:15:25.903 00:49:38 -- scripts/common.sh@335 -- # IFS=.-: 00:15:26.162 00:49:38 -- scripts/common.sh@335 -- # read -ra ver1 00:15:26.162 00:49:38 -- scripts/common.sh@336 -- # IFS=.-: 00:15:26.162 00:49:38 -- scripts/common.sh@336 -- # read -ra ver2 00:15:26.162 00:49:38 -- scripts/common.sh@337 -- # local 'op=<' 00:15:26.162 00:49:38 -- scripts/common.sh@339 -- # ver1_l=2 00:15:26.162 00:49:38 -- scripts/common.sh@340 -- # ver2_l=1 00:15:26.162 00:49:38 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:15:26.162 00:49:38 -- scripts/common.sh@343 -- # case "$op" in 00:15:26.162 00:49:38 -- scripts/common.sh@344 -- # : 1 00:15:26.162 00:49:38 -- scripts/common.sh@363 -- # (( v = 0 )) 00:15:26.162 00:49:38 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:15:26.162 00:49:38 -- scripts/common.sh@364 -- # decimal 1 00:15:26.162 00:49:38 -- scripts/common.sh@352 -- # local d=1 00:15:26.162 00:49:38 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:26.162 00:49:38 -- scripts/common.sh@354 -- # echo 1 00:15:26.162 00:49:38 -- scripts/common.sh@364 -- # ver1[v]=1 00:15:26.162 00:49:38 -- scripts/common.sh@365 -- # decimal 2 00:15:26.162 00:49:38 -- scripts/common.sh@352 -- # local d=2 00:15:26.162 00:49:38 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:26.162 00:49:38 -- scripts/common.sh@354 -- # echo 2 00:15:26.162 00:49:38 -- scripts/common.sh@365 -- # ver2[v]=2 00:15:26.162 00:49:38 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:15:26.162 00:49:38 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:15:26.162 00:49:38 -- scripts/common.sh@367 -- # return 0 00:15:26.162 00:49:38 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:26.162 00:49:38 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:15:26.162 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:26.162 --rc genhtml_branch_coverage=1 00:15:26.162 --rc genhtml_function_coverage=1 00:15:26.162 --rc genhtml_legend=1 00:15:26.162 --rc geninfo_all_blocks=1 00:15:26.162 --rc geninfo_unexecuted_blocks=1 00:15:26.162 00:15:26.162 ' 00:15:26.162 00:49:38 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:15:26.162 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:26.162 --rc genhtml_branch_coverage=1 00:15:26.162 --rc genhtml_function_coverage=1 00:15:26.162 --rc genhtml_legend=1 00:15:26.162 --rc geninfo_all_blocks=1 00:15:26.162 --rc geninfo_unexecuted_blocks=1 00:15:26.162 00:15:26.162 ' 00:15:26.162 00:49:38 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:15:26.162 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:26.162 --rc genhtml_branch_coverage=1 00:15:26.162 --rc genhtml_function_coverage=1 00:15:26.162 --rc genhtml_legend=1 00:15:26.162 --rc geninfo_all_blocks=1 00:15:26.162 --rc geninfo_unexecuted_blocks=1 00:15:26.162 00:15:26.162 ' 00:15:26.162 00:49:38 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:15:26.162 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:26.162 --rc genhtml_branch_coverage=1 00:15:26.162 --rc genhtml_function_coverage=1 00:15:26.162 --rc genhtml_legend=1 00:15:26.162 --rc geninfo_all_blocks=1 00:15:26.162 --rc geninfo_unexecuted_blocks=1 00:15:26.162 00:15:26.162 ' 00:15:26.162 00:49:38 -- target/zcopy.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:15:26.162 00:49:38 -- nvmf/common.sh@7 -- # uname -s 00:15:26.162 00:49:38 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:26.162 00:49:38 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:26.162 00:49:38 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:26.162 00:49:38 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:26.162 00:49:38 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:26.162 00:49:38 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:26.162 00:49:38 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:26.162 00:49:38 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:26.162 00:49:38 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:26.162 00:49:38 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:26.162 00:49:38 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:15939434-fa82-47b6-ae7c-b62ba203cee8 00:15:26.162 
00:49:38 -- nvmf/common.sh@18 -- # NVME_HOSTID=15939434-fa82-47b6-ae7c-b62ba203cee8 00:15:26.163 00:49:38 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:26.163 00:49:38 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:26.163 00:49:38 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:15:26.163 00:49:38 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:15:26.163 00:49:38 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:26.163 00:49:38 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:26.163 00:49:38 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:26.163 00:49:38 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:26.163 00:49:38 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:26.163 00:49:38 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:26.163 00:49:38 -- paths/export.sh@5 -- # export PATH 00:15:26.163 00:49:38 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:26.163 00:49:38 -- nvmf/common.sh@46 -- # : 0 00:15:26.163 00:49:38 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:15:26.163 00:49:38 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:15:26.163 00:49:38 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:15:26.163 00:49:38 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:26.163 00:49:38 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:26.163 00:49:38 -- nvmf/common.sh@32 -- # '[' -n '' ']' 
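The trace above walks scripts/common.sh through its dotted-version comparison ("lt 1.15 2") to decide which lcov coverage flags to export. A minimal stand-alone sketch of that comparison, assuming only numeric dot/dash-separated components matter (the helper name version_lt is illustrative, not the script's own):

    #!/usr/bin/env bash
    # Sketch: compare two dotted versions the way the traced cmp_versions walk does.
    # Succeeds (returns 0) when $1 is strictly lower than $2.
    version_lt() {
      local -a ver1 ver2
      IFS='.-:' read -ra ver1 <<< "$1"      # "1.15" -> (1 15)
      IFS='.-:' read -ra ver2 <<< "$2"      # "2"    -> (2)
      local v len=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
      for (( v = 0; v < len; v++ )); do
        local a=${ver1[v]:-0} b=${ver2[v]:-0}   # missing components count as 0
        (( a > b )) && return 1
        (( a < b )) && return 0
      done
      return 1                                  # equal is not "less than"
    }

    # lcov 1.15 is older than 2, so the branch/function coverage options get picked up.
    if version_lt "1.15" "2"; then
      LCOV_OPTS='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
    fi
    echo "${LCOV_OPTS:-}"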
00:15:26.163 00:49:38 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:15:26.163 00:49:38 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:15:26.163 00:49:38 -- target/zcopy.sh@12 -- # nvmftestinit 00:15:26.163 00:49:38 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:15:26.163 00:49:38 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:26.163 00:49:38 -- nvmf/common.sh@436 -- # prepare_net_devs 00:15:26.163 00:49:38 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:15:26.163 00:49:38 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:15:26.163 00:49:38 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:26.163 00:49:38 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:26.163 00:49:38 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:26.163 00:49:38 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:15:26.163 00:49:38 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:15:26.163 00:49:38 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:15:26.163 00:49:38 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:15:26.163 00:49:38 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:15:26.163 00:49:38 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:15:26.163 00:49:38 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:26.163 00:49:38 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:26.163 00:49:38 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:15:26.163 00:49:38 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:15:26.163 00:49:38 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:15:26.163 00:49:38 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:15:26.163 00:49:38 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:15:26.163 00:49:38 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:26.163 00:49:38 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:15:26.163 00:49:38 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:15:26.163 00:49:38 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:15:26.163 00:49:38 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:15:26.163 00:49:38 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:15:26.163 00:49:38 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:15:26.163 Cannot find device "nvmf_tgt_br" 00:15:26.163 00:49:38 -- nvmf/common.sh@154 -- # true 00:15:26.163 00:49:38 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:15:26.163 Cannot find device "nvmf_tgt_br2" 00:15:26.163 00:49:38 -- nvmf/common.sh@155 -- # true 00:15:26.163 00:49:38 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:15:26.163 00:49:38 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:15:26.163 Cannot find device "nvmf_tgt_br" 00:15:26.163 00:49:38 -- nvmf/common.sh@157 -- # true 00:15:26.163 00:49:38 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:15:26.163 Cannot find device "nvmf_tgt_br2" 00:15:26.163 00:49:38 -- nvmf/common.sh@158 -- # true 00:15:26.163 00:49:38 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:15:26.163 00:49:38 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:15:26.163 00:49:38 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:15:26.163 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:26.163 00:49:38 -- nvmf/common.sh@161 -- # true 00:15:26.163 00:49:38 -- nvmf/common.sh@162 -- # ip 
netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:15:26.163 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:26.163 00:49:38 -- nvmf/common.sh@162 -- # true 00:15:26.163 00:49:38 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:15:26.163 00:49:38 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:15:26.163 00:49:38 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:15:26.163 00:49:38 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:15:26.163 00:49:38 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:15:26.423 00:49:38 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:15:26.423 00:49:38 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:15:26.423 00:49:38 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:15:26.423 00:49:38 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:15:26.423 00:49:38 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:15:26.423 00:49:38 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:15:26.423 00:49:38 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:15:26.423 00:49:38 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:15:26.423 00:49:38 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:15:26.423 00:49:38 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:15:26.423 00:49:38 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:15:26.423 00:49:38 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:15:26.423 00:49:38 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:15:26.423 00:49:38 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:15:26.423 00:49:38 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:15:26.423 00:49:38 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:15:26.423 00:49:38 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:15:26.423 00:49:38 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:15:26.423 00:49:38 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:15:26.423 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:26.423 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.070 ms 00:15:26.423 00:15:26.423 --- 10.0.0.2 ping statistics --- 00:15:26.423 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:26.423 rtt min/avg/max/mdev = 0.070/0.070/0.070/0.000 ms 00:15:26.423 00:49:38 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:15:26.423 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:15:26.423 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.067 ms 00:15:26.423 00:15:26.423 --- 10.0.0.3 ping statistics --- 00:15:26.423 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:26.423 rtt min/avg/max/mdev = 0.067/0.067/0.067/0.000 ms 00:15:26.423 00:49:38 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:15:26.423 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:15:26.423 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.041 ms 00:15:26.423 00:15:26.423 --- 10.0.0.1 ping statistics --- 00:15:26.423 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:26.423 rtt min/avg/max/mdev = 0.041/0.041/0.041/0.000 ms 00:15:26.423 00:49:38 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:26.423 00:49:38 -- nvmf/common.sh@421 -- # return 0 00:15:26.423 00:49:38 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:15:26.423 00:49:38 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:26.423 00:49:38 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:15:26.423 00:49:38 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:15:26.423 00:49:38 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:26.423 00:49:38 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:15:26.423 00:49:38 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:15:26.423 00:49:38 -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:15:26.423 00:49:38 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:15:26.423 00:49:38 -- common/autotest_common.sh@722 -- # xtrace_disable 00:15:26.423 00:49:38 -- common/autotest_common.sh@10 -- # set +x 00:15:26.423 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:26.423 00:49:38 -- nvmf/common.sh@469 -- # nvmfpid=86270 00:15:26.423 00:49:38 -- nvmf/common.sh@470 -- # waitforlisten 86270 00:15:26.423 00:49:38 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:15:26.423 00:49:38 -- common/autotest_common.sh@829 -- # '[' -z 86270 ']' 00:15:26.423 00:49:38 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:26.423 00:49:38 -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:26.423 00:49:38 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:26.423 00:49:38 -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:26.423 00:49:38 -- common/autotest_common.sh@10 -- # set +x 00:15:26.423 [2024-12-03 00:49:38.899865] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:15:26.423 [2024-12-03 00:49:38.899951] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:26.682 [2024-12-03 00:49:39.042769] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:26.682 [2024-12-03 00:49:39.116531] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:15:26.682 [2024-12-03 00:49:39.116862] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:26.682 [2024-12-03 00:49:39.116927] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:26.682 [2024-12-03 00:49:39.117154] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
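Everything nvmf_veth_init just printed builds the same small topology each run: the initiator keeps 10.0.0.1 in the root namespace, the target interfaces (10.0.0.2 and 10.0.0.3) live inside nvmf_tgt_ns_spdk, and the veth peer ends meet on the nvmf_br bridge with TCP port 4420 opened before the reachability pings. Condensed into one illustrative script (root privileges assumed, only the first target interface shown):

    #!/usr/bin/env bash
    # Sketch of the veth/netns layout the trace sets up (single target interface).
    set -e
    NS=nvmf_tgt_ns_spdk
    ip netns add "$NS"
    # veth pairs: the *_if ends carry addresses, the *_br ends join the bridge
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
    ip link set nvmf_tgt_if netns "$NS"                      # target side enters the namespace
    ip addr add 10.0.0.1/24 dev nvmf_init_if                 # initiator address (root namespace)
    ip netns exec "$NS" ip addr add 10.0.0.2/24 dev nvmf_tgt_if
    ip link set nvmf_init_if up
    ip link set nvmf_init_br up
    ip link set nvmf_tgt_br up
    ip netns exec "$NS" ip link set nvmf_tgt_if up
    ip netns exec "$NS" ip link set lo up
    ip link add nvmf_br type bridge
    ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br                  # bridge the two halves
    ip link set nvmf_tgt_br  master nvmf_br
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
    ping -c 1 10.0.0.2                                       # initiator -> target check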
00:15:26.682 [2024-12-03 00:49:39.117233] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:15:27.616 00:49:39 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:27.616 00:49:39 -- common/autotest_common.sh@862 -- # return 0 00:15:27.616 00:49:39 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:15:27.616 00:49:39 -- common/autotest_common.sh@728 -- # xtrace_disable 00:15:27.616 00:49:39 -- common/autotest_common.sh@10 -- # set +x 00:15:27.616 00:49:39 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:27.616 00:49:39 -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:15:27.616 00:49:39 -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:15:27.616 00:49:39 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:27.616 00:49:39 -- common/autotest_common.sh@10 -- # set +x 00:15:27.616 [2024-12-03 00:49:39.972545] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:27.616 00:49:39 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:27.616 00:49:39 -- target/zcopy.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:15:27.616 00:49:39 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:27.616 00:49:39 -- common/autotest_common.sh@10 -- # set +x 00:15:27.616 00:49:39 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:27.616 00:49:39 -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:27.616 00:49:39 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:27.616 00:49:39 -- common/autotest_common.sh@10 -- # set +x 00:15:27.616 [2024-12-03 00:49:39.988676] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:27.616 00:49:39 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:27.616 00:49:39 -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:15:27.616 00:49:39 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:27.616 00:49:39 -- common/autotest_common.sh@10 -- # set +x 00:15:27.616 00:49:40 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:27.616 00:49:40 -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 00:15:27.616 00:49:40 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:27.616 00:49:40 -- common/autotest_common.sh@10 -- # set +x 00:15:27.616 malloc0 00:15:27.616 00:49:40 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:27.616 00:49:40 -- target/zcopy.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:15:27.616 00:49:40 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:27.616 00:49:40 -- common/autotest_common.sh@10 -- # set +x 00:15:27.616 00:49:40 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:27.616 00:49:40 -- target/zcopy.sh@33 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 00:15:27.616 00:49:40 -- target/zcopy.sh@33 -- # gen_nvmf_target_json 00:15:27.616 00:49:40 -- nvmf/common.sh@520 -- # config=() 00:15:27.616 00:49:40 -- nvmf/common.sh@520 -- # local subsystem config 00:15:27.616 00:49:40 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:15:27.616 00:49:40 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:15:27.616 { 00:15:27.616 "params": { 00:15:27.616 "name": "Nvme$subsystem", 00:15:27.616 "trtype": "$TEST_TRANSPORT", 
00:15:27.616 "traddr": "$NVMF_FIRST_TARGET_IP", 00:15:27.616 "adrfam": "ipv4", 00:15:27.617 "trsvcid": "$NVMF_PORT", 00:15:27.617 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:15:27.617 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:15:27.617 "hdgst": ${hdgst:-false}, 00:15:27.617 "ddgst": ${ddgst:-false} 00:15:27.617 }, 00:15:27.617 "method": "bdev_nvme_attach_controller" 00:15:27.617 } 00:15:27.617 EOF 00:15:27.617 )") 00:15:27.617 00:49:40 -- nvmf/common.sh@542 -- # cat 00:15:27.617 00:49:40 -- nvmf/common.sh@544 -- # jq . 00:15:27.617 00:49:40 -- nvmf/common.sh@545 -- # IFS=, 00:15:27.617 00:49:40 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:15:27.617 "params": { 00:15:27.617 "name": "Nvme1", 00:15:27.617 "trtype": "tcp", 00:15:27.617 "traddr": "10.0.0.2", 00:15:27.617 "adrfam": "ipv4", 00:15:27.617 "trsvcid": "4420", 00:15:27.617 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:15:27.617 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:15:27.617 "hdgst": false, 00:15:27.617 "ddgst": false 00:15:27.617 }, 00:15:27.617 "method": "bdev_nvme_attach_controller" 00:15:27.617 }' 00:15:27.617 [2024-12-03 00:49:40.081341] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:15:27.617 [2024-12-03 00:49:40.081467] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86321 ] 00:15:27.874 [2024-12-03 00:49:40.221538] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:27.874 [2024-12-03 00:49:40.287886] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:28.131 Running I/O for 10 seconds... 00:15:38.099 00:15:38.099 Latency(us) 00:15:38.099 [2024-12-03T00:49:50.614Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:38.099 [2024-12-03T00:49:50.614Z] Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192) 00:15:38.099 Verification LBA range: start 0x0 length 0x1000 00:15:38.099 Nvme1n1 : 10.01 10557.47 82.48 0.00 0.00 12095.21 1005.38 22401.40 00:15:38.099 [2024-12-03T00:49:50.614Z] =================================================================================================================== 00:15:38.099 [2024-12-03T00:49:50.614Z] Total : 10557.47 82.48 0.00 0.00 12095.21 1005.38 22401.40 00:15:38.359 00:49:50 -- target/zcopy.sh@39 -- # perfpid=86438 00:15:38.359 00:49:50 -- target/zcopy.sh@41 -- # xtrace_disable 00:15:38.359 00:49:50 -- common/autotest_common.sh@10 -- # set +x 00:15:38.359 00:49:50 -- target/zcopy.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192 00:15:38.359 00:49:50 -- target/zcopy.sh@37 -- # gen_nvmf_target_json 00:15:38.359 00:49:50 -- nvmf/common.sh@520 -- # config=() 00:15:38.359 00:49:50 -- nvmf/common.sh@520 -- # local subsystem config 00:15:38.359 00:49:50 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:15:38.359 00:49:50 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:15:38.359 { 00:15:38.359 "params": { 00:15:38.359 "name": "Nvme$subsystem", 00:15:38.359 "trtype": "$TEST_TRANSPORT", 00:15:38.359 "traddr": "$NVMF_FIRST_TARGET_IP", 00:15:38.359 "adrfam": "ipv4", 00:15:38.359 "trsvcid": "$NVMF_PORT", 00:15:38.359 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:15:38.359 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:15:38.359 "hdgst": ${hdgst:-false}, 00:15:38.359 "ddgst": ${ddgst:-false} 
00:15:38.359 }, 00:15:38.359 "method": "bdev_nvme_attach_controller" 00:15:38.359 } 00:15:38.359 EOF 00:15:38.359 )") 00:15:38.359 00:49:50 -- nvmf/common.sh@542 -- # cat 00:15:38.359 00:49:50 -- nvmf/common.sh@544 -- # jq . 00:15:38.359 [2024-12-03 00:49:50.666397] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:38.359 [2024-12-03 00:49:50.667575] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:38.359 00:49:50 -- nvmf/common.sh@545 -- # IFS=, 00:15:38.359 00:49:50 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:15:38.359 "params": { 00:15:38.359 "name": "Nvme1", 00:15:38.359 "trtype": "tcp", 00:15:38.359 "traddr": "10.0.0.2", 00:15:38.359 "adrfam": "ipv4", 00:15:38.359 "trsvcid": "4420", 00:15:38.359 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:15:38.359 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:15:38.359 "hdgst": false, 00:15:38.359 "ddgst": false 00:15:38.359 }, 00:15:38.359 "method": "bdev_nvme_attach_controller" 00:15:38.359 }' 00:15:38.359 2024/12/03 00:49:50 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:38.359 [2024-12-03 00:49:50.678352] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:38.359 [2024-12-03 00:49:50.678387] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:38.359 2024/12/03 00:49:50 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:38.359 [2024-12-03 00:49:50.690348] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:38.359 [2024-12-03 00:49:50.690378] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:38.359 2024/12/03 00:49:50 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:38.359 [2024-12-03 00:49:50.702351] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:38.359 [2024-12-03 00:49:50.702553] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:38.359 2024/12/03 00:49:50 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:38.359 [2024-12-03 00:49:50.712126] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:15:38.359 [2024-12-03 00:49:50.712216] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86438 ] 00:15:38.359 [2024-12-03 00:49:50.714361] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:38.359 [2024-12-03 00:49:50.714539] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:38.359 2024/12/03 00:49:50 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:38.359 [2024-12-03 00:49:50.726362] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:38.359 [2024-12-03 00:49:50.726517] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:38.359 2024/12/03 00:49:50 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:38.359 [2024-12-03 00:49:50.738364] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:38.359 [2024-12-03 00:49:50.738536] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:38.359 2024/12/03 00:49:50 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:38.359 [2024-12-03 00:49:50.750368] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:38.359 [2024-12-03 00:49:50.750530] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:38.359 2024/12/03 00:49:50 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:38.359 [2024-12-03 00:49:50.762369] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:38.359 [2024-12-03 00:49:50.762531] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:38.359 2024/12/03 00:49:50 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:38.359 [2024-12-03 00:49:50.774371] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:38.359 [2024-12-03 00:49:50.774403] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:38.359 2024/12/03 00:49:50 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:38.359 [2024-12-03 00:49:50.786372] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:38.359 [2024-12-03 00:49:50.786402] 
nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:38.360 2024/12/03 00:49:50 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:38.360 [2024-12-03 00:49:50.798373] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:38.360 [2024-12-03 00:49:50.798403] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:38.360 2024/12/03 00:49:50 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:38.360 [2024-12-03 00:49:50.810378] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:38.360 [2024-12-03 00:49:50.810407] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:38.360 2024/12/03 00:49:50 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:38.360 [2024-12-03 00:49:50.822381] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:38.360 [2024-12-03 00:49:50.822426] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:38.360 2024/12/03 00:49:50 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:38.360 [2024-12-03 00:49:50.834388] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:38.360 [2024-12-03 00:49:50.834433] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:38.360 2024/12/03 00:49:50 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:38.360 [2024-12-03 00:49:50.846389] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:38.360 [2024-12-03 00:49:50.846431] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:38.360 2024/12/03 00:49:50 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:38.360 [2024-12-03 00:49:50.851900] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:38.360 [2024-12-03 00:49:50.858391] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:38.360 [2024-12-03 00:49:50.858438] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:38.360 2024/12/03 00:49:50 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:38.360 [2024-12-03 00:49:50.870394] 
subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:38.360 [2024-12-03 00:49:50.870442] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:38.619 2024/12/03 00:49:50 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:38.619 [2024-12-03 00:49:50.882397] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:38.619 [2024-12-03 00:49:50.882449] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:38.619 2024/12/03 00:49:50 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:38.619 [2024-12-03 00:49:50.894400] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:38.619 [2024-12-03 00:49:50.894456] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:38.619 2024/12/03 00:49:50 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:38.619 [2024-12-03 00:49:50.906399] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:38.619 [2024-12-03 00:49:50.906450] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:38.619 2024/12/03 00:49:50 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:38.619 [2024-12-03 00:49:50.918403] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:38.619 [2024-12-03 00:49:50.918442] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:38.619 [2024-12-03 00:49:50.921062] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:38.619 2024/12/03 00:49:50 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:38.619 [2024-12-03 00:49:50.930406] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:38.619 [2024-12-03 00:49:50.930447] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:38.619 2024/12/03 00:49:50 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:38.619 [2024-12-03 00:49:50.942409] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:38.619 [2024-12-03 00:49:50.942453] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:38.619 2024/12/03 00:49:50 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error 
received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:38.619 [2024-12-03 00:49:50.954424] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:38.619 [2024-12-03 00:49:50.954452] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:38.619 2024/12/03 00:49:50 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:38.619 [2024-12-03 00:49:50.966428] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:38.619 [2024-12-03 00:49:50.966456] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:38.619 2024/12/03 00:49:50 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:38.619 [2024-12-03 00:49:50.978429] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:38.619 [2024-12-03 00:49:50.978457] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:38.619 2024/12/03 00:49:50 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:38.619 [2024-12-03 00:49:50.990432] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:38.619 [2024-12-03 00:49:50.990460] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:38.619 2024/12/03 00:49:50 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:38.619 [2024-12-03 00:49:51.002435] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:38.619 [2024-12-03 00:49:51.002463] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:38.620 2024/12/03 00:49:51 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:38.620 [2024-12-03 00:49:51.014439] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:38.620 [2024-12-03 00:49:51.014467] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:38.620 2024/12/03 00:49:51 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:38.620 [2024-12-03 00:49:51.026467] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:38.620 [2024-12-03 00:49:51.026500] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:38.620 2024/12/03 00:49:51 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] 
nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:38.620 [2024-12-03 00:49:51.038466] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:38.620 [2024-12-03 00:49:51.038509] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:38.620 2024/12/03 00:49:51 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:38.620 [2024-12-03 00:49:51.050474] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:38.620 [2024-12-03 00:49:51.050505] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:38.620 2024/12/03 00:49:51 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:38.620 [2024-12-03 00:49:51.062480] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:38.620 [2024-12-03 00:49:51.062512] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:38.620 2024/12/03 00:49:51 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:38.620 [2024-12-03 00:49:51.074480] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:38.620 [2024-12-03 00:49:51.074510] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:38.620 2024/12/03 00:49:51 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:38.620 [2024-12-03 00:49:51.086509] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:38.620 [2024-12-03 00:49:51.086541] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:38.620 Running I/O for 5 seconds... 
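Each Code=-32602 rejection above is the point of this pass: while bdevperf keeps the 5-second random read/write load running, the test re-submits nvmf_subsystem_add_ns for NSID 1, which the (briefly paused) subsystem already owns. The same rejection can be provoked by hand against a target that already has the TCP transport and listener from earlier in the trace, using the RPCs the trace itself runs (path, NQN and malloc sizes copied from the log; the second add is the one that fails):

    #!/usr/bin/env bash
    # Sketch: reproduce the "Requested NSID 1 already in use" JSON-RPC error.
    RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    NQN=nqn.2016-06.io.spdk:cnode1

    $RPC bdev_malloc_create 32 4096 -b malloc0                   # backing bdev, as in the trace
    $RPC nvmf_create_subsystem "$NQN" -a -s SPDK00000000000001 -m 10

    $RPC nvmf_subsystem_add_ns "$NQN" malloc0 -n 1               # first add: NSID 1 is created
    if ! $RPC nvmf_subsystem_add_ns "$NQN" malloc0 -n 1; then    # second add: rejected
      echo "NSID 1 already in use (Code=-32602), as expected"
    fi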
00:15:38.620 2024/12/03 00:49:51 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:38.620 [2024-12-03 00:49:51.100942] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:38.620 [2024-12-03 00:49:51.100976] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:38.620 2024/12/03 00:49:51 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:38.620 [2024-12-03 00:49:51.117059] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:38.620 [2024-12-03 00:49:51.117092] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:38.620 2024/12/03 00:49:51 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:38.879 [2024-12-03 00:49:51.134270] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:38.879 [2024-12-03 00:49:51.134305] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:38.879 2024/12/03 00:49:51 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:38.879 [2024-12-03 00:49:51.151545] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:38.879 [2024-12-03 00:49:51.151578] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:38.879 2024/12/03 00:49:51 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:38.879 [2024-12-03 00:49:51.168061] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:38.879 [2024-12-03 00:49:51.168094] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:38.879 2024/12/03 00:49:51 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:38.879 [2024-12-03 00:49:51.184261] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:38.879 [2024-12-03 00:49:51.184294] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:38.879 2024/12/03 00:49:51 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:38.879 [2024-12-03 00:49:51.201000] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:38.879 [2024-12-03 00:49:51.201033] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable 
to add namespace 00:15:38.879 2024/12/03 00:49:51 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:38.879 [2024-12-03 00:49:51.217344] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:38.879 [2024-12-03 00:49:51.217377] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:38.879 2024/12/03 00:49:51 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:38.879 [2024-12-03 00:49:51.235343] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:38.879 [2024-12-03 00:49:51.235376] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:38.879 2024/12/03 00:49:51 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:38.879 [2024-12-03 00:49:51.249234] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:38.879 [2024-12-03 00:49:51.249267] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:38.879 2024/12/03 00:49:51 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:38.879 [2024-12-03 00:49:51.264405] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:38.879 [2024-12-03 00:49:51.264448] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:38.879 2024/12/03 00:49:51 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:38.879 [2024-12-03 00:49:51.282098] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:38.879 [2024-12-03 00:49:51.282131] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:38.879 2024/12/03 00:49:51 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:38.879 [2024-12-03 00:49:51.298635] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:38.879 [2024-12-03 00:49:51.298668] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:38.879 2024/12/03 00:49:51 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:38.879 [2024-12-03 00:49:51.314953] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:38.879 [2024-12-03 00:49:51.314986] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: 
*ERROR*: Unable to add namespace 00:15:38.879 2024/12/03 00:49:51 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:38.879 [2024-12-03 00:49:51.331763] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:38.879 [2024-12-03 00:49:51.331797] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:38.879 2024/12/03 00:49:51 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:38.879 [2024-12-03 00:49:51.348478] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:38.879 [2024-12-03 00:49:51.348511] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:38.879 2024/12/03 00:49:51 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:38.880 [2024-12-03 00:49:51.364945] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:38.880 [2024-12-03 00:49:51.364977] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:38.880 2024/12/03 00:49:51 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:38.880 [2024-12-03 00:49:51.382185] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:38.880 [2024-12-03 00:49:51.382238] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:38.880 2024/12/03 00:49:51 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:39.139 [2024-12-03 00:49:51.398454] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:39.139 [2024-12-03 00:49:51.398487] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:39.139 2024/12/03 00:49:51 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:39.139 [2024-12-03 00:49:51.416094] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:39.139 [2024-12-03 00:49:51.416128] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:39.139 2024/12/03 00:49:51 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:39.139 [2024-12-03 00:49:51.432970] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:39.139 [2024-12-03 00:49:51.433004] 
nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:39.139 2024/12/03 00:49:51 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:39.139 [2024-12-03 00:49:51.449594] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:39.139 [2024-12-03 00:49:51.449629] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:39.139 2024/12/03 00:49:51 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:39.139 [2024-12-03 00:49:51.466393] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:39.139 [2024-12-03 00:49:51.466436] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:39.139 2024/12/03 00:49:51 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:39.139 [2024-12-03 00:49:51.483520] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:39.139 [2024-12-03 00:49:51.483552] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:39.139 2024/12/03 00:49:51 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:39.139 [2024-12-03 00:49:51.499008] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:39.139 [2024-12-03 00:49:51.499039] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:39.139 2024/12/03 00:49:51 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:39.139 [2024-12-03 00:49:51.515967] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:39.139 [2024-12-03 00:49:51.516000] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:39.139 2024/12/03 00:49:51 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:39.139 [2024-12-03 00:49:51.533033] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:39.139 [2024-12-03 00:49:51.533065] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:39.139 2024/12/03 00:49:51 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:39.139 [2024-12-03 00:49:51.548849] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:39.139 [2024-12-03 
00:49:51.548880] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:39.139 2024/12/03 00:49:51 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:39.140 [2024-12-03 00:49:51.565434] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:39.140 [2024-12-03 00:49:51.565464] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:39.140 2024/12/03 00:49:51 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:39.140 [2024-12-03 00:49:51.583043] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:39.140 [2024-12-03 00:49:51.583075] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:39.140 2024/12/03 00:49:51 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:39.140 [2024-12-03 00:49:51.597908] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:39.140 [2024-12-03 00:49:51.597953] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:39.140 2024/12/03 00:49:51 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:39.140 [2024-12-03 00:49:51.609568] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:39.140 [2024-12-03 00:49:51.609610] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:39.140 2024/12/03 00:49:51 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:39.140 [2024-12-03 00:49:51.625149] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:39.140 [2024-12-03 00:49:51.625195] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:39.140 2024/12/03 00:49:51 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:39.140 [2024-12-03 00:49:51.642105] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:39.140 [2024-12-03 00:49:51.642138] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:39.140 2024/12/03 00:49:51 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:39.399 [2024-12-03 00:49:51.658774] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 
00:15:39.399 [2024-12-03 00:49:51.658806] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:39.399 2024/12/03 00:49:51 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:39.399 [2024-12-03 00:49:51.675593] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:39.399 [2024-12-03 00:49:51.675626] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:39.399 2024/12/03 00:49:51 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:39.399 [2024-12-03 00:49:51.691596] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:39.399 [2024-12-03 00:49:51.691629] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:39.399 2024/12/03 00:49:51 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:39.399 [2024-12-03 00:49:51.708840] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:39.399 [2024-12-03 00:49:51.708872] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:39.399 2024/12/03 00:49:51 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:39.399 [2024-12-03 00:49:51.725648] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:39.399 [2024-12-03 00:49:51.725693] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:39.399 2024/12/03 00:49:51 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:39.399 [2024-12-03 00:49:51.741845] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:39.399 [2024-12-03 00:49:51.741878] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:39.400 2024/12/03 00:49:51 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:39.400 [2024-12-03 00:49:51.759301] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:39.400 [2024-12-03 00:49:51.759334] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:39.400 2024/12/03 00:49:51 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:39.400 [2024-12-03 00:49:51.775210] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 
already in use 00:15:39.400 [2024-12-03 00:49:51.775255] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:39.400 2024/12/03 00:49:51 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:39.400 [2024-12-03 00:49:51.791038] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:39.400 [2024-12-03 00:49:51.791071] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:39.400 2024/12/03 00:49:51 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:39.400 [2024-12-03 00:49:51.808957] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:39.400 [2024-12-03 00:49:51.808990] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:39.400 2024/12/03 00:49:51 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:39.400 [2024-12-03 00:49:51.825165] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:39.400 [2024-12-03 00:49:51.825197] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:39.400 2024/12/03 00:49:51 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:39.400 [2024-12-03 00:49:51.841400] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:39.400 [2024-12-03 00:49:51.841452] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:39.400 2024/12/03 00:49:51 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:39.400 [2024-12-03 00:49:51.857672] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:39.400 [2024-12-03 00:49:51.857706] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:39.400 2024/12/03 00:49:51 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:39.400 [2024-12-03 00:49:51.874650] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:39.400 [2024-12-03 00:49:51.874683] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:39.400 2024/12/03 00:49:51 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:39.400 [2024-12-03 00:49:51.891834] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: 
Requested NSID 1 already in use 00:15:39.400 [2024-12-03 00:49:51.891867] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:39.400 2024/12/03 00:49:51 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:39.400 [2024-12-03 00:49:51.908148] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:39.400 [2024-12-03 00:49:51.908181] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:39.400 2024/12/03 00:49:51 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:39.658 [2024-12-03 00:49:51.924976] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:39.658 [2024-12-03 00:49:51.925019] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:39.658 2024/12/03 00:49:51 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:39.658 [2024-12-03 00:49:51.940990] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:39.658 [2024-12-03 00:49:51.941023] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:39.658 2024/12/03 00:49:51 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:39.658 [2024-12-03 00:49:51.957535] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:39.658 [2024-12-03 00:49:51.957579] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:39.658 2024/12/03 00:49:51 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:39.658 [2024-12-03 00:49:51.975103] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:39.658 [2024-12-03 00:49:51.975136] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:39.658 2024/12/03 00:49:51 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:39.658 [2024-12-03 00:49:51.991204] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:39.658 [2024-12-03 00:49:51.991238] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:39.658 2024/12/03 00:49:51 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:39.658 [2024-12-03 00:49:52.007984] 
subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:39.658 [2024-12-03 00:49:52.008017] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:39.658 2024/12/03 00:49:52 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:39.658 [2024-12-03 00:49:52.024874] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:39.658 [2024-12-03 00:49:52.024907] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:39.658 2024/12/03 00:49:52 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:39.658 [2024-12-03 00:49:52.040648] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:39.658 [2024-12-03 00:49:52.040681] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:39.658 2024/12/03 00:49:52 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:39.658 [2024-12-03 00:49:52.057951] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:39.658 [2024-12-03 00:49:52.057984] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:39.658 2024/12/03 00:49:52 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:39.658 [2024-12-03 00:49:52.072615] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:39.658 [2024-12-03 00:49:52.072648] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:39.658 2024/12/03 00:49:52 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:39.658 [2024-12-03 00:49:52.089104] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:39.658 [2024-12-03 00:49:52.089136] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:39.658 2024/12/03 00:49:52 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:39.658 [2024-12-03 00:49:52.105147] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:39.658 [2024-12-03 00:49:52.105180] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:39.658 2024/12/03 00:49:52 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:39.658 [2024-12-03 
00:49:52.122281] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:39.658 [2024-12-03 00:49:52.122315] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:39.658 2024/12/03 00:49:52 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:39.658 [2024-12-03 00:49:52.138179] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:39.658 [2024-12-03 00:49:52.138249] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:39.658 2024/12/03 00:49:52 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:39.658 [2024-12-03 00:49:52.154998] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:39.658 [2024-12-03 00:49:52.155031] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:39.658 2024/12/03 00:49:52 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:39.658 [2024-12-03 00:49:52.171016] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:39.658 [2024-12-03 00:49:52.171059] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:39.917 2024/12/03 00:49:52 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:39.917 [2024-12-03 00:49:52.187970] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:39.917 [2024-12-03 00:49:52.188003] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:39.917 2024/12/03 00:49:52 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:39.917 [2024-12-03 00:49:52.205189] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:39.917 [2024-12-03 00:49:52.205222] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:39.917 2024/12/03 00:49:52 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:39.917 [2024-12-03 00:49:52.221174] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:39.917 [2024-12-03 00:49:52.221206] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:39.917 2024/12/03 00:49:52 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 
00:15:39.918 [2024-12-03 00:49:52.237516] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:39.918 [2024-12-03 00:49:52.237548] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:39.918 2024/12/03 00:49:52 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:39.918 [2024-12-03 00:49:52.254651] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:39.918 [2024-12-03 00:49:52.254683] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:39.918 2024/12/03 00:49:52 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:39.918 [2024-12-03 00:49:52.271511] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:39.918 [2024-12-03 00:49:52.271542] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:39.918 2024/12/03 00:49:52 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:39.918 [2024-12-03 00:49:52.288711] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:39.918 [2024-12-03 00:49:52.288743] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:39.918 2024/12/03 00:49:52 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:39.918 [2024-12-03 00:49:52.304812] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:39.918 [2024-12-03 00:49:52.304844] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:39.918 2024/12/03 00:49:52 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:39.918 [2024-12-03 00:49:52.320704] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:39.918 [2024-12-03 00:49:52.320736] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:39.918 2024/12/03 00:49:52 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:39.918 [2024-12-03 00:49:52.337656] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:39.918 [2024-12-03 00:49:52.337687] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:39.918 2024/12/03 00:49:52 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 
Msg=Invalid parameters 00:15:39.918 [2024-12-03 00:49:52.354326] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:39.918 [2024-12-03 00:49:52.354359] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:39.918 2024/12/03 00:49:52 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:39.918 [2024-12-03 00:49:52.370816] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:39.918 [2024-12-03 00:49:52.370860] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:39.918 2024/12/03 00:49:52 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:39.918 [2024-12-03 00:49:52.386671] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:39.918 [2024-12-03 00:49:52.386704] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:39.918 2024/12/03 00:49:52 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:39.918 [2024-12-03 00:49:52.403529] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:39.918 [2024-12-03 00:49:52.403561] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:39.918 2024/12/03 00:49:52 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:39.918 [2024-12-03 00:49:52.420068] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:39.918 [2024-12-03 00:49:52.420101] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:39.918 2024/12/03 00:49:52 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:40.177 [2024-12-03 00:49:52.437446] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:40.177 [2024-12-03 00:49:52.437489] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:40.177 2024/12/03 00:49:52 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:40.177 [2024-12-03 00:49:52.452591] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:40.177 [2024-12-03 00:49:52.452625] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:40.177 2024/12/03 00:49:52 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, 
err: Code=-32602 Msg=Invalid parameters 00:15:40.177 [2024-12-03 00:49:52.470439] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:40.177 [2024-12-03 00:49:52.470483] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:40.177 2024/12/03 00:49:52 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:40.177 [2024-12-03 00:49:52.487243] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:40.177 [2024-12-03 00:49:52.487275] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:40.177 2024/12/03 00:49:52 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:40.177 [2024-12-03 00:49:52.504353] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:40.178 [2024-12-03 00:49:52.504385] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:40.178 2024/12/03 00:49:52 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:40.178 [2024-12-03 00:49:52.519304] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:40.178 [2024-12-03 00:49:52.519338] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:40.178 2024/12/03 00:49:52 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:40.178 [2024-12-03 00:49:52.534760] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:40.178 [2024-12-03 00:49:52.534792] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:40.178 2024/12/03 00:49:52 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:40.178 [2024-12-03 00:49:52.552500] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:40.178 [2024-12-03 00:49:52.552532] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:40.178 2024/12/03 00:49:52 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:40.178 [2024-12-03 00:49:52.567297] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:40.178 [2024-12-03 00:49:52.567331] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:40.178 2024/12/03 00:49:52 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for 
nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:40.178 [2024-12-03 00:49:52.584555] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:40.178 [2024-12-03 00:49:52.584587] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:40.178 2024/12/03 00:49:52 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:40.178 [2024-12-03 00:49:52.600654] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:40.178 [2024-12-03 00:49:52.600687] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:40.178 2024/12/03 00:49:52 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:40.178 [2024-12-03 00:49:52.618284] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:40.178 [2024-12-03 00:49:52.618316] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:40.178 2024/12/03 00:49:52 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:40.178 [2024-12-03 00:49:52.635392] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:40.178 [2024-12-03 00:49:52.635460] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:40.178 2024/12/03 00:49:52 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:40.178 [2024-12-03 00:49:52.652102] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:40.178 [2024-12-03 00:49:52.652134] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:40.178 2024/12/03 00:49:52 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:40.178 [2024-12-03 00:49:52.668023] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:40.178 [2024-12-03 00:49:52.668070] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:40.178 2024/12/03 00:49:52 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:40.178 [2024-12-03 00:49:52.685694] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:40.178 [2024-12-03 00:49:52.685738] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:40.178 2024/12/03 00:49:52 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: 
error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:40.437 [2024-12-03 00:49:52.700111] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:40.437 [2024-12-03 00:49:52.700154] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:40.437 2024/12/03 00:49:52 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:40.437 [2024-12-03 00:49:52.716281] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:40.437 [2024-12-03 00:49:52.716314] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:40.437 2024/12/03 00:49:52 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:40.437 [2024-12-03 00:49:52.732750] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:40.437 [2024-12-03 00:49:52.732793] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:40.437 2024/12/03 00:49:52 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:40.437 [2024-12-03 00:49:52.749808] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:40.437 [2024-12-03 00:49:52.749840] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:40.437 2024/12/03 00:49:52 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:40.437 [2024-12-03 00:49:52.766667] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:40.437 [2024-12-03 00:49:52.766713] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:40.437 2024/12/03 00:49:52 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:40.437 [2024-12-03 00:49:52.782939] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:40.438 [2024-12-03 00:49:52.782971] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:40.438 2024/12/03 00:49:52 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:40.438 [2024-12-03 00:49:52.799365] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:40.438 [2024-12-03 00:49:52.799398] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:40.438 2024/12/03 00:49:52 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] 
nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:40.438 [2024-12-03 00:49:52.816746] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:40.438 [2024-12-03 00:49:52.816779] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:40.438 2024/12/03 00:49:52 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:40.438 [2024-12-03 00:49:52.833073] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:40.438 [2024-12-03 00:49:52.833105] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:40.438 2024/12/03 00:49:52 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:40.438 [2024-12-03 00:49:52.850136] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:40.438 [2024-12-03 00:49:52.850168] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:40.438 2024/12/03 00:49:52 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:40.438 [2024-12-03 00:49:52.866501] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:40.438 [2024-12-03 00:49:52.866558] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:40.438 2024/12/03 00:49:52 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:40.438 [2024-12-03 00:49:52.882480] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:40.438 [2024-12-03 00:49:52.882524] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:40.438 2024/12/03 00:49:52 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:40.438 [2024-12-03 00:49:52.899130] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:40.438 [2024-12-03 00:49:52.899174] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:40.438 2024/12/03 00:49:52 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:40.438 [2024-12-03 00:49:52.916226] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:40.438 [2024-12-03 00:49:52.916258] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:40.438 2024/12/03 00:49:52 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: 
map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:40.438 [2024-12-03 00:49:52.933334] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:40.438 [2024-12-03 00:49:52.933367] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:40.438 2024/12/03 00:49:52 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:40.438 [2024-12-03 00:49:52.948603] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:40.438 [2024-12-03 00:49:52.948649] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:40.438 2024/12/03 00:49:52 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:40.698 [2024-12-03 00:49:52.960266] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:40.698 [2024-12-03 00:49:52.960299] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:40.698 2024/12/03 00:49:52 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:40.698 [2024-12-03 00:49:52.976305] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:40.698 [2024-12-03 00:49:52.976338] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:40.698 2024/12/03 00:49:52 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:40.698 [2024-12-03 00:49:52.993435] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:40.698 [2024-12-03 00:49:52.993466] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:40.698 2024/12/03 00:49:52 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:40.698 [2024-12-03 00:49:53.008436] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:40.698 [2024-12-03 00:49:53.008480] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:40.698 2024/12/03 00:49:53 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:40.698 [2024-12-03 00:49:53.025014] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:40.698 [2024-12-03 00:49:53.025057] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:40.698 2024/12/03 00:49:53 error on JSON-RPC call, method: 
nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:40.698 [2024-12-03 00:49:53.042717] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:40.698 [2024-12-03 00:49:53.042761] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:40.698 2024/12/03 00:49:53 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:40.698 [2024-12-03 00:49:53.057636] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:40.698 [2024-12-03 00:49:53.057669] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:40.698 2024/12/03 00:49:53 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:40.698 [2024-12-03 00:49:53.075227] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:40.698 [2024-12-03 00:49:53.075272] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:40.698 2024/12/03 00:49:53 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:40.698 [2024-12-03 00:49:53.090887] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:40.698 [2024-12-03 00:49:53.090919] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:40.698 2024/12/03 00:49:53 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:40.698 [2024-12-03 00:49:53.107927] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:40.698 [2024-12-03 00:49:53.107959] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:40.698 2024/12/03 00:49:53 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:40.698 [2024-12-03 00:49:53.125247] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:40.698 [2024-12-03 00:49:53.125280] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:40.698 2024/12/03 00:49:53 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:40.698 [2024-12-03 00:49:53.140309] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:40.698 [2024-12-03 00:49:53.140343] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:40.698 2024/12/03 00:49:53 error on 
JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:40.698 [2024-12-03 00:49:53.157802] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:40.698 [2024-12-03 00:49:53.157834] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:40.698 2024/12/03 00:49:53 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:40.698 [2024-12-03 00:49:53.174267] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:40.698 [2024-12-03 00:49:53.174311] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:40.698 2024/12/03 00:49:53 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:40.698 [2024-12-03 00:49:53.191161] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:40.698 [2024-12-03 00:49:53.191205] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:40.698 2024/12/03 00:49:53 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:40.698 [2024-12-03 00:49:53.208517] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:40.698 [2024-12-03 00:49:53.208549] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:40.957 2024/12/03 00:49:53 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:40.957 [2024-12-03 00:49:53.224280] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:40.957 [2024-12-03 00:49:53.224313] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:40.957 2024/12/03 00:49:53 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:40.957 [2024-12-03 00:49:53.242063] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:40.957 [2024-12-03 00:49:53.242095] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:40.957 2024/12/03 00:49:53 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:40.957 [2024-12-03 00:49:53.257551] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:40.957 [2024-12-03 00:49:53.257583] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:40.957 2024/12/03 
00:49:53 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:40.957 [2024-12-03 00:49:53.274955] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:40.957 [2024-12-03 00:49:53.274998] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:40.957 2024/12/03 00:49:53 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:40.957 [2024-12-03 00:49:53.290285] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:40.957 [2024-12-03 00:49:53.290327] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:40.957 2024/12/03 00:49:53 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:40.957 [2024-12-03 00:49:53.307878] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:40.957 [2024-12-03 00:49:53.307910] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:40.957 2024/12/03 00:49:53 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:40.957 [2024-12-03 00:49:53.323823] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:40.957 [2024-12-03 00:49:53.323863] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:40.957 2024/12/03 00:49:53 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:40.957 [2024-12-03 00:49:53.341683] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:40.957 [2024-12-03 00:49:53.341716] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:40.957 2024/12/03 00:49:53 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:40.957 [2024-12-03 00:49:53.357356] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:40.957 [2024-12-03 00:49:53.357400] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:40.957 2024/12/03 00:49:53 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:40.957 [2024-12-03 00:49:53.374686] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:40.957 [2024-12-03 00:49:53.374719] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 
00:15:40.957 2024/12/03 00:49:53 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:40.957 [2024-12-03 00:49:53.391460] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:40.957 [2024-12-03 00:49:53.391492] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:40.957 2024/12/03 00:49:53 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:40.957 [2024-12-03 00:49:53.408241] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:40.957 [2024-12-03 00:49:53.408273] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:40.957 2024/12/03 00:49:53 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:40.957 [2024-12-03 00:49:53.424996] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:40.958 [2024-12-03 00:49:53.425029] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:40.958 2024/12/03 00:49:53 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:40.958 [2024-12-03 00:49:53.441888] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:40.958 [2024-12-03 00:49:53.441920] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:40.958 2024/12/03 00:49:53 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:40.958 [2024-12-03 00:49:53.458509] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:40.958 [2024-12-03 00:49:53.458541] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:40.958 2024/12/03 00:49:53 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:41.217 [2024-12-03 00:49:53.475681] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:41.217 [2024-12-03 00:49:53.475715] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:41.217 2024/12/03 00:49:53 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:41.217 [2024-12-03 00:49:53.492648] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:41.217 [2024-12-03 00:49:53.492683] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable 
to add namespace 00:15:41.217 2024/12/03 00:49:53 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:41.217 [2024-12-03 00:49:53.508702] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:41.217 [2024-12-03 00:49:53.508735] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:41.217 2024/12/03 00:49:53 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:41.217 [2024-12-03 00:49:53.525475] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:41.217 [2024-12-03 00:49:53.525506] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:41.217 2024/12/03 00:49:53 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:41.217 [2024-12-03 00:49:53.541832] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:41.217 [2024-12-03 00:49:53.541865] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:41.217 2024/12/03 00:49:53 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:41.217 [2024-12-03 00:49:53.559352] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:41.217 [2024-12-03 00:49:53.559385] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:41.217 2024/12/03 00:49:53 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:41.217 [2024-12-03 00:49:53.573993] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:41.217 [2024-12-03 00:49:53.574038] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:41.217 2024/12/03 00:49:53 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:41.217 [2024-12-03 00:49:53.591030] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:41.217 [2024-12-03 00:49:53.591063] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:41.217 2024/12/03 00:49:53 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:41.217 [2024-12-03 00:49:53.608075] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:41.217 [2024-12-03 00:49:53.608108] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: 
*ERROR*: Unable to add namespace 00:15:41.217 2024/12/03 00:49:53 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:41.217 [2024-12-03 00:49:53.624219] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:41.217 [2024-12-03 00:49:53.624251] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:41.217 2024/12/03 00:49:53 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:41.217 [2024-12-03 00:49:53.640500] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:41.217 [2024-12-03 00:49:53.640542] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:41.217 2024/12/03 00:49:53 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:41.217 [2024-12-03 00:49:53.657394] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:41.217 [2024-12-03 00:49:53.657440] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:41.217 2024/12/03 00:49:53 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:41.217 [2024-12-03 00:49:53.673398] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:41.217 [2024-12-03 00:49:53.673457] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:41.217 2024/12/03 00:49:53 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:41.217 [2024-12-03 00:49:53.690873] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:41.217 [2024-12-03 00:49:53.690905] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:41.217 2024/12/03 00:49:53 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:41.217 [2024-12-03 00:49:53.707151] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:41.217 [2024-12-03 00:49:53.707183] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:41.217 2024/12/03 00:49:53 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:41.217 [2024-12-03 00:49:53.722835] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:41.217 [2024-12-03 00:49:53.722869] 
nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:41.217 2024/12/03 00:49:53 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:41.477 [2024-12-03 00:49:53.740345] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:41.477 [2024-12-03 00:49:53.740379] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:41.477 2024/12/03 00:49:53 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:41.477 [2024-12-03 00:49:53.756552] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:41.477 [2024-12-03 00:49:53.756583] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:41.477 2024/12/03 00:49:53 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:41.477 [2024-12-03 00:49:53.773817] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:41.477 [2024-12-03 00:49:53.773862] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:41.477 2024/12/03 00:49:53 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:41.477 [2024-12-03 00:49:53.790780] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:41.477 [2024-12-03 00:49:53.790823] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:41.477 2024/12/03 00:49:53 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:41.477 [2024-12-03 00:49:53.806346] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:41.477 [2024-12-03 00:49:53.806389] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:41.477 2024/12/03 00:49:53 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:41.477 [2024-12-03 00:49:53.823528] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:41.477 [2024-12-03 00:49:53.823559] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:41.477 2024/12/03 00:49:53 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:41.477 [2024-12-03 00:49:53.839683] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:41.477 [2024-12-03 
00:49:53.839726] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:41.477 2024/12/03 00:49:53 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:41.477 [2024-12-03 00:49:53.856534] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:41.477 [2024-12-03 00:49:53.856577] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:41.477 2024/12/03 00:49:53 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:41.477 [2024-12-03 00:49:53.873369] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:41.477 [2024-12-03 00:49:53.873428] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:41.477 2024/12/03 00:49:53 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:41.477 [2024-12-03 00:49:53.889851] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:41.477 [2024-12-03 00:49:53.889897] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:41.477 2024/12/03 00:49:53 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:41.477 [2024-12-03 00:49:53.905635] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:41.477 [2024-12-03 00:49:53.905678] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:41.477 2024/12/03 00:49:53 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:41.477 [2024-12-03 00:49:53.923670] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:41.477 [2024-12-03 00:49:53.923704] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:41.477 2024/12/03 00:49:53 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:41.477 [2024-12-03 00:49:53.938926] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:41.477 [2024-12-03 00:49:53.938959] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:41.477 2024/12/03 00:49:53 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:41.477 [2024-12-03 00:49:53.956356] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 
00:15:41.477 [2024-12-03 00:49:53.956388] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:41.477 2024/12/03 00:49:53 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:41.477 [2024-12-03 00:49:53.973108] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:41.477 [2024-12-03 00:49:53.973151] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:41.477 2024/12/03 00:49:53 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:41.477 [2024-12-03 00:49:53.990596] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:41.477 [2024-12-03 00:49:53.990628] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:41.736 2024/12/03 00:49:53 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:41.736 [2024-12-03 00:49:54.006946] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:41.736 [2024-12-03 00:49:54.006978] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:41.736 2024/12/03 00:49:54 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:41.736 [2024-12-03 00:49:54.023775] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:41.736 [2024-12-03 00:49:54.023809] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:41.736 2024/12/03 00:49:54 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:41.736 [2024-12-03 00:49:54.040833] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:41.736 [2024-12-03 00:49:54.040865] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:41.736 2024/12/03 00:49:54 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:41.736 [2024-12-03 00:49:54.056775] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:41.736 [2024-12-03 00:49:54.056807] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:41.736 2024/12/03 00:49:54 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:41.736 [2024-12-03 00:49:54.073066] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 
already in use 00:15:41.736 [2024-12-03 00:49:54.073098] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:41.736 2024/12/03 00:49:54 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:41.736 [2024-12-03 00:49:54.089842] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:41.736 [2024-12-03 00:49:54.089884] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:41.736 2024/12/03 00:49:54 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:41.736 [2024-12-03 00:49:54.106423] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:41.736 [2024-12-03 00:49:54.106467] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:41.736 2024/12/03 00:49:54 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:41.736 [2024-12-03 00:49:54.123490] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:41.736 [2024-12-03 00:49:54.123522] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:41.736 2024/12/03 00:49:54 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:41.737 [2024-12-03 00:49:54.139285] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:41.737 [2024-12-03 00:49:54.139317] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:41.737 2024/12/03 00:49:54 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:41.737 [2024-12-03 00:49:54.154859] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:41.737 [2024-12-03 00:49:54.154904] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:41.737 2024/12/03 00:49:54 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:41.737 [2024-12-03 00:49:54.172127] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:41.737 [2024-12-03 00:49:54.172160] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:41.737 2024/12/03 00:49:54 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:41.737 [2024-12-03 00:49:54.188374] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: 
Requested NSID 1 already in use 00:15:41.737 [2024-12-03 00:49:54.188407] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:41.737 2024/12/03 00:49:54 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:41.737 [2024-12-03 00:49:54.205445] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:41.737 [2024-12-03 00:49:54.205476] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:41.737 2024/12/03 00:49:54 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:41.737 [2024-12-03 00:49:54.222472] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:41.737 [2024-12-03 00:49:54.222505] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:41.737 2024/12/03 00:49:54 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:41.737 [2024-12-03 00:49:54.238424] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:41.737 [2024-12-03 00:49:54.238455] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:41.737 2024/12/03 00:49:54 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:41.995 [2024-12-03 00:49:54.255408] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:41.995 [2024-12-03 00:49:54.255451] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:41.995 2024/12/03 00:49:54 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:41.995 [2024-12-03 00:49:54.272160] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:41.996 [2024-12-03 00:49:54.272192] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:41.996 2024/12/03 00:49:54 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:41.996 [2024-12-03 00:49:54.288165] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:41.996 [2024-12-03 00:49:54.288198] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:41.996 2024/12/03 00:49:54 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:41.996 [2024-12-03 00:49:54.305632] 
subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:41.996 [2024-12-03 00:49:54.305665] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:41.996 2024/12/03 00:49:54 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:41.996 [2024-12-03 00:49:54.321793] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:41.996 [2024-12-03 00:49:54.321826] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:41.996 2024/12/03 00:49:54 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:41.996 [2024-12-03 00:49:54.338629] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:41.996 [2024-12-03 00:49:54.338675] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:41.996 2024/12/03 00:49:54 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:41.996 [2024-12-03 00:49:54.355142] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:41.996 [2024-12-03 00:49:54.355175] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:41.996 2024/12/03 00:49:54 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:41.996 [2024-12-03 00:49:54.372361] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:41.996 [2024-12-03 00:49:54.372394] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:41.996 2024/12/03 00:49:54 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:41.996 [2024-12-03 00:49:54.386890] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:41.996 [2024-12-03 00:49:54.386922] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:41.996 2024/12/03 00:49:54 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:41.996 [2024-12-03 00:49:54.402548] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:41.996 [2024-12-03 00:49:54.402592] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:41.996 2024/12/03 00:49:54 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:41.996 [2024-12-03 
00:49:54.419065] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:41.996 [2024-12-03 00:49:54.419097] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:41.996 2024/12/03 00:49:54 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:41.996 [2024-12-03 00:49:54.436332] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:41.996 [2024-12-03 00:49:54.436379] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:41.996 2024/12/03 00:49:54 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:41.996 [2024-12-03 00:49:54.451226] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:41.996 [2024-12-03 00:49:54.451259] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:41.996 2024/12/03 00:49:54 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:41.996 [2024-12-03 00:49:54.462571] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:41.996 [2024-12-03 00:49:54.462605] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:41.996 2024/12/03 00:49:54 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:41.996 [2024-12-03 00:49:54.479914] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:41.996 [2024-12-03 00:49:54.479947] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:41.996 2024/12/03 00:49:54 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:41.996 [2024-12-03 00:49:54.495102] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:41.996 [2024-12-03 00:49:54.495135] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:41.996 2024/12/03 00:49:54 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:42.255 [2024-12-03 00:49:54.512606] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:42.255 [2024-12-03 00:49:54.512647] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:42.255 2024/12/03 00:49:54 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 
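The block of failures above is one negative-path check repeated in a loop: each iteration re-issues the nvmf_subsystem_add_ns JSON-RPC call against nqn.2016-06.io.spdk:cnode1 with a namespace that reuses NSID 1, and the target rejects it with Code=-32602 (Invalid parameters) because that NSID is already claimed, as the subsystem.c "Requested NSID 1 already in use" lines show. Below is a minimal sketch of the request the test keeps sending. The method name and the params structure (nqn plus a nested namespace object with bdev_name and nsid) are taken directly from the entries above; the Unix-socket path /var/tmp/spdk.sock and the plain-socket framing are assumptions based on SPDK defaults, not something recorded in this log.

    #!/usr/bin/env python3
    # Illustrative sketch only (not part of this test run): replay the
    # nvmf_subsystem_add_ns call whose repeated failure fills the log above.
    # Assumes the SPDK target is listening on its default RPC socket.
    import json
    import socket

    def spdk_rpc(method, params, sock_path="/var/tmp/spdk.sock"):
        """Send one JSON-RPC 2.0 request to the SPDK app and return the reply."""
        req = {"jsonrpc": "2.0", "id": 1, "method": method, "params": params}
        with socket.socket(socket.AF_UNIX, socket.SOCK_STREAM) as sock:
            sock.connect(sock_path)
            sock.sendall(json.dumps(req).encode())
            buf = b""
            while True:
                chunk = sock.recv(4096)
                if not chunk:
                    raise ConnectionError("socket closed before a full reply arrived")
                buf += chunk
                try:
                    return json.loads(buf)   # full JSON reply assembled
                except json.JSONDecodeError:
                    continue                 # partial reply, keep reading

    params = {
        "nqn": "nqn.2016-06.io.spdk:cnode1",
        "namespace": {"bdev_name": "malloc0", "nsid": 1},
    }
    print(spdk_rpc("nvmf_subsystem_add_ns", params))  # first add: succeeds
    print(spdk_rpc("nvmf_subsystem_add_ns", params))  # repeat: NSID 1 already in use

Run twice against the same subsystem, the first call returns normally and every later call reproduces exactly the "Requested NSID 1 already in use" / Code=-32602 Msg=Invalid parameters pair that repeats throughout this section of the log.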
00:15:42.255 [2024-12-03 00:49:54.528852] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:42.255 [2024-12-03 00:49:54.528885] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:42.255 2024/12/03 00:49:54 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:42.255 [2024-12-03 00:49:54.544463] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:42.255 [2024-12-03 00:49:54.544495] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:42.255 2024/12/03 00:49:54 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:42.255 [2024-12-03 00:49:54.562061] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:42.255 [2024-12-03 00:49:54.562094] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:42.255 2024/12/03 00:49:54 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:42.255 [2024-12-03 00:49:54.578093] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:42.255 [2024-12-03 00:49:54.578126] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:42.255 2024/12/03 00:49:54 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:42.255 [2024-12-03 00:49:54.594004] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:42.255 [2024-12-03 00:49:54.594037] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:42.255 2024/12/03 00:49:54 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:42.255 [2024-12-03 00:49:54.611400] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:42.255 [2024-12-03 00:49:54.611444] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:42.255 2024/12/03 00:49:54 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:42.255 [2024-12-03 00:49:54.627633] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:42.255 [2024-12-03 00:49:54.627666] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:42.255 2024/12/03 00:49:54 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 
Msg=Invalid parameters 00:15:42.255 [2024-12-03 00:49:54.644505] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:42.255 [2024-12-03 00:49:54.644538] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:42.255 2024/12/03 00:49:54 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:42.255 [2024-12-03 00:49:54.660576] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:42.255 [2024-12-03 00:49:54.660607] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:42.255 2024/12/03 00:49:54 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:42.255 [2024-12-03 00:49:54.678145] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:42.255 [2024-12-03 00:49:54.678178] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:42.255 2024/12/03 00:49:54 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:42.255 [2024-12-03 00:49:54.693237] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:42.255 [2024-12-03 00:49:54.693270] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:42.255 2024/12/03 00:49:54 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:42.255 [2024-12-03 00:49:54.710337] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:42.255 [2024-12-03 00:49:54.710369] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:42.255 2024/12/03 00:49:54 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:42.255 [2024-12-03 00:49:54.726224] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:42.255 [2024-12-03 00:49:54.726266] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:42.255 2024/12/03 00:49:54 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:42.255 [2024-12-03 00:49:54.744288] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:42.255 [2024-12-03 00:49:54.744322] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:42.255 2024/12/03 00:49:54 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, 
err: Code=-32602 Msg=Invalid parameters 00:15:42.255 [2024-12-03 00:49:54.759360] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:42.255 [2024-12-03 00:49:54.759393] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:42.256 2024/12/03 00:49:54 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:42.514 [2024-12-03 00:49:54.776905] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:42.514 [2024-12-03 00:49:54.776938] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:42.514 2024/12/03 00:49:54 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:42.514 [2024-12-03 00:49:54.792479] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:42.514 [2024-12-03 00:49:54.792510] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:42.514 2024/12/03 00:49:54 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:42.514 [2024-12-03 00:49:54.809345] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:42.514 [2024-12-03 00:49:54.809378] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:42.514 2024/12/03 00:49:54 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:42.514 [2024-12-03 00:49:54.825618] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:42.514 [2024-12-03 00:49:54.825650] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:42.514 2024/12/03 00:49:54 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:42.514 [2024-12-03 00:49:54.842038] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:42.514 [2024-12-03 00:49:54.842082] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:42.514 2024/12/03 00:49:54 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:42.514 [2024-12-03 00:49:54.858903] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:42.514 [2024-12-03 00:49:54.858936] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:42.514 2024/12/03 00:49:54 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for 
nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:42.514 [2024-12-03 00:49:54.874335] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:42.514 [2024-12-03 00:49:54.874367] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:42.514 2024/12/03 00:49:54 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:42.514 [2024-12-03 00:49:54.885928] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:42.514 [2024-12-03 00:49:54.885959] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:42.514 2024/12/03 00:49:54 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:42.514 [2024-12-03 00:49:54.902168] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:42.514 [2024-12-03 00:49:54.902210] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:42.514 2024/12/03 00:49:54 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:42.514 [2024-12-03 00:49:54.918216] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:42.514 [2024-12-03 00:49:54.918271] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:42.514 2024/12/03 00:49:54 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:42.514 [2024-12-03 00:49:54.935739] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:42.514 [2024-12-03 00:49:54.935777] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:42.514 2024/12/03 00:49:54 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:42.514 [2024-12-03 00:49:54.951978] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:42.514 [2024-12-03 00:49:54.952021] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:42.514 2024/12/03 00:49:54 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:42.514 [2024-12-03 00:49:54.969071] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:42.514 [2024-12-03 00:49:54.969104] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:42.514 2024/12/03 00:49:54 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: 
error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:42.514 [2024-12-03 00:49:54.984079] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:42.514 [2024-12-03 00:49:54.984125] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:42.514 2024/12/03 00:49:54 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:42.514 [2024-12-03 00:49:55.001328] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:42.514 [2024-12-03 00:49:55.001360] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:42.514 2024/12/03 00:49:55 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:42.514 [2024-12-03 00:49:55.018358] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:42.515 [2024-12-03 00:49:55.018390] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:42.515 2024/12/03 00:49:55 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:42.776 [2024-12-03 00:49:55.035191] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:42.776 [2024-12-03 00:49:55.035234] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:42.776 2024/12/03 00:49:55 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:42.776 [2024-12-03 00:49:55.050224] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:42.776 [2024-12-03 00:49:55.050267] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:42.776 2024/12/03 00:49:55 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:42.776 [2024-12-03 00:49:55.067632] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:42.776 [2024-12-03 00:49:55.067665] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:42.776 2024/12/03 00:49:55 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:42.776 [2024-12-03 00:49:55.084345] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:42.776 [2024-12-03 00:49:55.084378] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:42.776 2024/12/03 00:49:55 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] 
nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:42.776 [2024-12-03 00:49:55.101492] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:42.776 [2024-12-03 00:49:55.101524] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:42.776 2024/12/03 00:49:55 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:42.776 [2024-12-03 00:49:55.117395] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:42.776 [2024-12-03 00:49:55.117441] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:42.776 2024/12/03 00:49:55 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:42.776 [2024-12-03 00:49:55.134694] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:42.776 [2024-12-03 00:49:55.134738] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:42.776 2024/12/03 00:49:55 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:42.776 [2024-12-03 00:49:55.151035] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:42.777 [2024-12-03 00:49:55.151068] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:42.777 2024/12/03 00:49:55 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:42.777 [2024-12-03 00:49:55.168238] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:42.777 [2024-12-03 00:49:55.168272] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:42.777 2024/12/03 00:49:55 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:42.777 [2024-12-03 00:49:55.182678] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:42.777 [2024-12-03 00:49:55.182711] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:42.777 2024/12/03 00:49:55 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:42.777 [2024-12-03 00:49:55.198824] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:42.777 [2024-12-03 00:49:55.198857] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:42.777 2024/12/03 00:49:55 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: 
map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:42.777 [2024-12-03 00:49:55.214927] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:42.777 [2024-12-03 00:49:55.214959] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:42.777 2024/12/03 00:49:55 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:42.777 [2024-12-03 00:49:55.232159] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:42.777 [2024-12-03 00:49:55.232192] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:42.777 2024/12/03 00:49:55 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:42.777 [2024-12-03 00:49:55.247575] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:42.777 [2024-12-03 00:49:55.247606] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:42.777 2024/12/03 00:49:55 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:42.777 [2024-12-03 00:49:55.265073] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:42.777 [2024-12-03 00:49:55.265106] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:42.777 2024/12/03 00:49:55 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:42.777 [2024-12-03 00:49:55.280130] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:42.777 [2024-12-03 00:49:55.280164] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:42.777 2024/12/03 00:49:55 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:43.034 [2024-12-03 00:49:55.297469] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:43.035 [2024-12-03 00:49:55.297501] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:43.035 2024/12/03 00:49:55 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:43.035 [2024-12-03 00:49:55.314434] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:43.035 [2024-12-03 00:49:55.314479] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:43.035 2024/12/03 00:49:55 error on JSON-RPC call, method: 
nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:43.035 [2024-12-03 00:49:55.330044] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:43.035 [2024-12-03 00:49:55.330077] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:43.035 2024/12/03 00:49:55 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:43.035 [2024-12-03 00:49:55.347139] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:43.035 [2024-12-03 00:49:55.347172] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:43.035 2024/12/03 00:49:55 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:43.035 [2024-12-03 00:49:55.364457] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:43.035 [2024-12-03 00:49:55.364490] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:43.035 2024/12/03 00:49:55 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:43.035 [2024-12-03 00:49:55.378694] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:43.035 [2024-12-03 00:49:55.378728] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:43.035 2024/12/03 00:49:55 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:43.035 [2024-12-03 00:49:55.394530] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:43.035 [2024-12-03 00:49:55.394574] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:43.035 2024/12/03 00:49:55 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:43.035 [2024-12-03 00:49:55.410652] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:43.035 [2024-12-03 00:49:55.410684] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:43.035 2024/12/03 00:49:55 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:43.035 [2024-12-03 00:49:55.427759] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:43.035 [2024-12-03 00:49:55.427791] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:43.035 2024/12/03 00:49:55 error on 
JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:43.035 [2024-12-03 00:49:55.444303] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:43.035 [2024-12-03 00:49:55.444335] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:43.035 2024/12/03 00:49:55 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:43.035 [2024-12-03 00:49:55.461914] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:43.035 [2024-12-03 00:49:55.461957] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:43.035 2024/12/03 00:49:55 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:43.035 [2024-12-03 00:49:55.476653] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:43.035 [2024-12-03 00:49:55.476686] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:43.035 2024/12/03 00:49:55 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:43.035 [2024-12-03 00:49:55.488562] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:43.035 [2024-12-03 00:49:55.488595] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:43.035 2024/12/03 00:49:55 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:43.035 [2024-12-03 00:49:55.504170] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:43.035 [2024-12-03 00:49:55.504204] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:43.035 2024/12/03 00:49:55 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:43.035 [2024-12-03 00:49:55.521206] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:43.035 [2024-12-03 00:49:55.521238] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:43.035 2024/12/03 00:49:55 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:43.035 [2024-12-03 00:49:55.537606] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:43.035 [2024-12-03 00:49:55.537639] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:43.035 2024/12/03 
00:49:55 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:43.293 [2024-12-03 00:49:55.553987] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:43.293 [2024-12-03 00:49:55.554032] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:43.293 2024/12/03 00:49:55 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:43.293 [2024-12-03 00:49:55.570481] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:43.293 [2024-12-03 00:49:55.570530] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:43.293 2024/12/03 00:49:55 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:43.293 [2024-12-03 00:49:55.587282] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:43.293 [2024-12-03 00:49:55.587316] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:43.293 2024/12/03 00:49:55 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:43.293 [2024-12-03 00:49:55.603626] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:43.293 [2024-12-03 00:49:55.603659] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:43.293 2024/12/03 00:49:55 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:43.293 [2024-12-03 00:49:55.619570] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:43.293 [2024-12-03 00:49:55.619602] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:43.293 2024/12/03 00:49:55 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:43.293 [2024-12-03 00:49:55.637090] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:43.293 [2024-12-03 00:49:55.637123] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:43.293 2024/12/03 00:49:55 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:43.293 [2024-12-03 00:49:55.652342] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:43.293 [2024-12-03 00:49:55.652375] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 
00:15:43.293 2024/12/03 00:49:55 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:43.293 [2024-12-03 00:49:55.669853] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:43.293 [2024-12-03 00:49:55.669886] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:43.293 2024/12/03 00:49:55 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:43.293 [2024-12-03 00:49:55.687067] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:43.293 [2024-12-03 00:49:55.687110] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:43.293 2024/12/03 00:49:55 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:43.293 [2024-12-03 00:49:55.702296] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:43.293 [2024-12-03 00:49:55.702329] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:43.293 2024/12/03 00:49:55 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:43.293 [2024-12-03 00:49:55.719649] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:43.293 [2024-12-03 00:49:55.719681] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:43.293 2024/12/03 00:49:55 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:43.293 [2024-12-03 00:49:55.735787] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:43.293 [2024-12-03 00:49:55.735831] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:43.294 2024/12/03 00:49:55 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:43.294 [2024-12-03 00:49:55.751853] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:43.294 [2024-12-03 00:49:55.751886] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:43.294 2024/12/03 00:49:55 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:43.294 [2024-12-03 00:49:55.768934] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:43.294 [2024-12-03 00:49:55.768967] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable 
to add namespace 00:15:43.294 2024/12/03 00:49:55 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:43.294 [2024-12-03 00:49:55.785120] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:43.294 [2024-12-03 00:49:55.785154] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:43.294 2024/12/03 00:49:55 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:43.294 [2024-12-03 00:49:55.802495] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:43.294 [2024-12-03 00:49:55.802527] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:43.294 2024/12/03 00:49:55 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:43.552 [2024-12-03 00:49:55.817584] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:43.552 [2024-12-03 00:49:55.817627] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:43.552 2024/12/03 00:49:55 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:43.552 [2024-12-03 00:49:55.832721] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:43.552 [2024-12-03 00:49:55.832754] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:43.552 2024/12/03 00:49:55 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:43.552 [2024-12-03 00:49:55.849298] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:43.552 [2024-12-03 00:49:55.849330] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:43.552 2024/12/03 00:49:55 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:43.552 [2024-12-03 00:49:55.866587] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:43.552 [2024-12-03 00:49:55.866641] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:43.552 2024/12/03 00:49:55 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:43.552 [2024-12-03 00:49:55.883173] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:43.552 [2024-12-03 00:49:55.883207] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: 
*ERROR*: Unable to add namespace 00:15:43.552 2024/12/03 00:49:55 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:43.552 [2024-12-03 00:49:55.899978] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:43.552 [2024-12-03 00:49:55.900011] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:43.552 2024/12/03 00:49:55 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:43.552 [2024-12-03 00:49:55.915903] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:43.552 [2024-12-03 00:49:55.915935] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:43.552 2024/12/03 00:49:55 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:43.552 [2024-12-03 00:49:55.932832] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:43.552 [2024-12-03 00:49:55.932864] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:43.553 2024/12/03 00:49:55 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:43.553 [2024-12-03 00:49:55.948802] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:43.553 [2024-12-03 00:49:55.948835] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:43.553 2024/12/03 00:49:55 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:43.553 [2024-12-03 00:49:55.965752] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:43.553 [2024-12-03 00:49:55.965784] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:43.553 2024/12/03 00:49:55 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:43.553 [2024-12-03 00:49:55.982385] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:43.553 [2024-12-03 00:49:55.982442] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:43.553 2024/12/03 00:49:55 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:43.553 [2024-12-03 00:49:55.998782] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:43.553 [2024-12-03 00:49:55.998825] 
nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:43.553 2024/12/03 00:49:56 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:43.553 [2024-12-03 00:49:56.014856] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:43.553 [2024-12-03 00:49:56.014889] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:43.553 2024/12/03 00:49:56 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:43.553 [2024-12-03 00:49:56.032153] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:43.553 [2024-12-03 00:49:56.032187] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:43.553 2024/12/03 00:49:56 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:43.553 [2024-12-03 00:49:56.048659] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:43.553 [2024-12-03 00:49:56.048691] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:43.553 2024/12/03 00:49:56 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:43.553 [2024-12-03 00:49:56.065656] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:43.553 [2024-12-03 00:49:56.065699] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:43.812 2024/12/03 00:49:56 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:43.812 [2024-12-03 00:49:56.082180] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:43.813 [2024-12-03 00:49:56.082246] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:43.813 2024/12/03 00:49:56 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:43.813 [2024-12-03 00:49:56.096834] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:43.813 [2024-12-03 00:49:56.096880] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:43.813 2024/12/03 00:49:56 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:43.813 00:15:43.813 Latency(us) 00:15:43.813 [2024-12-03T00:49:56.328Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 
00:15:43.813 [2024-12-03T00:49:56.328Z] Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192) 00:15:43.813 Nvme1n1 : 5.01 13647.49 106.62 0.00 0.00 9367.03 3291.69 19184.17 00:15:43.813 [2024-12-03T00:49:56.328Z] =================================================================================================================== 00:15:43.813 [2024-12-03T00:49:56.328Z] Total : 13647.49 106.62 0.00 0.00 9367.03 3291.69 19184.17 00:15:43.813 [2024-12-03 00:49:56.105892] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:43.813 [2024-12-03 00:49:56.105935] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:43.813 2024/12/03 00:49:56 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:43.813 [2024-12-03 00:49:56.117893] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:43.813 [2024-12-03 00:49:56.117935] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:43.813 2024/12/03 00:49:56 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:43.813 [2024-12-03 00:49:56.129890] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:43.813 [2024-12-03 00:49:56.129919] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:43.813 2024/12/03 00:49:56 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:43.813 [2024-12-03 00:49:56.141891] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:43.813 [2024-12-03 00:49:56.141928] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:43.813 2024/12/03 00:49:56 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:43.813 [2024-12-03 00:49:56.153892] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:43.813 [2024-12-03 00:49:56.153930] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:43.813 2024/12/03 00:49:56 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:43.813 [2024-12-03 00:49:56.165896] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:43.813 [2024-12-03 00:49:56.165924] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:43.813 2024/12/03 00:49:56 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:43.813 [2024-12-03 
00:49:56.177898] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:43.813 [2024-12-03 00:49:56.177935] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:43.813 2024/12/03 00:49:56 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:43.813 [2024-12-03 00:49:56.189901] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:43.813 [2024-12-03 00:49:56.189941] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:43.813 2024/12/03 00:49:56 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:43.813 [2024-12-03 00:49:56.201906] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:43.813 [2024-12-03 00:49:56.201945] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:43.813 2024/12/03 00:49:56 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:43.813 [2024-12-03 00:49:56.213909] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:43.813 [2024-12-03 00:49:56.213937] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:43.813 2024/12/03 00:49:56 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:43.813 [2024-12-03 00:49:56.225909] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:43.813 [2024-12-03 00:49:56.225937] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:43.813 2024/12/03 00:49:56 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:43.813 [2024-12-03 00:49:56.237912] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:43.813 [2024-12-03 00:49:56.237951] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:43.813 2024/12/03 00:49:56 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:43.813 [2024-12-03 00:49:56.249912] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:43.813 [2024-12-03 00:49:56.249952] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:43.813 2024/12/03 00:49:56 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 
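The wall of identical failures above is the zcopy test doing exactly what it intends: while the I/O job is still running against nqn.2016-06.io.spdk:cnode1, the script keeps issuing nvmf_subsystem_add_ns for NSID 1, which malloc0 already owns, and the target rejects every attempt with JSON-RPC error -32602 ("Requested NSID 1 already in use") without disturbing the traffic. A minimal sketch of the same collision, assuming a freshly started nvmf_tgt reachable over /var/tmp/spdk.sock and the stock scripts/rpc.py (which is what rpc_cmd ultimately drives here); the bdev, subsystem, serial, and address values simply mirror this log:

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
$rpc bdev_malloc_create 64 512 -b malloc0                                   # backing bdev for the namespace
$rpc nvmf_create_transport -t tcp
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1          # succeeds: NSID 1 is now taken
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1          # rejected with -32602, as in the log

Because the add_ns loop runs concurrently with the I/O workload, the Nvme1n1 latency summary from that job lands in the middle of the rejection stream a few records back.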
00:15:43.813 [2024-12-03 00:49:56.261916] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:43.813 [2024-12-03 00:49:56.261944] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:43.813 2024/12/03 00:49:56 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:43.813 [2024-12-03 00:49:56.273919] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:43.813 [2024-12-03 00:49:56.273956] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:43.813 2024/12/03 00:49:56 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:43.813 [2024-12-03 00:49:56.285919] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:43.813 [2024-12-03 00:49:56.285956] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:43.813 2024/12/03 00:49:56 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:43.813 [2024-12-03 00:49:56.297923] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:43.813 [2024-12-03 00:49:56.297950] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:43.813 2024/12/03 00:49:56 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:43.813 /home/vagrant/spdk_repo/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (86438) - No such process 00:15:43.813 00:49:56 -- target/zcopy.sh@49 -- # wait 86438 00:15:43.813 00:49:56 -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:43.813 00:49:56 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:43.813 00:49:56 -- common/autotest_common.sh@10 -- # set +x 00:15:43.813 00:49:56 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:43.813 00:49:56 -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:15:43.813 00:49:56 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:43.813 00:49:56 -- common/autotest_common.sh@10 -- # set +x 00:15:43.813 delay0 00:15:43.813 00:49:56 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:43.813 00:49:56 -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1 00:15:43.813 00:49:56 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:43.813 00:49:56 -- common/autotest_common.sh@10 -- # set +x 00:15:44.072 00:49:56 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:44.073 00:49:56 -- target/zcopy.sh@56 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1' 00:15:44.073 [2024-12-03 00:49:56.501192] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: 
Skipping unsupported current discovery service or discovery service referral 00:15:50.637 Initializing NVMe Controllers 00:15:50.637 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:15:50.637 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:15:50.637 Initialization complete. Launching workers. 00:15:50.637 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 320, failed: 93 00:15:50.637 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 380, failed to submit 33 00:15:50.637 success 195, unsuccess 185, failed 0 00:15:50.637 00:50:02 -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT 00:15:50.637 00:50:02 -- target/zcopy.sh@60 -- # nvmftestfini 00:15:50.637 00:50:02 -- nvmf/common.sh@476 -- # nvmfcleanup 00:15:50.637 00:50:02 -- nvmf/common.sh@116 -- # sync 00:15:50.637 00:50:02 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:15:50.637 00:50:02 -- nvmf/common.sh@119 -- # set +e 00:15:50.637 00:50:02 -- nvmf/common.sh@120 -- # for i in {1..20} 00:15:50.637 00:50:02 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:15:50.637 rmmod nvme_tcp 00:15:50.637 rmmod nvme_fabrics 00:15:50.637 rmmod nvme_keyring 00:15:50.637 00:50:02 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:15:50.637 00:50:02 -- nvmf/common.sh@123 -- # set -e 00:15:50.637 00:50:02 -- nvmf/common.sh@124 -- # return 0 00:15:50.637 00:50:02 -- nvmf/common.sh@477 -- # '[' -n 86270 ']' 00:15:50.637 00:50:02 -- nvmf/common.sh@478 -- # killprocess 86270 00:15:50.637 00:50:02 -- common/autotest_common.sh@936 -- # '[' -z 86270 ']' 00:15:50.637 00:50:02 -- common/autotest_common.sh@940 -- # kill -0 86270 00:15:50.637 00:50:02 -- common/autotest_common.sh@941 -- # uname 00:15:50.637 00:50:02 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:15:50.637 00:50:02 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 86270 00:15:50.637 killing process with pid 86270 00:15:50.637 00:50:02 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:15:50.637 00:50:02 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:15:50.637 00:50:02 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 86270' 00:15:50.637 00:50:02 -- common/autotest_common.sh@955 -- # kill 86270 00:15:50.637 00:50:02 -- common/autotest_common.sh@960 -- # wait 86270 00:15:50.637 00:50:02 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:15:50.637 00:50:02 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:15:50.637 00:50:02 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:15:50.637 00:50:02 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:15:50.637 00:50:02 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:15:50.637 00:50:02 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:50.637 00:50:02 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:50.637 00:50:02 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:50.637 00:50:02 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:15:50.637 00:15:50.637 real 0m24.739s 00:15:50.637 user 0m38.185s 00:15:50.637 sys 0m7.628s 00:15:50.637 ************************************ 00:15:50.637 END TEST nvmf_zcopy 00:15:50.637 ************************************ 00:15:50.637 00:50:02 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:15:50.637 00:50:02 -- common/autotest_common.sh@10 -- # set +x 00:15:50.637 00:50:03 -- nvmf/nvmf.sh@53 -- # run_test 
nvmf_nmic /home/vagrant/spdk_repo/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:15:50.637 00:50:03 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:15:50.637 00:50:03 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:15:50.637 00:50:03 -- common/autotest_common.sh@10 -- # set +x 00:15:50.637 ************************************ 00:15:50.637 START TEST nvmf_nmic 00:15:50.637 ************************************ 00:15:50.637 00:50:03 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:15:50.637 * Looking for test storage... 00:15:50.637 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:15:50.637 00:50:03 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:15:50.637 00:50:03 -- common/autotest_common.sh@1690 -- # lcov --version 00:15:50.637 00:50:03 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:15:50.897 00:50:03 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:15:50.897 00:50:03 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:15:50.897 00:50:03 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:15:50.897 00:50:03 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:15:50.897 00:50:03 -- scripts/common.sh@335 -- # IFS=.-: 00:15:50.897 00:50:03 -- scripts/common.sh@335 -- # read -ra ver1 00:15:50.897 00:50:03 -- scripts/common.sh@336 -- # IFS=.-: 00:15:50.897 00:50:03 -- scripts/common.sh@336 -- # read -ra ver2 00:15:50.897 00:50:03 -- scripts/common.sh@337 -- # local 'op=<' 00:15:50.897 00:50:03 -- scripts/common.sh@339 -- # ver1_l=2 00:15:50.897 00:50:03 -- scripts/common.sh@340 -- # ver2_l=1 00:15:50.897 00:50:03 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:15:50.897 00:50:03 -- scripts/common.sh@343 -- # case "$op" in 00:15:50.897 00:50:03 -- scripts/common.sh@344 -- # : 1 00:15:50.897 00:50:03 -- scripts/common.sh@363 -- # (( v = 0 )) 00:15:50.897 00:50:03 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:15:50.897 00:50:03 -- scripts/common.sh@364 -- # decimal 1 00:15:50.897 00:50:03 -- scripts/common.sh@352 -- # local d=1 00:15:50.897 00:50:03 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:50.897 00:50:03 -- scripts/common.sh@354 -- # echo 1 00:15:50.897 00:50:03 -- scripts/common.sh@364 -- # ver1[v]=1 00:15:50.897 00:50:03 -- scripts/common.sh@365 -- # decimal 2 00:15:50.897 00:50:03 -- scripts/common.sh@352 -- # local d=2 00:15:50.897 00:50:03 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:50.897 00:50:03 -- scripts/common.sh@354 -- # echo 2 00:15:50.897 00:50:03 -- scripts/common.sh@365 -- # ver2[v]=2 00:15:50.897 00:50:03 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:15:50.897 00:50:03 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:15:50.897 00:50:03 -- scripts/common.sh@367 -- # return 0 00:15:50.897 00:50:03 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:50.897 00:50:03 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:15:50.897 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:50.897 --rc genhtml_branch_coverage=1 00:15:50.897 --rc genhtml_function_coverage=1 00:15:50.897 --rc genhtml_legend=1 00:15:50.897 --rc geninfo_all_blocks=1 00:15:50.897 --rc geninfo_unexecuted_blocks=1 00:15:50.897 00:15:50.897 ' 00:15:50.897 00:50:03 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:15:50.897 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:50.897 --rc genhtml_branch_coverage=1 00:15:50.897 --rc genhtml_function_coverage=1 00:15:50.897 --rc genhtml_legend=1 00:15:50.897 --rc geninfo_all_blocks=1 00:15:50.897 --rc geninfo_unexecuted_blocks=1 00:15:50.897 00:15:50.897 ' 00:15:50.897 00:50:03 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:15:50.897 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:50.897 --rc genhtml_branch_coverage=1 00:15:50.897 --rc genhtml_function_coverage=1 00:15:50.897 --rc genhtml_legend=1 00:15:50.897 --rc geninfo_all_blocks=1 00:15:50.897 --rc geninfo_unexecuted_blocks=1 00:15:50.897 00:15:50.897 ' 00:15:50.897 00:50:03 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:15:50.897 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:50.897 --rc genhtml_branch_coverage=1 00:15:50.897 --rc genhtml_function_coverage=1 00:15:50.897 --rc genhtml_legend=1 00:15:50.897 --rc geninfo_all_blocks=1 00:15:50.897 --rc geninfo_unexecuted_blocks=1 00:15:50.897 00:15:50.897 ' 00:15:50.897 00:50:03 -- target/nmic.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:15:50.897 00:50:03 -- nvmf/common.sh@7 -- # uname -s 00:15:50.897 00:50:03 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:50.897 00:50:03 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:50.897 00:50:03 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:50.897 00:50:03 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:50.897 00:50:03 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:50.897 00:50:03 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:50.897 00:50:03 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:50.897 00:50:03 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:50.897 00:50:03 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:50.897 00:50:03 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:50.897 00:50:03 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:15939434-fa82-47b6-ae7c-b62ba203cee8 00:15:50.897 
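The scripts/common.sh calls traced above are nmic.sh deciding which lcov options it may use: lt 1.15 2 asks whether the installed lcov (1.15) predates version 2, cmp_versions answers it by splitting both strings on '.', '-' and ':' and comparing the fields numerically with missing fields treated as zero, and the matching LCOV_OPTS are then exported. A rough stand-alone sketch of that comparison, under the assumption that plain integer fields are enough (the real helper also validates each field with a regex before comparing):

# compare two dotted versions the way cmp_versions does: field by field, missing fields count as 0
ver_lt() {
    local IFS=.-:
    local -a v1=($1) v2=($2)
    local i n=$(( ${#v1[@]} > ${#v2[@]} ? ${#v1[@]} : ${#v2[@]} ))
    for ((i = 0; i < n; i++)); do
        (( ${v1[i]:-0} < ${v2[i]:-0} )) && return 0   # strictly older
        (( ${v1[i]:-0} > ${v2[i]:-0} )) && return 1   # newer
    done
    return 1                                          # equal does not count as "less than"
}
ver_lt 1.15 2 && echo "lcov is older than 2"          # true for the toolchain in this run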
00:50:03 -- nvmf/common.sh@18 -- # NVME_HOSTID=15939434-fa82-47b6-ae7c-b62ba203cee8 00:15:50.897 00:50:03 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:50.897 00:50:03 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:50.897 00:50:03 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:15:50.897 00:50:03 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:15:50.897 00:50:03 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:50.897 00:50:03 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:50.897 00:50:03 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:50.897 00:50:03 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:50.897 00:50:03 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:50.897 00:50:03 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:50.897 00:50:03 -- paths/export.sh@5 -- # export PATH 00:15:50.897 00:50:03 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:50.897 00:50:03 -- nvmf/common.sh@46 -- # : 0 00:15:50.897 00:50:03 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:15:50.897 00:50:03 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:15:50.897 00:50:03 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:15:50.897 00:50:03 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:50.897 00:50:03 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:50.897 00:50:03 -- nvmf/common.sh@32 -- # '[' -n '' ']' 
00:15:50.897 00:50:03 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:15:50.897 00:50:03 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:15:50.897 00:50:03 -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:15:50.897 00:50:03 -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:15:50.897 00:50:03 -- target/nmic.sh@14 -- # nvmftestinit 00:15:50.897 00:50:03 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:15:50.897 00:50:03 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:50.897 00:50:03 -- nvmf/common.sh@436 -- # prepare_net_devs 00:15:50.897 00:50:03 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:15:50.897 00:50:03 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:15:50.897 00:50:03 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:50.897 00:50:03 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:50.897 00:50:03 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:50.898 00:50:03 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:15:50.898 00:50:03 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:15:50.898 00:50:03 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:15:50.898 00:50:03 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:15:50.898 00:50:03 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:15:50.898 00:50:03 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:15:50.898 00:50:03 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:50.898 00:50:03 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:50.898 00:50:03 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:15:50.898 00:50:03 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:15:50.898 00:50:03 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:15:50.898 00:50:03 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:15:50.898 00:50:03 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:15:50.898 00:50:03 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:50.898 00:50:03 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:15:50.898 00:50:03 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:15:50.898 00:50:03 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:15:50.898 00:50:03 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:15:50.898 00:50:03 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:15:50.898 00:50:03 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:15:50.898 Cannot find device "nvmf_tgt_br" 00:15:50.898 00:50:03 -- nvmf/common.sh@154 -- # true 00:15:50.898 00:50:03 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:15:50.898 Cannot find device "nvmf_tgt_br2" 00:15:50.898 00:50:03 -- nvmf/common.sh@155 -- # true 00:15:50.898 00:50:03 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:15:50.898 00:50:03 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:15:50.898 Cannot find device "nvmf_tgt_br" 00:15:50.898 00:50:03 -- nvmf/common.sh@157 -- # true 00:15:50.898 00:50:03 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:15:50.898 Cannot find device "nvmf_tgt_br2" 00:15:50.898 00:50:03 -- nvmf/common.sh@158 -- # true 00:15:50.898 00:50:03 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:15:50.898 00:50:03 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:15:50.898 00:50:03 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:15:50.898 Cannot open network namespace 
"nvmf_tgt_ns_spdk": No such file or directory 00:15:50.898 00:50:03 -- nvmf/common.sh@161 -- # true 00:15:50.898 00:50:03 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:15:50.898 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:50.898 00:50:03 -- nvmf/common.sh@162 -- # true 00:15:50.898 00:50:03 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:15:50.898 00:50:03 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:15:51.157 00:50:03 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:15:51.157 00:50:03 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:15:51.157 00:50:03 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:15:51.157 00:50:03 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:15:51.157 00:50:03 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:15:51.157 00:50:03 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:15:51.157 00:50:03 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:15:51.157 00:50:03 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:15:51.157 00:50:03 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:15:51.157 00:50:03 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:15:51.157 00:50:03 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:15:51.157 00:50:03 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:15:51.157 00:50:03 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:15:51.157 00:50:03 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:15:51.157 00:50:03 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:15:51.157 00:50:03 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:15:51.157 00:50:03 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:15:51.157 00:50:03 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:15:51.157 00:50:03 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:15:51.157 00:50:03 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:15:51.157 00:50:03 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:15:51.157 00:50:03 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:15:51.157 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:51.157 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.063 ms 00:15:51.157 00:15:51.157 --- 10.0.0.2 ping statistics --- 00:15:51.157 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:51.157 rtt min/avg/max/mdev = 0.063/0.063/0.063/0.000 ms 00:15:51.157 00:50:03 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:15:51.157 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:15:51.157 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.045 ms 00:15:51.157 00:15:51.157 --- 10.0.0.3 ping statistics --- 00:15:51.157 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:51.157 rtt min/avg/max/mdev = 0.045/0.045/0.045/0.000 ms 00:15:51.157 00:50:03 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:15:51.157 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:15:51.157 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.024 ms 00:15:51.157 00:15:51.157 --- 10.0.0.1 ping statistics --- 00:15:51.157 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:51.157 rtt min/avg/max/mdev = 0.024/0.024/0.024/0.000 ms 00:15:51.157 00:50:03 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:51.157 00:50:03 -- nvmf/common.sh@421 -- # return 0 00:15:51.157 00:50:03 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:15:51.157 00:50:03 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:51.157 00:50:03 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:15:51.157 00:50:03 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:15:51.157 00:50:03 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:51.157 00:50:03 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:15:51.157 00:50:03 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:15:51.157 00:50:03 -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:15:51.157 00:50:03 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:15:51.157 00:50:03 -- common/autotest_common.sh@722 -- # xtrace_disable 00:15:51.157 00:50:03 -- common/autotest_common.sh@10 -- # set +x 00:15:51.157 00:50:03 -- nvmf/common.sh@469 -- # nvmfpid=86767 00:15:51.157 00:50:03 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:15:51.157 00:50:03 -- nvmf/common.sh@470 -- # waitforlisten 86767 00:15:51.157 00:50:03 -- common/autotest_common.sh@829 -- # '[' -z 86767 ']' 00:15:51.157 00:50:03 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:51.157 00:50:03 -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:51.157 00:50:03 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:51.157 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:51.157 00:50:03 -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:51.157 00:50:03 -- common/autotest_common.sh@10 -- # set +x 00:15:51.416 [2024-12-03 00:50:03.681308] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:15:51.416 [2024-12-03 00:50:03.681401] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:51.416 [2024-12-03 00:50:03.826057] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:15:51.416 [2024-12-03 00:50:03.901845] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:15:51.416 [2024-12-03 00:50:03.902334] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:51.416 [2024-12-03 00:50:03.902407] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:51.416 [2024-12-03 00:50:03.902655] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
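The nvmf_veth_init trace above builds the virtual test topology piece by piece: a target namespace, veth pairs whose far ends live inside it, a bridge joining the host-side ends, and iptables rules admitting NVMe/TCP traffic on port 4420. Collected into one place, using the same interface names and addresses that appear in the trace, the setup is roughly:

    # Sketch of the veth/namespace topology the trace creates (names as logged).
    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br     # initiator side
    ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br      # first target path
    ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2     # second target path
    ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
    ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if                      # initiator address
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
    for l in nvmf_init_if nvmf_init_br nvmf_tgt_br nvmf_tgt_br2; do ip link set "$l" up; done
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if  up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up
    ip link add nvmf_br type bridge && ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br  master nvmf_br
    ip link set nvmf_tgt_br2 master nvmf_br
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
    ping -c 1 10.0.0.2    # initiator -> first target address

The three pings in the trace (10.0.0.2 and 10.0.0.3 from the host, 10.0.0.1 from inside the namespace) confirm both target addresses are reachable before nvmf_tgt is started inside nvmf_tgt_ns_spdk.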
00:15:51.416 [2024-12-03 00:50:03.902828] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:15:51.416 [2024-12-03 00:50:03.903010] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:15:51.416 [2024-12-03 00:50:03.903081] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:51.416 [2024-12-03 00:50:03.903081] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:15:52.353 00:50:04 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:52.353 00:50:04 -- common/autotest_common.sh@862 -- # return 0 00:15:52.353 00:50:04 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:15:52.353 00:50:04 -- common/autotest_common.sh@728 -- # xtrace_disable 00:15:52.353 00:50:04 -- common/autotest_common.sh@10 -- # set +x 00:15:52.353 00:50:04 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:52.353 00:50:04 -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:15:52.353 00:50:04 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:52.353 00:50:04 -- common/autotest_common.sh@10 -- # set +x 00:15:52.353 [2024-12-03 00:50:04.761683] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:52.353 00:50:04 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:52.353 00:50:04 -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:15:52.353 00:50:04 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:52.353 00:50:04 -- common/autotest_common.sh@10 -- # set +x 00:15:52.353 Malloc0 00:15:52.353 00:50:04 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:52.353 00:50:04 -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:15:52.353 00:50:04 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:52.353 00:50:04 -- common/autotest_common.sh@10 -- # set +x 00:15:52.353 00:50:04 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:52.353 00:50:04 -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:15:52.353 00:50:04 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:52.353 00:50:04 -- common/autotest_common.sh@10 -- # set +x 00:15:52.353 00:50:04 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:52.353 00:50:04 -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:52.353 00:50:04 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:52.353 00:50:04 -- common/autotest_common.sh@10 -- # set +x 00:15:52.353 [2024-12-03 00:50:04.828921] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:52.353 test case1: single bdev can't be used in multiple subsystems 00:15:52.353 00:50:04 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:52.353 00:50:04 -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:15:52.353 00:50:04 -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:15:52.353 00:50:04 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:52.353 00:50:04 -- common/autotest_common.sh@10 -- # set +x 00:15:52.353 00:50:04 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:52.353 00:50:04 -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:15:52.353 00:50:04 -- common/autotest_common.sh@561 -- # xtrace_disable 
00:15:52.353 00:50:04 -- common/autotest_common.sh@10 -- # set +x 00:15:52.353 00:50:04 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:52.353 00:50:04 -- target/nmic.sh@28 -- # nmic_status=0 00:15:52.353 00:50:04 -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:15:52.353 00:50:04 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:52.353 00:50:04 -- common/autotest_common.sh@10 -- # set +x 00:15:52.353 [2024-12-03 00:50:04.852711] bdev.c:7940:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:15:52.353 [2024-12-03 00:50:04.852758] subsystem.c:1819:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:15:52.353 [2024-12-03 00:50:04.852777] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:52.353 2024/12/03 00:50:04 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:Malloc0] nqn:nqn.2016-06.io.spdk:cnode2], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:52.353 request: 00:15:52.353 { 00:15:52.353 "method": "nvmf_subsystem_add_ns", 00:15:52.353 "params": { 00:15:52.353 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:15:52.353 "namespace": { 00:15:52.353 "bdev_name": "Malloc0" 00:15:52.353 } 00:15:52.353 } 00:15:52.353 } 00:15:52.353 Got JSON-RPC error response 00:15:52.353 GoRPCClient: error on JSON-RPC call 00:15:52.353 Adding namespace failed - expected result. 00:15:52.353 test case2: host connect to nvmf target in multiple paths 00:15:52.353 00:50:04 -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:15:52.353 00:50:04 -- target/nmic.sh@29 -- # nmic_status=1 00:15:52.353 00:50:04 -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:15:52.353 00:50:04 -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 
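Test case 1 above exercises the bdev claim model: once Malloc0 is attached to cnode1, adding it to a second subsystem is rejected with JSON-RPC code -32602 because the bdev is already claimed (type exclusive_write) by the NVMe-oF target. A hedged reproduction of the same sequence with scripts/rpc.py, which the trace's rpc_cmd wrapper drives under the hood; the NQNs, bdev name, transport options, and malloc geometry (MALLOC_BDEV_SIZE=64, MALLOC_BLOCK_SIZE=512) are taken from the trace, while the rpc.py path is a placeholder:

    # Sketch: provision cnode1, then show that re-using its bdev in cnode2 fails.
    rpc=/path/to/spdk/scripts/rpc.py    # hypothetical checkout path
    $rpc nvmf_create_transport -t tcp -o -u 8192
    $rpc bdev_malloc_create 64 512 -b Malloc0
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    # Second subsystem pointing at the same bdev: expected to fail with Invalid parameters.
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420
    if ! $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0; then
        echo "namespace add rejected as expected"
    fi

The test treats the failure as the pass condition, which is why the trace prints "Adding namespace failed - expected result." before moving on to the multipath connect case.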
00:15:52.353 00:50:04 -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:15:52.353 00:50:04 -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:15:52.353 00:50:04 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:52.353 00:50:04 -- common/autotest_common.sh@10 -- # set +x 00:15:52.353 [2024-12-03 00:50:04.864840] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:15:52.612 00:50:04 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:52.612 00:50:04 -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:15939434-fa82-47b6-ae7c-b62ba203cee8 --hostid=15939434-fa82-47b6-ae7c-b62ba203cee8 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:15:52.612 00:50:05 -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:15939434-fa82-47b6-ae7c-b62ba203cee8 --hostid=15939434-fa82-47b6-ae7c-b62ba203cee8 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421 00:15:52.870 00:50:05 -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:15:52.870 00:50:05 -- common/autotest_common.sh@1187 -- # local i=0 00:15:52.870 00:50:05 -- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 00:15:52.870 00:50:05 -- common/autotest_common.sh@1189 -- # [[ -n '' ]] 00:15:52.870 00:50:05 -- common/autotest_common.sh@1194 -- # sleep 2 00:15:54.773 00:50:07 -- common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 00:15:54.773 00:50:07 -- common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL 00:15:54.773 00:50:07 -- common/autotest_common.sh@1196 -- # grep -c SPDKISFASTANDAWESOME 00:15:54.773 00:50:07 -- common/autotest_common.sh@1196 -- # nvme_devices=1 00:15:54.773 00:50:07 -- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:15:54.773 00:50:07 -- common/autotest_common.sh@1197 -- # return 0 00:15:54.773 00:50:07 -- target/nmic.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:15:54.773 [global] 00:15:54.773 thread=1 00:15:54.773 invalidate=1 00:15:54.773 rw=write 00:15:54.773 time_based=1 00:15:54.773 runtime=1 00:15:54.773 ioengine=libaio 00:15:54.773 direct=1 00:15:54.773 bs=4096 00:15:54.773 iodepth=1 00:15:54.773 norandommap=0 00:15:54.773 numjobs=1 00:15:54.773 00:15:54.773 verify_dump=1 00:15:54.773 verify_backlog=512 00:15:54.773 verify_state_save=0 00:15:54.773 do_verify=1 00:15:54.773 verify=crc32c-intel 00:15:54.773 [job0] 00:15:54.773 filename=/dev/nvme0n1 00:15:54.773 Could not set queue depth (nvme0n1) 00:15:55.031 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:15:55.031 fio-3.35 00:15:55.031 Starting 1 thread 00:15:56.444 00:15:56.444 job0: (groupid=0, jobs=1): err= 0: pid=86877: Tue Dec 3 00:50:08 2024 00:15:56.444 read: IOPS=3143, BW=12.3MiB/s (12.9MB/s)(12.3MiB/1001msec) 00:15:56.444 slat (nsec): min=11264, max=58777, avg=14490.57, stdev=4884.22 00:15:56.444 clat (usec): min=114, max=244, avg=150.10, stdev=17.57 00:15:56.444 lat (usec): min=126, max=260, avg=164.59, stdev=18.71 00:15:56.444 clat percentiles (usec): 00:15:56.444 | 1.00th=[ 122], 5.00th=[ 128], 10.00th=[ 131], 20.00th=[ 135], 00:15:56.444 | 30.00th=[ 139], 40.00th=[ 143], 50.00th=[ 147], 60.00th=[ 153], 00:15:56.444 | 70.00th=[ 157], 80.00th=[ 165], 90.00th=[ 174], 95.00th=[ 184], 00:15:56.444 | 99.00th=[ 200], 99.50th=[ 212], 99.90th=[ 
225], 99.95th=[ 243], 00:15:56.444 | 99.99th=[ 245] 00:15:56.444 write: IOPS=3580, BW=14.0MiB/s (14.7MB/s)(14.0MiB/1001msec); 0 zone resets 00:15:56.444 slat (nsec): min=16121, max=90690, avg=21091.92, stdev=6954.93 00:15:56.444 clat (usec): min=81, max=3366, avg=110.50, stdev=90.60 00:15:56.444 lat (usec): min=99, max=3394, avg=131.60, stdev=91.36 00:15:56.444 clat percentiles (usec): 00:15:56.444 | 1.00th=[ 86], 5.00th=[ 90], 10.00th=[ 92], 20.00th=[ 95], 00:15:56.444 | 30.00th=[ 98], 40.00th=[ 100], 50.00th=[ 103], 60.00th=[ 108], 00:15:56.444 | 70.00th=[ 112], 80.00th=[ 119], 90.00th=[ 129], 95.00th=[ 139], 00:15:56.444 | 99.00th=[ 159], 99.50th=[ 176], 99.90th=[ 1270], 99.95th=[ 3326], 00:15:56.444 | 99.99th=[ 3359] 00:15:56.444 bw ( KiB/s): min=13648, max=13648, per=95.30%, avg=13648.00, stdev= 0.00, samples=1 00:15:56.444 iops : min= 3412, max= 3412, avg=3412.00, stdev= 0.00, samples=1 00:15:56.444 lat (usec) : 100=21.23%, 250=78.65%, 750=0.01%, 1000=0.01% 00:15:56.444 lat (msec) : 2=0.06%, 4=0.03% 00:15:56.444 cpu : usr=2.30%, sys=8.90%, ctx=6732, majf=0, minf=5 00:15:56.444 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:15:56.444 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:56.444 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:56.444 issued rwts: total=3147,3584,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:56.444 latency : target=0, window=0, percentile=100.00%, depth=1 00:15:56.444 00:15:56.444 Run status group 0 (all jobs): 00:15:56.444 READ: bw=12.3MiB/s (12.9MB/s), 12.3MiB/s-12.3MiB/s (12.9MB/s-12.9MB/s), io=12.3MiB (12.9MB), run=1001-1001msec 00:15:56.444 WRITE: bw=14.0MiB/s (14.7MB/s), 14.0MiB/s-14.0MiB/s (14.7MB/s-14.7MB/s), io=14.0MiB (14.7MB), run=1001-1001msec 00:15:56.444 00:15:56.444 Disk stats (read/write): 00:15:56.444 nvme0n1: ios=2947/3072, merge=0/0, ticks=473/358, in_queue=831, util=90.28% 00:15:56.444 00:50:08 -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:15:56.444 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:15:56.444 00:50:08 -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:15:56.444 00:50:08 -- common/autotest_common.sh@1208 -- # local i=0 00:15:56.444 00:50:08 -- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:15:56.444 00:50:08 -- common/autotest_common.sh@1209 -- # grep -q -w SPDKISFASTANDAWESOME 00:15:56.444 00:50:08 -- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:15:56.444 00:50:08 -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:15:56.444 00:50:08 -- common/autotest_common.sh@1220 -- # return 0 00:15:56.444 00:50:08 -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:15:56.444 00:50:08 -- target/nmic.sh@53 -- # nvmftestfini 00:15:56.444 00:50:08 -- nvmf/common.sh@476 -- # nvmfcleanup 00:15:56.444 00:50:08 -- nvmf/common.sh@116 -- # sync 00:15:56.444 00:50:08 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:15:56.444 00:50:08 -- nvmf/common.sh@119 -- # set +e 00:15:56.444 00:50:08 -- nvmf/common.sh@120 -- # for i in {1..20} 00:15:56.444 00:50:08 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:15:56.444 rmmod nvme_tcp 00:15:56.444 rmmod nvme_fabrics 00:15:56.444 rmmod nvme_keyring 00:15:56.444 00:50:08 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:15:56.444 00:50:08 -- nvmf/common.sh@123 -- # set -e 00:15:56.444 00:50:08 -- nvmf/common.sh@124 -- # return 0 00:15:56.444 00:50:08 -- nvmf/common.sh@477 -- # '[' -n 86767 
']' 00:15:56.444 00:50:08 -- nvmf/common.sh@478 -- # killprocess 86767 00:15:56.444 00:50:08 -- common/autotest_common.sh@936 -- # '[' -z 86767 ']' 00:15:56.444 00:50:08 -- common/autotest_common.sh@940 -- # kill -0 86767 00:15:56.444 00:50:08 -- common/autotest_common.sh@941 -- # uname 00:15:56.444 00:50:08 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:15:56.444 00:50:08 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 86767 00:15:56.444 killing process with pid 86767 00:15:56.444 00:50:08 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:15:56.444 00:50:08 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:15:56.444 00:50:08 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 86767' 00:15:56.444 00:50:08 -- common/autotest_common.sh@955 -- # kill 86767 00:15:56.444 00:50:08 -- common/autotest_common.sh@960 -- # wait 86767 00:15:56.715 00:50:09 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:15:56.715 00:50:09 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:15:56.715 00:50:09 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:15:56.715 00:50:09 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:15:56.715 00:50:09 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:15:56.715 00:50:09 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:56.715 00:50:09 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:56.715 00:50:09 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:56.715 00:50:09 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:15:56.715 00:15:56.715 real 0m6.061s 00:15:56.715 user 0m20.350s 00:15:56.715 sys 0m1.304s 00:15:56.715 00:50:09 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:15:56.715 ************************************ 00:15:56.715 END TEST nvmf_nmic 00:15:56.715 ************************************ 00:15:56.715 00:50:09 -- common/autotest_common.sh@10 -- # set +x 00:15:56.715 00:50:09 -- nvmf/nvmf.sh@54 -- # run_test nvmf_fio_target /home/vagrant/spdk_repo/spdk/test/nvmf/target/fio.sh --transport=tcp 00:15:56.715 00:50:09 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:15:56.715 00:50:09 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:15:56.715 00:50:09 -- common/autotest_common.sh@10 -- # set +x 00:15:56.715 ************************************ 00:15:56.715 START TEST nvmf_fio_target 00:15:56.715 ************************************ 00:15:56.715 00:50:09 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/fio.sh --transport=tcp 00:15:56.716 * Looking for test storage... 
00:15:56.716 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:15:56.716 00:50:09 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:15:56.975 00:50:09 -- common/autotest_common.sh@1690 -- # lcov --version 00:15:56.975 00:50:09 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:15:56.975 00:50:09 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:15:56.975 00:50:09 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:15:56.975 00:50:09 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:15:56.975 00:50:09 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:15:56.975 00:50:09 -- scripts/common.sh@335 -- # IFS=.-: 00:15:56.975 00:50:09 -- scripts/common.sh@335 -- # read -ra ver1 00:15:56.975 00:50:09 -- scripts/common.sh@336 -- # IFS=.-: 00:15:56.975 00:50:09 -- scripts/common.sh@336 -- # read -ra ver2 00:15:56.975 00:50:09 -- scripts/common.sh@337 -- # local 'op=<' 00:15:56.975 00:50:09 -- scripts/common.sh@339 -- # ver1_l=2 00:15:56.975 00:50:09 -- scripts/common.sh@340 -- # ver2_l=1 00:15:56.975 00:50:09 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:15:56.975 00:50:09 -- scripts/common.sh@343 -- # case "$op" in 00:15:56.975 00:50:09 -- scripts/common.sh@344 -- # : 1 00:15:56.975 00:50:09 -- scripts/common.sh@363 -- # (( v = 0 )) 00:15:56.975 00:50:09 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:15:56.975 00:50:09 -- scripts/common.sh@364 -- # decimal 1 00:15:56.975 00:50:09 -- scripts/common.sh@352 -- # local d=1 00:15:56.975 00:50:09 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:56.975 00:50:09 -- scripts/common.sh@354 -- # echo 1 00:15:56.975 00:50:09 -- scripts/common.sh@364 -- # ver1[v]=1 00:15:56.975 00:50:09 -- scripts/common.sh@365 -- # decimal 2 00:15:56.975 00:50:09 -- scripts/common.sh@352 -- # local d=2 00:15:56.975 00:50:09 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:56.975 00:50:09 -- scripts/common.sh@354 -- # echo 2 00:15:56.975 00:50:09 -- scripts/common.sh@365 -- # ver2[v]=2 00:15:56.975 00:50:09 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:15:56.975 00:50:09 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:15:56.975 00:50:09 -- scripts/common.sh@367 -- # return 0 00:15:56.975 00:50:09 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:56.975 00:50:09 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:15:56.975 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:56.975 --rc genhtml_branch_coverage=1 00:15:56.975 --rc genhtml_function_coverage=1 00:15:56.975 --rc genhtml_legend=1 00:15:56.975 --rc geninfo_all_blocks=1 00:15:56.975 --rc geninfo_unexecuted_blocks=1 00:15:56.975 00:15:56.975 ' 00:15:56.975 00:50:09 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:15:56.975 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:56.975 --rc genhtml_branch_coverage=1 00:15:56.975 --rc genhtml_function_coverage=1 00:15:56.975 --rc genhtml_legend=1 00:15:56.975 --rc geninfo_all_blocks=1 00:15:56.975 --rc geninfo_unexecuted_blocks=1 00:15:56.975 00:15:56.975 ' 00:15:56.975 00:50:09 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:15:56.975 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:56.975 --rc genhtml_branch_coverage=1 00:15:56.975 --rc genhtml_function_coverage=1 00:15:56.975 --rc genhtml_legend=1 00:15:56.975 --rc geninfo_all_blocks=1 00:15:56.975 --rc geninfo_unexecuted_blocks=1 00:15:56.975 00:15:56.975 ' 00:15:56.975 
00:50:09 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:15:56.975 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:56.975 --rc genhtml_branch_coverage=1 00:15:56.975 --rc genhtml_function_coverage=1 00:15:56.975 --rc genhtml_legend=1 00:15:56.975 --rc geninfo_all_blocks=1 00:15:56.975 --rc geninfo_unexecuted_blocks=1 00:15:56.975 00:15:56.975 ' 00:15:56.975 00:50:09 -- target/fio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:15:56.975 00:50:09 -- nvmf/common.sh@7 -- # uname -s 00:15:56.975 00:50:09 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:56.975 00:50:09 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:56.975 00:50:09 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:56.975 00:50:09 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:56.975 00:50:09 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:56.975 00:50:09 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:56.975 00:50:09 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:56.975 00:50:09 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:56.975 00:50:09 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:56.975 00:50:09 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:56.975 00:50:09 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:15939434-fa82-47b6-ae7c-b62ba203cee8 00:15:56.975 00:50:09 -- nvmf/common.sh@18 -- # NVME_HOSTID=15939434-fa82-47b6-ae7c-b62ba203cee8 00:15:56.975 00:50:09 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:56.975 00:50:09 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:56.975 00:50:09 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:15:56.975 00:50:09 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:15:56.975 00:50:09 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:56.975 00:50:09 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:56.975 00:50:09 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:56.975 00:50:09 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:56.975 00:50:09 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:56.975 00:50:09 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:56.975 00:50:09 -- paths/export.sh@5 -- # export PATH 00:15:56.975 00:50:09 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:56.975 00:50:09 -- nvmf/common.sh@46 -- # : 0 00:15:56.975 00:50:09 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:15:56.975 00:50:09 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:15:56.975 00:50:09 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:15:56.975 00:50:09 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:56.975 00:50:09 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:56.975 00:50:09 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:15:56.975 00:50:09 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:15:56.975 00:50:09 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:15:56.975 00:50:09 -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:15:56.975 00:50:09 -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:15:56.975 00:50:09 -- target/fio.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:15:56.975 00:50:09 -- target/fio.sh@16 -- # nvmftestinit 00:15:56.976 00:50:09 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:15:56.976 00:50:09 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:56.976 00:50:09 -- nvmf/common.sh@436 -- # prepare_net_devs 00:15:56.976 00:50:09 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:15:56.976 00:50:09 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:15:56.976 00:50:09 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:56.976 00:50:09 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:56.976 00:50:09 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:56.976 00:50:09 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:15:56.976 00:50:09 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:15:56.976 00:50:09 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:15:56.976 00:50:09 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:15:56.976 00:50:09 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:15:56.976 00:50:09 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:15:56.976 00:50:09 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:56.976 00:50:09 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:56.976 00:50:09 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:15:56.976 00:50:09 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:15:56.976 00:50:09 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:15:56.976 00:50:09 -- nvmf/common.sh@145 -- # 
NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:15:56.976 00:50:09 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:15:56.976 00:50:09 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:56.976 00:50:09 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:15:56.976 00:50:09 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:15:56.976 00:50:09 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:15:56.976 00:50:09 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:15:56.976 00:50:09 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:15:56.976 00:50:09 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:15:56.976 Cannot find device "nvmf_tgt_br" 00:15:56.976 00:50:09 -- nvmf/common.sh@154 -- # true 00:15:56.976 00:50:09 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:15:56.976 Cannot find device "nvmf_tgt_br2" 00:15:56.976 00:50:09 -- nvmf/common.sh@155 -- # true 00:15:56.976 00:50:09 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:15:56.976 00:50:09 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:15:56.976 Cannot find device "nvmf_tgt_br" 00:15:56.976 00:50:09 -- nvmf/common.sh@157 -- # true 00:15:56.976 00:50:09 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:15:56.976 Cannot find device "nvmf_tgt_br2" 00:15:56.976 00:50:09 -- nvmf/common.sh@158 -- # true 00:15:56.976 00:50:09 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:15:56.976 00:50:09 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:15:56.976 00:50:09 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:15:56.976 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:56.976 00:50:09 -- nvmf/common.sh@161 -- # true 00:15:56.976 00:50:09 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:15:56.976 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:56.976 00:50:09 -- nvmf/common.sh@162 -- # true 00:15:56.976 00:50:09 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:15:57.234 00:50:09 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:15:57.234 00:50:09 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:15:57.234 00:50:09 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:15:57.234 00:50:09 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:15:57.234 00:50:09 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:15:57.234 00:50:09 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:15:57.235 00:50:09 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:15:57.235 00:50:09 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:15:57.235 00:50:09 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:15:57.235 00:50:09 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:15:57.235 00:50:09 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:15:57.235 00:50:09 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:15:57.235 00:50:09 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:15:57.235 00:50:09 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 
00:15:57.235 00:50:09 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:15:57.235 00:50:09 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:15:57.235 00:50:09 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:15:57.235 00:50:09 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:15:57.235 00:50:09 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:15:57.235 00:50:09 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:15:57.235 00:50:09 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:15:57.235 00:50:09 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:15:57.235 00:50:09 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:15:57.235 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:57.235 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.088 ms 00:15:57.235 00:15:57.235 --- 10.0.0.2 ping statistics --- 00:15:57.235 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:57.235 rtt min/avg/max/mdev = 0.088/0.088/0.088/0.000 ms 00:15:57.235 00:50:09 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:15:57.235 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:15:57.235 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.054 ms 00:15:57.235 00:15:57.235 --- 10.0.0.3 ping statistics --- 00:15:57.235 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:57.235 rtt min/avg/max/mdev = 0.054/0.054/0.054/0.000 ms 00:15:57.235 00:50:09 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:15:57.235 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:15:57.235 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.029 ms 00:15:57.235 00:15:57.235 --- 10.0.0.1 ping statistics --- 00:15:57.235 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:57.235 rtt min/avg/max/mdev = 0.029/0.029/0.029/0.000 ms 00:15:57.235 00:50:09 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:57.235 00:50:09 -- nvmf/common.sh@421 -- # return 0 00:15:57.235 00:50:09 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:15:57.235 00:50:09 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:57.235 00:50:09 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:15:57.235 00:50:09 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:15:57.235 00:50:09 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:57.235 00:50:09 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:15:57.235 00:50:09 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:15:57.235 00:50:09 -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:15:57.235 00:50:09 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:15:57.235 00:50:09 -- common/autotest_common.sh@722 -- # xtrace_disable 00:15:57.235 00:50:09 -- common/autotest_common.sh@10 -- # set +x 00:15:57.235 00:50:09 -- nvmf/common.sh@469 -- # nvmfpid=87061 00:15:57.235 00:50:09 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:15:57.235 00:50:09 -- nvmf/common.sh@470 -- # waitforlisten 87061 00:15:57.235 00:50:09 -- common/autotest_common.sh@829 -- # '[' -z 87061 ']' 00:15:57.235 00:50:09 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:57.235 00:50:09 -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:57.235 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
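As in the earlier nmic.sh run, nvmf_tgt is launched inside the target namespace and the harness then waits for its RPC socket at /var/tmp/spdk.sock before issuing any configuration calls. A minimal sketch of such a readiness wait, using rpc_get_methods as the probe; this is an assumption for illustration, and the autotest's waitforlisten helper may check readiness differently:

    # Sketch: poll the SPDK RPC socket until the freshly started nvmf_tgt answers.
    rpc=/path/to/spdk/scripts/rpc.py    # hypothetical path
    sock=/var/tmp/spdk.sock             # default RPC socket, as named in the trace
    for i in $(seq 1 100); do
        if $rpc -s "$sock" rpc_get_methods >/dev/null 2>&1; then
            break                       # target is up and accepting RPCs
        fi
        sleep 0.1
    done

Only after this point does fio.sh start creating the malloc, raid0, and concat bdevs that back the four namespaces used in the fio jobs below.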
00:15:57.235 00:50:09 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:57.235 00:50:09 -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:57.235 00:50:09 -- common/autotest_common.sh@10 -- # set +x 00:15:57.494 [2024-12-03 00:50:09.755006] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:15:57.494 [2024-12-03 00:50:09.755186] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:57.494 [2024-12-03 00:50:09.891157] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:15:57.494 [2024-12-03 00:50:09.971521] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:15:57.494 [2024-12-03 00:50:09.972042] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:57.494 [2024-12-03 00:50:09.972070] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:57.494 [2024-12-03 00:50:09.972082] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:57.494 [2024-12-03 00:50:09.972255] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:15:57.494 [2024-12-03 00:50:09.972639] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:15:57.494 [2024-12-03 00:50:09.972771] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:15:57.494 [2024-12-03 00:50:09.972785] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:58.430 00:50:10 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:58.430 00:50:10 -- common/autotest_common.sh@862 -- # return 0 00:15:58.430 00:50:10 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:15:58.430 00:50:10 -- common/autotest_common.sh@728 -- # xtrace_disable 00:15:58.430 00:50:10 -- common/autotest_common.sh@10 -- # set +x 00:15:58.430 00:50:10 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:58.430 00:50:10 -- target/fio.sh@19 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:15:58.689 [2024-12-03 00:50:11.090968] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:58.689 00:50:11 -- target/fio.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:15:58.947 00:50:11 -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:15:58.947 00:50:11 -- target/fio.sh@22 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:15:59.206 00:50:11 -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:15:59.206 00:50:11 -- target/fio.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:15:59.465 00:50:11 -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:15:59.465 00:50:11 -- target/fio.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:15:59.724 00:50:12 -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:15:59.724 00:50:12 -- target/fio.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:15:59.983 00:50:12 -- target/fio.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:16:00.242 00:50:12 -- 
target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:16:00.242 00:50:12 -- target/fio.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:16:00.501 00:50:12 -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:16:00.501 00:50:12 -- target/fio.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:16:00.760 00:50:13 -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:16:00.760 00:50:13 -- target/fio.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:16:01.018 00:50:13 -- target/fio.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:16:01.276 00:50:13 -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:16:01.276 00:50:13 -- target/fio.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:16:01.534 00:50:13 -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:16:01.534 00:50:13 -- target/fio.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:16:01.792 00:50:14 -- target/fio.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:02.051 [2024-12-03 00:50:14.442732] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:02.051 00:50:14 -- target/fio.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:16:02.309 00:50:14 -- target/fio.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:16:02.567 00:50:14 -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:15939434-fa82-47b6-ae7c-b62ba203cee8 --hostid=15939434-fa82-47b6-ae7c-b62ba203cee8 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:16:02.567 00:50:15 -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:16:02.567 00:50:15 -- common/autotest_common.sh@1187 -- # local i=0 00:16:02.567 00:50:15 -- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 00:16:02.567 00:50:15 -- common/autotest_common.sh@1189 -- # [[ -n 4 ]] 00:16:02.567 00:50:15 -- common/autotest_common.sh@1190 -- # nvme_device_counter=4 00:16:02.567 00:50:15 -- common/autotest_common.sh@1194 -- # sleep 2 00:16:05.102 00:50:17 -- common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 00:16:05.102 00:50:17 -- common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL 00:16:05.102 00:50:17 -- common/autotest_common.sh@1196 -- # grep -c SPDKISFASTANDAWESOME 00:16:05.102 00:50:17 -- common/autotest_common.sh@1196 -- # nvme_devices=4 00:16:05.102 00:50:17 -- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:16:05.102 00:50:17 -- common/autotest_common.sh@1197 -- # return 0 00:16:05.102 00:50:17 -- target/fio.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:16:05.102 [global] 00:16:05.102 thread=1 00:16:05.102 invalidate=1 00:16:05.102 rw=write 00:16:05.102 time_based=1 00:16:05.102 runtime=1 00:16:05.102 ioengine=libaio 00:16:05.102 direct=1 00:16:05.102 bs=4096 00:16:05.102 iodepth=1 00:16:05.102 norandommap=0 00:16:05.102 numjobs=1 00:16:05.102 00:16:05.102 verify_dump=1 00:16:05.102 verify_backlog=512 
00:16:05.102 verify_state_save=0 00:16:05.102 do_verify=1 00:16:05.102 verify=crc32c-intel 00:16:05.102 [job0] 00:16:05.102 filename=/dev/nvme0n1 00:16:05.102 [job1] 00:16:05.102 filename=/dev/nvme0n2 00:16:05.102 [job2] 00:16:05.102 filename=/dev/nvme0n3 00:16:05.102 [job3] 00:16:05.102 filename=/dev/nvme0n4 00:16:05.102 Could not set queue depth (nvme0n1) 00:16:05.102 Could not set queue depth (nvme0n2) 00:16:05.102 Could not set queue depth (nvme0n3) 00:16:05.102 Could not set queue depth (nvme0n4) 00:16:05.102 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:16:05.102 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:16:05.102 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:16:05.102 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:16:05.102 fio-3.35 00:16:05.102 Starting 4 threads 00:16:06.040 00:16:06.040 job0: (groupid=0, jobs=1): err= 0: pid=87354: Tue Dec 3 00:50:18 2024 00:16:06.040 read: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec) 00:16:06.040 slat (nsec): min=12733, max=52631, avg=15771.79, stdev=4432.96 00:16:06.040 clat (usec): min=131, max=335, avg=217.38, stdev=27.88 00:16:06.040 lat (usec): min=145, max=370, avg=233.15, stdev=28.24 00:16:06.040 clat percentiles (usec): 00:16:06.040 | 1.00th=[ 157], 5.00th=[ 178], 10.00th=[ 186], 20.00th=[ 196], 00:16:06.040 | 30.00th=[ 202], 40.00th=[ 208], 50.00th=[ 217], 60.00th=[ 223], 00:16:06.040 | 70.00th=[ 229], 80.00th=[ 237], 90.00th=[ 251], 95.00th=[ 265], 00:16:06.040 | 99.00th=[ 302], 99.50th=[ 310], 99.90th=[ 330], 99.95th=[ 334], 00:16:06.040 | 99.99th=[ 334] 00:16:06.040 write: IOPS=2381, BW=9526KiB/s (9755kB/s)(9536KiB/1001msec); 0 zone resets 00:16:06.040 slat (usec): min=18, max=105, avg=23.54, stdev= 6.15 00:16:06.040 clat (usec): min=104, max=393, avg=192.57, stdev=29.32 00:16:06.040 lat (usec): min=123, max=418, avg=216.11, stdev=29.95 00:16:06.040 clat percentiles (usec): 00:16:06.040 | 1.00th=[ 127], 5.00th=[ 149], 10.00th=[ 159], 20.00th=[ 169], 00:16:06.040 | 30.00th=[ 178], 40.00th=[ 184], 50.00th=[ 192], 60.00th=[ 198], 00:16:06.040 | 70.00th=[ 206], 80.00th=[ 215], 90.00th=[ 231], 95.00th=[ 245], 00:16:06.040 | 99.00th=[ 265], 99.50th=[ 273], 99.90th=[ 338], 99.95th=[ 375], 00:16:06.040 | 99.99th=[ 396] 00:16:06.040 bw ( KiB/s): min= 9416, max= 9416, per=24.90%, avg=9416.00, stdev= 0.00, samples=1 00:16:06.040 iops : min= 2354, max= 2354, avg=2354.00, stdev= 0.00, samples=1 00:16:06.040 lat (usec) : 250=93.16%, 500=6.84% 00:16:06.040 cpu : usr=1.60%, sys=6.40%, ctx=4432, majf=0, minf=7 00:16:06.040 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:06.040 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:06.040 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:06.040 issued rwts: total=2048,2384,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:06.040 latency : target=0, window=0, percentile=100.00%, depth=1 00:16:06.040 job1: (groupid=0, jobs=1): err= 0: pid=87355: Tue Dec 3 00:50:18 2024 00:16:06.040 read: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec) 00:16:06.040 slat (nsec): min=13413, max=90799, avg=17128.46, stdev=5267.91 00:16:06.040 clat (usec): min=125, max=327, avg=211.95, stdev=24.84 00:16:06.040 lat (usec): min=139, max=350, avg=229.07, stdev=25.31 00:16:06.040 clat percentiles 
(usec): 00:16:06.040 | 1.00th=[ 157], 5.00th=[ 174], 10.00th=[ 182], 20.00th=[ 192], 00:16:06.040 | 30.00th=[ 198], 40.00th=[ 204], 50.00th=[ 210], 60.00th=[ 217], 00:16:06.040 | 70.00th=[ 225], 80.00th=[ 233], 90.00th=[ 245], 95.00th=[ 253], 00:16:06.040 | 99.00th=[ 277], 99.50th=[ 285], 99.90th=[ 318], 99.95th=[ 326], 00:16:06.040 | 99.99th=[ 326] 00:16:06.040 write: IOPS=2509, BW=9.80MiB/s (10.3MB/s)(9.81MiB/1001msec); 0 zone resets 00:16:06.040 slat (nsec): min=18529, max=89868, avg=25435.12, stdev=7032.73 00:16:06.040 clat (usec): min=89, max=310, avg=183.01, stdev=29.35 00:16:06.040 lat (usec): min=110, max=331, avg=208.45, stdev=29.98 00:16:06.040 clat percentiles (usec): 00:16:06.040 | 1.00th=[ 106], 5.00th=[ 135], 10.00th=[ 149], 20.00th=[ 161], 00:16:06.040 | 30.00th=[ 169], 40.00th=[ 176], 50.00th=[ 182], 60.00th=[ 190], 00:16:06.040 | 70.00th=[ 198], 80.00th=[ 206], 90.00th=[ 221], 95.00th=[ 231], 00:16:06.040 | 99.00th=[ 258], 99.50th=[ 265], 99.90th=[ 277], 99.95th=[ 289], 00:16:06.040 | 99.99th=[ 310] 00:16:06.040 bw ( KiB/s): min= 9795, max= 9795, per=25.91%, avg=9795.00, stdev= 0.00, samples=1 00:16:06.040 iops : min= 2448, max= 2448, avg=2448.00, stdev= 0.00, samples=1 00:16:06.040 lat (usec) : 100=0.33%, 250=95.90%, 500=3.77% 00:16:06.040 cpu : usr=2.20%, sys=6.60%, ctx=4560, majf=0, minf=15 00:16:06.040 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:06.040 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:06.040 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:06.040 issued rwts: total=2048,2512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:06.040 latency : target=0, window=0, percentile=100.00%, depth=1 00:16:06.040 job2: (groupid=0, jobs=1): err= 0: pid=87356: Tue Dec 3 00:50:18 2024 00:16:06.040 read: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec) 00:16:06.040 slat (nsec): min=12123, max=56803, avg=15138.68, stdev=4284.76 00:16:06.040 clat (usec): min=148, max=461, avg=222.48, stdev=33.01 00:16:06.040 lat (usec): min=162, max=477, avg=237.61, stdev=33.31 00:16:06.040 clat percentiles (usec): 00:16:06.040 | 1.00th=[ 163], 5.00th=[ 174], 10.00th=[ 184], 20.00th=[ 198], 00:16:06.040 | 30.00th=[ 206], 40.00th=[ 212], 50.00th=[ 221], 60.00th=[ 227], 00:16:06.040 | 70.00th=[ 235], 80.00th=[ 245], 90.00th=[ 262], 95.00th=[ 281], 00:16:06.040 | 99.00th=[ 322], 99.50th=[ 338], 99.90th=[ 457], 99.95th=[ 461], 00:16:06.040 | 99.99th=[ 461] 00:16:06.040 write: IOPS=2237, BW=8951KiB/s (9166kB/s)(8960KiB/1001msec); 0 zone resets 00:16:06.040 slat (nsec): min=17066, max=87200, avg=23463.91, stdev=6715.15 00:16:06.040 clat (usec): min=108, max=2372, avg=202.70, stdev=65.79 00:16:06.040 lat (usec): min=126, max=2397, avg=226.17, stdev=66.21 00:16:06.040 clat percentiles (usec): 00:16:06.040 | 1.00th=[ 139], 5.00th=[ 153], 10.00th=[ 163], 20.00th=[ 176], 00:16:06.040 | 30.00th=[ 184], 40.00th=[ 192], 50.00th=[ 200], 60.00th=[ 206], 00:16:06.040 | 70.00th=[ 215], 80.00th=[ 225], 90.00th=[ 241], 95.00th=[ 255], 00:16:06.040 | 99.00th=[ 289], 99.50th=[ 355], 99.90th=[ 783], 99.95th=[ 1729], 00:16:06.040 | 99.99th=[ 2376] 00:16:06.040 bw ( KiB/s): min= 8817, max= 9112, per=23.71%, avg=8964.50, stdev=208.60, samples=2 00:16:06.040 iops : min= 2204, max= 2278, avg=2241.00, stdev=52.33, samples=2 00:16:06.040 lat (usec) : 250=89.48%, 500=10.45%, 1000=0.02% 00:16:06.040 lat (msec) : 2=0.02%, 4=0.02% 00:16:06.040 cpu : usr=1.10%, sys=6.60%, ctx=4288, majf=0, minf=7 00:16:06.040 IO depths : 1=100.0%, 
2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:06.040 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:06.040 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:06.040 issued rwts: total=2048,2240,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:06.040 latency : target=0, window=0, percentile=100.00%, depth=1 00:16:06.040 job3: (groupid=0, jobs=1): err= 0: pid=87357: Tue Dec 3 00:50:18 2024 00:16:06.040 read: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec) 00:16:06.040 slat (nsec): min=13698, max=64850, avg=16798.69, stdev=4988.67 00:16:06.040 clat (usec): min=133, max=331, avg=217.63, stdev=26.86 00:16:06.040 lat (usec): min=148, max=349, avg=234.43, stdev=27.29 00:16:06.041 clat percentiles (usec): 00:16:06.041 | 1.00th=[ 157], 5.00th=[ 178], 10.00th=[ 188], 20.00th=[ 196], 00:16:06.041 | 30.00th=[ 202], 40.00th=[ 210], 50.00th=[ 217], 60.00th=[ 223], 00:16:06.041 | 70.00th=[ 231], 80.00th=[ 239], 90.00th=[ 251], 95.00th=[ 265], 00:16:06.041 | 99.00th=[ 293], 99.50th=[ 310], 99.90th=[ 326], 99.95th=[ 326], 00:16:06.041 | 99.99th=[ 334] 00:16:06.041 write: IOPS=2323, BW=9295KiB/s (9518kB/s)(9304KiB/1001msec); 0 zone resets 00:16:06.041 slat (usec): min=19, max=130, avg=24.99, stdev= 6.77 00:16:06.041 clat (usec): min=109, max=1665, avg=195.61, stdev=47.94 00:16:06.041 lat (usec): min=130, max=1685, avg=220.60, stdev=48.17 00:16:06.041 clat percentiles (usec): 00:16:06.041 | 1.00th=[ 139], 5.00th=[ 153], 10.00th=[ 161], 20.00th=[ 172], 00:16:06.041 | 30.00th=[ 180], 40.00th=[ 186], 50.00th=[ 192], 60.00th=[ 200], 00:16:06.041 | 70.00th=[ 206], 80.00th=[ 217], 90.00th=[ 233], 95.00th=[ 247], 00:16:06.041 | 99.00th=[ 273], 99.50th=[ 289], 99.90th=[ 619], 99.95th=[ 1237], 00:16:06.041 | 99.99th=[ 1663] 00:16:06.041 bw ( KiB/s): min= 9208, max= 9208, per=24.35%, avg=9208.00, stdev= 0.00, samples=1 00:16:06.041 iops : min= 2302, max= 2302, avg=2302.00, stdev= 0.00, samples=1 00:16:06.041 lat (usec) : 250=92.89%, 500=7.04%, 750=0.02% 00:16:06.041 lat (msec) : 2=0.05% 00:16:06.041 cpu : usr=2.00%, sys=6.10%, ctx=4375, majf=0, minf=7 00:16:06.041 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:06.041 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:06.041 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:06.041 issued rwts: total=2048,2326,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:06.041 latency : target=0, window=0, percentile=100.00%, depth=1 00:16:06.041 00:16:06.041 Run status group 0 (all jobs): 00:16:06.041 READ: bw=32.0MiB/s (33.5MB/s), 8184KiB/s-8184KiB/s (8380kB/s-8380kB/s), io=32.0MiB (33.6MB), run=1001-1001msec 00:16:06.041 WRITE: bw=36.9MiB/s (38.7MB/s), 8951KiB/s-9.80MiB/s (9166kB/s-10.3MB/s), io=37.0MiB (38.8MB), run=1001-1001msec 00:16:06.041 00:16:06.041 Disk stats (read/write): 00:16:06.041 nvme0n1: ios=1756/2048, merge=0/0, ticks=393/418, in_queue=811, util=86.67% 00:16:06.041 nvme0n2: ios=1829/2048, merge=0/0, ticks=423/404, in_queue=827, util=88.00% 00:16:06.041 nvme0n3: ios=1598/2048, merge=0/0, ticks=362/427, in_queue=789, util=88.99% 00:16:06.041 nvme0n4: ios=1664/2048, merge=0/0, ticks=367/411, in_queue=778, util=89.56% 00:16:06.041 00:50:18 -- target/fio.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:16:06.041 [global] 00:16:06.041 thread=1 00:16:06.041 invalidate=1 00:16:06.041 rw=randwrite 00:16:06.041 time_based=1 00:16:06.041 runtime=1 00:16:06.041 
ioengine=libaio 00:16:06.041 direct=1 00:16:06.041 bs=4096 00:16:06.041 iodepth=1 00:16:06.041 norandommap=0 00:16:06.041 numjobs=1 00:16:06.041 00:16:06.041 verify_dump=1 00:16:06.041 verify_backlog=512 00:16:06.041 verify_state_save=0 00:16:06.041 do_verify=1 00:16:06.041 verify=crc32c-intel 00:16:06.041 [job0] 00:16:06.041 filename=/dev/nvme0n1 00:16:06.041 [job1] 00:16:06.041 filename=/dev/nvme0n2 00:16:06.041 [job2] 00:16:06.041 filename=/dev/nvme0n3 00:16:06.041 [job3] 00:16:06.041 filename=/dev/nvme0n4 00:16:06.300 Could not set queue depth (nvme0n1) 00:16:06.300 Could not set queue depth (nvme0n2) 00:16:06.300 Could not set queue depth (nvme0n3) 00:16:06.300 Could not set queue depth (nvme0n4) 00:16:06.300 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:16:06.300 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:16:06.300 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:16:06.300 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:16:06.300 fio-3.35 00:16:06.300 Starting 4 threads 00:16:07.677 00:16:07.677 job0: (groupid=0, jobs=1): err= 0: pid=87416: Tue Dec 3 00:50:19 2024 00:16:07.677 read: IOPS=1111, BW=4448KiB/s (4554kB/s)(4452KiB/1001msec) 00:16:07.677 slat (nsec): min=17086, max=73493, avg=30986.57, stdev=8522.33 00:16:07.677 clat (usec): min=195, max=3486, avg=377.41, stdev=104.69 00:16:07.677 lat (usec): min=214, max=3506, avg=408.39, stdev=104.71 00:16:07.677 clat percentiles (usec): 00:16:07.677 | 1.00th=[ 243], 5.00th=[ 306], 10.00th=[ 322], 20.00th=[ 343], 00:16:07.677 | 30.00th=[ 355], 40.00th=[ 367], 50.00th=[ 375], 60.00th=[ 383], 00:16:07.677 | 70.00th=[ 392], 80.00th=[ 408], 90.00th=[ 433], 95.00th=[ 453], 00:16:07.677 | 99.00th=[ 502], 99.50th=[ 545], 99.90th=[ 619], 99.95th=[ 3490], 00:16:07.677 | 99.99th=[ 3490] 00:16:07.677 write: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec); 0 zone resets 00:16:07.677 slat (usec): min=28, max=108, avg=45.39, stdev= 9.90 00:16:07.677 clat (usec): min=151, max=678, avg=304.13, stdev=63.33 00:16:07.677 lat (usec): min=200, max=730, avg=349.52, stdev=63.47 00:16:07.677 clat percentiles (usec): 00:16:07.677 | 1.00th=[ 182], 5.00th=[ 219], 10.00th=[ 235], 20.00th=[ 253], 00:16:07.677 | 30.00th=[ 269], 40.00th=[ 281], 50.00th=[ 293], 60.00th=[ 306], 00:16:07.677 | 70.00th=[ 326], 80.00th=[ 363], 90.00th=[ 400], 95.00th=[ 420], 00:16:07.677 | 99.00th=[ 469], 99.50th=[ 478], 99.90th=[ 537], 99.95th=[ 676], 00:16:07.677 | 99.99th=[ 676] 00:16:07.677 bw ( KiB/s): min= 6400, max= 6400, per=19.68%, avg=6400.00, stdev= 0.00, samples=1 00:16:07.677 iops : min= 1600, max= 1600, avg=1600.00, stdev= 0.00, samples=1 00:16:07.677 lat (usec) : 250=10.68%, 500=88.71%, 750=0.57% 00:16:07.677 lat (msec) : 4=0.04% 00:16:07.677 cpu : usr=1.80%, sys=7.90%, ctx=2660, majf=0, minf=13 00:16:07.677 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:07.677 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:07.677 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:07.677 issued rwts: total=1113,1536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:07.677 latency : target=0, window=0, percentile=100.00%, depth=1 00:16:07.677 job1: (groupid=0, jobs=1): err= 0: pid=87417: Tue Dec 3 00:50:19 2024 00:16:07.677 read: IOPS=2147, BW=8591KiB/s 
(8798kB/s)(8600KiB/1001msec) 00:16:07.677 slat (nsec): min=13327, max=59107, avg=16087.40, stdev=5299.61 00:16:07.677 clat (usec): min=159, max=453, avg=205.99, stdev=22.05 00:16:07.677 lat (usec): min=174, max=468, avg=222.08, stdev=22.66 00:16:07.677 clat percentiles (usec): 00:16:07.677 | 1.00th=[ 169], 5.00th=[ 178], 10.00th=[ 182], 20.00th=[ 188], 00:16:07.677 | 30.00th=[ 194], 40.00th=[ 198], 50.00th=[ 202], 60.00th=[ 208], 00:16:07.677 | 70.00th=[ 215], 80.00th=[ 223], 90.00th=[ 235], 95.00th=[ 245], 00:16:07.677 | 99.00th=[ 273], 99.50th=[ 281], 99.90th=[ 293], 99.95th=[ 359], 00:16:07.677 | 99.99th=[ 453] 00:16:07.677 write: IOPS=2557, BW=9.99MiB/s (10.5MB/s)(10.0MiB/1001msec); 0 zone resets 00:16:07.677 slat (usec): min=18, max=108, avg=23.55, stdev= 8.01 00:16:07.677 clat (usec): min=120, max=1202, avg=177.67, stdev=37.18 00:16:07.677 lat (usec): min=140, max=1234, avg=201.22, stdev=38.69 00:16:07.677 clat percentiles (usec): 00:16:07.677 | 1.00th=[ 137], 5.00th=[ 145], 10.00th=[ 151], 20.00th=[ 157], 00:16:07.677 | 30.00th=[ 163], 40.00th=[ 169], 50.00th=[ 174], 60.00th=[ 180], 00:16:07.677 | 70.00th=[ 186], 80.00th=[ 196], 90.00th=[ 208], 95.00th=[ 219], 00:16:07.677 | 99.00th=[ 249], 99.50th=[ 260], 99.90th=[ 578], 99.95th=[ 1139], 00:16:07.677 | 99.99th=[ 1205] 00:16:07.677 bw ( KiB/s): min=10840, max=10840, per=33.33%, avg=10840.00, stdev= 0.00, samples=1 00:16:07.677 iops : min= 2710, max= 2710, avg=2710.00, stdev= 0.00, samples=1 00:16:07.677 lat (usec) : 250=97.94%, 500=2.00%, 750=0.02% 00:16:07.677 lat (msec) : 2=0.04% 00:16:07.677 cpu : usr=1.60%, sys=6.80%, ctx=4710, majf=0, minf=7 00:16:07.677 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:07.677 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:07.677 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:07.677 issued rwts: total=2150,2560,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:07.677 latency : target=0, window=0, percentile=100.00%, depth=1 00:16:07.677 job2: (groupid=0, jobs=1): err= 0: pid=87418: Tue Dec 3 00:50:19 2024 00:16:07.677 read: IOPS=1086, BW=4348KiB/s (4452kB/s)(4352KiB/1001msec) 00:16:07.677 slat (nsec): min=15804, max=89325, avg=24279.48, stdev=10450.87 00:16:07.677 clat (usec): min=188, max=1957, avg=390.29, stdev=79.58 00:16:07.677 lat (usec): min=205, max=1997, avg=414.57, stdev=80.11 00:16:07.677 clat percentiles (usec): 00:16:07.677 | 1.00th=[ 285], 5.00th=[ 314], 10.00th=[ 330], 20.00th=[ 347], 00:16:07.677 | 30.00th=[ 363], 40.00th=[ 371], 50.00th=[ 383], 60.00th=[ 396], 00:16:07.677 | 70.00th=[ 412], 80.00th=[ 424], 90.00th=[ 449], 95.00th=[ 474], 00:16:07.677 | 99.00th=[ 586], 99.50th=[ 611], 99.90th=[ 1500], 99.95th=[ 1958], 00:16:07.677 | 99.99th=[ 1958] 00:16:07.677 write: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec); 0 zone resets 00:16:07.677 slat (nsec): min=19449, max=98811, avg=41655.17, stdev=10129.39 00:16:07.677 clat (usec): min=157, max=2953, avg=311.01, stdev=90.69 00:16:07.677 lat (usec): min=192, max=2992, avg=352.66, stdev=90.77 00:16:07.678 clat percentiles (usec): 00:16:07.678 | 1.00th=[ 204], 5.00th=[ 233], 10.00th=[ 245], 20.00th=[ 262], 00:16:07.678 | 30.00th=[ 273], 40.00th=[ 285], 50.00th=[ 293], 60.00th=[ 310], 00:16:07.678 | 70.00th=[ 334], 80.00th=[ 371], 90.00th=[ 396], 95.00th=[ 420], 00:16:07.678 | 99.00th=[ 465], 99.50th=[ 482], 99.90th=[ 693], 99.95th=[ 2966], 00:16:07.678 | 99.99th=[ 2966] 00:16:07.678 bw ( KiB/s): min= 6264, max= 6264, per=19.26%, avg=6264.00, 
stdev= 0.00, samples=1 00:16:07.678 iops : min= 1566, max= 1566, avg=1566.00, stdev= 0.00, samples=1 00:16:07.678 lat (usec) : 250=8.00%, 500=90.70%, 750=1.14%, 1000=0.04% 00:16:07.678 lat (msec) : 2=0.08%, 4=0.04% 00:16:07.678 cpu : usr=1.40%, sys=7.10%, ctx=2625, majf=0, minf=16 00:16:07.678 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:07.678 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:07.678 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:07.678 issued rwts: total=1088,1536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:07.678 latency : target=0, window=0, percentile=100.00%, depth=1 00:16:07.678 job3: (groupid=0, jobs=1): err= 0: pid=87419: Tue Dec 3 00:50:19 2024 00:16:07.678 read: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec) 00:16:07.678 slat (nsec): min=12808, max=70993, avg=18649.71, stdev=7465.35 00:16:07.678 clat (usec): min=145, max=579, avg=209.24, stdev=32.30 00:16:07.678 lat (usec): min=158, max=603, avg=227.89, stdev=34.15 00:16:07.678 clat percentiles (usec): 00:16:07.678 | 1.00th=[ 163], 5.00th=[ 169], 10.00th=[ 176], 20.00th=[ 184], 00:16:07.678 | 30.00th=[ 190], 40.00th=[ 198], 50.00th=[ 204], 60.00th=[ 210], 00:16:07.678 | 70.00th=[ 221], 80.00th=[ 231], 90.00th=[ 249], 95.00th=[ 265], 00:16:07.678 | 99.00th=[ 318], 99.50th=[ 330], 99.90th=[ 371], 99.95th=[ 461], 00:16:07.678 | 99.99th=[ 578] 00:16:07.678 write: IOPS=2503, BW=9.78MiB/s (10.3MB/s)(9.79MiB/1001msec); 0 zone resets 00:16:07.678 slat (usec): min=17, max=111, avg=28.07, stdev=10.22 00:16:07.678 clat (usec): min=117, max=603, avg=181.35, stdev=33.01 00:16:07.678 lat (usec): min=136, max=622, avg=209.41, stdev=34.90 00:16:07.678 clat percentiles (usec): 00:16:07.678 | 1.00th=[ 131], 5.00th=[ 141], 10.00th=[ 145], 20.00th=[ 153], 00:16:07.678 | 30.00th=[ 161], 40.00th=[ 169], 50.00th=[ 178], 60.00th=[ 184], 00:16:07.678 | 70.00th=[ 194], 80.00th=[ 206], 90.00th=[ 225], 95.00th=[ 243], 00:16:07.678 | 99.00th=[ 273], 99.50th=[ 285], 99.90th=[ 326], 99.95th=[ 326], 00:16:07.678 | 99.99th=[ 603] 00:16:07.678 bw ( KiB/s): min=10576, max=10576, per=32.52%, avg=10576.00, stdev= 0.00, samples=1 00:16:07.678 iops : min= 2644, max= 2644, avg=2644.00, stdev= 0.00, samples=1 00:16:07.678 lat (usec) : 250=93.70%, 500=6.26%, 750=0.04% 00:16:07.678 cpu : usr=1.30%, sys=8.50%, ctx=4555, majf=0, minf=9 00:16:07.678 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:07.678 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:07.678 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:07.678 issued rwts: total=2048,2506,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:07.678 latency : target=0, window=0, percentile=100.00%, depth=1 00:16:07.678 00:16:07.678 Run status group 0 (all jobs): 00:16:07.678 READ: bw=25.0MiB/s (26.2MB/s), 4348KiB/s-8591KiB/s (4452kB/s-8798kB/s), io=25.0MiB (26.2MB), run=1001-1001msec 00:16:07.678 WRITE: bw=31.8MiB/s (33.3MB/s), 6138KiB/s-9.99MiB/s (6285kB/s-10.5MB/s), io=31.8MiB (33.3MB), run=1001-1001msec 00:16:07.678 00:16:07.678 Disk stats (read/write): 00:16:07.678 nvme0n1: ios=1074/1280, merge=0/0, ticks=424/403, in_queue=827, util=89.28% 00:16:07.678 nvme0n2: ios=2097/2054, merge=0/0, ticks=469/380, in_queue=849, util=90.11% 00:16:07.678 nvme0n3: ios=1060/1260, merge=0/0, ticks=471/407, in_queue=878, util=90.69% 00:16:07.678 nvme0n4: ios=1969/2048, merge=0/0, ticks=485/383, in_queue=868, util=90.85% 00:16:07.678 00:50:19 -- 
target/fio.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:16:07.678 [global] 00:16:07.678 thread=1 00:16:07.678 invalidate=1 00:16:07.678 rw=write 00:16:07.678 time_based=1 00:16:07.678 runtime=1 00:16:07.678 ioengine=libaio 00:16:07.678 direct=1 00:16:07.678 bs=4096 00:16:07.678 iodepth=128 00:16:07.678 norandommap=0 00:16:07.678 numjobs=1 00:16:07.678 00:16:07.678 verify_dump=1 00:16:07.678 verify_backlog=512 00:16:07.678 verify_state_save=0 00:16:07.678 do_verify=1 00:16:07.678 verify=crc32c-intel 00:16:07.678 [job0] 00:16:07.678 filename=/dev/nvme0n1 00:16:07.678 [job1] 00:16:07.678 filename=/dev/nvme0n2 00:16:07.678 [job2] 00:16:07.678 filename=/dev/nvme0n3 00:16:07.678 [job3] 00:16:07.678 filename=/dev/nvme0n4 00:16:07.678 Could not set queue depth (nvme0n1) 00:16:07.678 Could not set queue depth (nvme0n2) 00:16:07.678 Could not set queue depth (nvme0n3) 00:16:07.678 Could not set queue depth (nvme0n4) 00:16:07.678 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:16:07.678 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:16:07.678 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:16:07.678 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:16:07.678 fio-3.35 00:16:07.678 Starting 4 threads 00:16:09.053 00:16:09.053 job0: (groupid=0, jobs=1): err= 0: pid=87478: Tue Dec 3 00:50:21 2024 00:16:09.053 read: IOPS=2403, BW=9614KiB/s (9844kB/s)(9652KiB/1004msec) 00:16:09.053 slat (usec): min=3, max=18705, avg=199.74, stdev=1062.37 00:16:09.053 clat (usec): min=1849, max=42856, avg=24833.70, stdev=4268.24 00:16:09.053 lat (usec): min=6594, max=42890, avg=25033.44, stdev=4339.06 00:16:09.053 clat percentiles (usec): 00:16:09.054 | 1.00th=[11731], 5.00th=[17957], 10.00th=[20055], 20.00th=[22676], 00:16:09.054 | 30.00th=[23462], 40.00th=[23987], 50.00th=[24511], 60.00th=[24773], 00:16:09.054 | 70.00th=[25560], 80.00th=[27657], 90.00th=[30540], 95.00th=[32637], 00:16:09.054 | 99.00th=[34866], 99.50th=[36439], 99.90th=[38011], 99.95th=[42206], 00:16:09.054 | 99.99th=[42730] 00:16:09.054 write: IOPS=2549, BW=9.96MiB/s (10.4MB/s)(10.0MiB/1004msec); 0 zone resets 00:16:09.054 slat (usec): min=5, max=12690, avg=195.32, stdev=1137.41 00:16:09.054 clat (usec): min=14137, max=43484, avg=26053.58, stdev=3371.42 00:16:09.054 lat (usec): min=14211, max=43565, avg=26248.91, stdev=3533.25 00:16:09.054 clat percentiles (usec): 00:16:09.054 | 1.00th=[18482], 5.00th=[20055], 10.00th=[22676], 20.00th=[23987], 00:16:09.054 | 30.00th=[24773], 40.00th=[25297], 50.00th=[25560], 60.00th=[26608], 00:16:09.054 | 70.00th=[27395], 80.00th=[28443], 90.00th=[29492], 95.00th=[30540], 00:16:09.054 | 99.00th=[38011], 99.50th=[39584], 99.90th=[40109], 99.95th=[40109], 00:16:09.054 | 99.99th=[43254] 00:16:09.054 bw ( KiB/s): min= 9552, max=10884, per=20.10%, avg=10218.00, stdev=941.87, samples=2 00:16:09.054 iops : min= 2388, max= 2721, avg=2554.50, stdev=235.47, samples=2 00:16:09.054 lat (msec) : 2=0.02%, 10=0.24%, 20=5.93%, 50=93.81% 00:16:09.054 cpu : usr=2.79%, sys=6.58%, ctx=573, majf=0, minf=13 00:16:09.054 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.7% 00:16:09.054 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:09.054 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 
64=0.0%, >=64=0.1% 00:16:09.054 issued rwts: total=2413,2560,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:09.054 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:09.054 job1: (groupid=0, jobs=1): err= 0: pid=87479: Tue Dec 3 00:50:21 2024 00:16:09.054 read: IOPS=2328, BW=9315KiB/s (9538kB/s)(9380KiB/1007msec) 00:16:09.054 slat (usec): min=3, max=11420, avg=202.76, stdev=1027.35 00:16:09.054 clat (usec): min=2106, max=39308, avg=25288.56, stdev=4738.33 00:16:09.054 lat (usec): min=6094, max=47366, avg=25491.32, stdev=4804.56 00:16:09.054 clat percentiles (usec): 00:16:09.054 | 1.00th=[ 6718], 5.00th=[18220], 10.00th=[21365], 20.00th=[23200], 00:16:09.054 | 30.00th=[23725], 40.00th=[23987], 50.00th=[24773], 60.00th=[25297], 00:16:09.054 | 70.00th=[26084], 80.00th=[29230], 90.00th=[31327], 95.00th=[33817], 00:16:09.054 | 99.00th=[36963], 99.50th=[39060], 99.90th=[39060], 99.95th=[39060], 00:16:09.054 | 99.99th=[39060] 00:16:09.054 write: IOPS=2542, BW=9.93MiB/s (10.4MB/s)(10.0MiB/1007msec); 0 zone resets 00:16:09.054 slat (usec): min=5, max=13061, avg=198.61, stdev=1138.47 00:16:09.054 clat (usec): min=15817, max=42485, avg=26402.88, stdev=3353.84 00:16:09.054 lat (usec): min=15840, max=42498, avg=26601.49, stdev=3497.27 00:16:09.054 clat percentiles (usec): 00:16:09.054 | 1.00th=[16712], 5.00th=[21103], 10.00th=[22938], 20.00th=[24249], 00:16:09.054 | 30.00th=[24773], 40.00th=[25560], 50.00th=[26346], 60.00th=[27132], 00:16:09.054 | 70.00th=[27919], 80.00th=[28705], 90.00th=[29754], 95.00th=[32113], 00:16:09.054 | 99.00th=[36439], 99.50th=[38011], 99.90th=[42206], 99.95th=[42730], 00:16:09.054 | 99.99th=[42730] 00:16:09.054 bw ( KiB/s): min= 9672, max=10808, per=20.14%, avg=10240.00, stdev=803.27, samples=2 00:16:09.054 iops : min= 2418, max= 2702, avg=2560.00, stdev=200.82, samples=2 00:16:09.054 lat (msec) : 4=0.02%, 10=0.90%, 20=4.38%, 50=94.70% 00:16:09.054 cpu : usr=2.78%, sys=6.76%, ctx=550, majf=0, minf=10 00:16:09.054 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.7%, >=64=98.7% 00:16:09.054 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:09.054 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:16:09.054 issued rwts: total=2345,2560,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:09.054 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:09.054 job2: (groupid=0, jobs=1): err= 0: pid=87480: Tue Dec 3 00:50:21 2024 00:16:09.054 read: IOPS=4000, BW=15.6MiB/s (16.4MB/s)(15.7MiB/1002msec) 00:16:09.054 slat (usec): min=6, max=4926, avg=115.79, stdev=602.50 00:16:09.054 clat (usec): min=504, max=20282, avg=15394.00, stdev=1798.76 00:16:09.054 lat (usec): min=4460, max=24997, avg=15509.79, stdev=1817.18 00:16:09.054 clat percentiles (usec): 00:16:09.054 | 1.00th=[ 5342], 5.00th=[12256], 10.00th=[13960], 20.00th=[14746], 00:16:09.054 | 30.00th=[15139], 40.00th=[15401], 50.00th=[15533], 60.00th=[15795], 00:16:09.054 | 70.00th=[16057], 80.00th=[16581], 90.00th=[16909], 95.00th=[17695], 00:16:09.054 | 99.00th=[18744], 99.50th=[19530], 99.90th=[20055], 99.95th=[20317], 00:16:09.054 | 99.99th=[20317] 00:16:09.054 write: IOPS=4087, BW=16.0MiB/s (16.7MB/s)(16.0MiB/1002msec); 0 zone resets 00:16:09.054 slat (usec): min=13, max=5560, avg=122.72, stdev=644.08 00:16:09.054 clat (usec): min=10633, max=18874, avg=15794.26, stdev=2010.81 00:16:09.054 lat (usec): min=10666, max=18899, avg=15916.98, stdev=1954.24 00:16:09.054 clat percentiles (usec): 00:16:09.054 | 1.00th=[11469], 5.00th=[11994], 10.00th=[12387], 
20.00th=[13435], 00:16:09.054 | 30.00th=[15401], 40.00th=[16057], 50.00th=[16450], 60.00th=[16712], 00:16:09.054 | 70.00th=[17171], 80.00th=[17433], 90.00th=[17957], 95.00th=[18482], 00:16:09.054 | 99.00th=[18744], 99.50th=[18744], 99.90th=[18744], 99.95th=[18744], 00:16:09.054 | 99.99th=[19006] 00:16:09.054 bw ( KiB/s): min=16384, max=16416, per=32.26%, avg=16400.00, stdev=22.63, samples=2 00:16:09.054 iops : min= 4096, max= 4104, avg=4100.00, stdev= 5.66, samples=2 00:16:09.054 lat (usec) : 750=0.01% 00:16:09.054 lat (msec) : 10=0.60%, 20=99.31%, 50=0.07% 00:16:09.054 cpu : usr=3.40%, sys=12.99%, ctx=393, majf=0, minf=9 00:16:09.054 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:16:09.054 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:09.054 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:16:09.054 issued rwts: total=4008,4096,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:09.054 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:09.054 job3: (groupid=0, jobs=1): err= 0: pid=87481: Tue Dec 3 00:50:21 2024 00:16:09.054 read: IOPS=3126, BW=12.2MiB/s (12.8MB/s)(12.3MiB/1005msec) 00:16:09.054 slat (usec): min=10, max=8829, avg=142.30, stdev=774.67 00:16:09.054 clat (usec): min=4154, max=27203, avg=18568.45, stdev=2538.04 00:16:09.054 lat (usec): min=4168, max=28711, avg=18710.75, stdev=2581.74 00:16:09.054 clat percentiles (usec): 00:16:09.054 | 1.00th=[ 7046], 5.00th=[15270], 10.00th=[16319], 20.00th=[17433], 00:16:09.054 | 30.00th=[17957], 40.00th=[18220], 50.00th=[18482], 60.00th=[18744], 00:16:09.054 | 70.00th=[19268], 80.00th=[20317], 90.00th=[21103], 95.00th=[22414], 00:16:09.054 | 99.00th=[24773], 99.50th=[25297], 99.90th=[26346], 99.95th=[26346], 00:16:09.054 | 99.99th=[27132] 00:16:09.054 write: IOPS=3566, BW=13.9MiB/s (14.6MB/s)(14.0MiB/1005msec); 0 zone resets 00:16:09.054 slat (usec): min=14, max=9006, avg=144.87, stdev=827.66 00:16:09.054 clat (usec): min=10708, max=29093, avg=19093.31, stdev=2228.36 00:16:09.054 lat (usec): min=10732, max=29130, avg=19238.18, stdev=2194.48 00:16:09.054 clat percentiles (usec): 00:16:09.054 | 1.00th=[11731], 5.00th=[13042], 10.00th=[17171], 20.00th=[18482], 00:16:09.054 | 30.00th=[19006], 40.00th=[19268], 50.00th=[19268], 60.00th=[19530], 00:16:09.054 | 70.00th=[19792], 80.00th=[20055], 90.00th=[20579], 95.00th=[21365], 00:16:09.054 | 99.00th=[26084], 99.50th=[27132], 99.90th=[28967], 99.95th=[28967], 00:16:09.054 | 99.99th=[28967] 00:16:09.054 bw ( KiB/s): min=13250, max=14992, per=27.77%, avg=14121.00, stdev=1231.78, samples=2 00:16:09.054 iops : min= 3312, max= 3748, avg=3530.00, stdev=308.30, samples=2 00:16:09.054 lat (msec) : 10=0.70%, 20=74.28%, 50=25.02% 00:16:09.054 cpu : usr=4.08%, sys=13.05%, ctx=253, majf=0, minf=13 00:16:09.054 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.1% 00:16:09.054 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:09.054 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:16:09.054 issued rwts: total=3142,3584,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:09.054 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:09.054 00:16:09.054 Run status group 0 (all jobs): 00:16:09.054 READ: bw=46.2MiB/s (48.4MB/s), 9315KiB/s-15.6MiB/s (9538kB/s-16.4MB/s), io=46.5MiB (48.8MB), run=1002-1007msec 00:16:09.054 WRITE: bw=49.7MiB/s (52.1MB/s), 9.93MiB/s-16.0MiB/s (10.4MB/s-16.7MB/s), io=50.0MiB (52.4MB), run=1002-1007msec 00:16:09.054 
00:16:09.054 Disk stats (read/write): 00:16:09.054 nvme0n1: ios=2098/2179, merge=0/0, ticks=24356/25415, in_queue=49771, util=87.20% 00:16:09.054 nvme0n2: ios=2097/2132, merge=0/0, ticks=24986/25568, in_queue=50554, util=88.08% 00:16:09.054 nvme0n3: ios=3392/3584, merge=0/0, ticks=15979/16250, in_queue=32229, util=89.90% 00:16:09.054 nvme0n4: ios=2640/3072, merge=0/0, ticks=23118/25263, in_queue=48381, util=89.59% 00:16:09.054 00:50:21 -- target/fio.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:16:09.054 [global] 00:16:09.054 thread=1 00:16:09.054 invalidate=1 00:16:09.054 rw=randwrite 00:16:09.054 time_based=1 00:16:09.054 runtime=1 00:16:09.054 ioengine=libaio 00:16:09.054 direct=1 00:16:09.054 bs=4096 00:16:09.054 iodepth=128 00:16:09.054 norandommap=0 00:16:09.054 numjobs=1 00:16:09.054 00:16:09.054 verify_dump=1 00:16:09.054 verify_backlog=512 00:16:09.054 verify_state_save=0 00:16:09.054 do_verify=1 00:16:09.054 verify=crc32c-intel 00:16:09.054 [job0] 00:16:09.054 filename=/dev/nvme0n1 00:16:09.054 [job1] 00:16:09.054 filename=/dev/nvme0n2 00:16:09.054 [job2] 00:16:09.054 filename=/dev/nvme0n3 00:16:09.054 [job3] 00:16:09.054 filename=/dev/nvme0n4 00:16:09.054 Could not set queue depth (nvme0n1) 00:16:09.054 Could not set queue depth (nvme0n2) 00:16:09.054 Could not set queue depth (nvme0n3) 00:16:09.054 Could not set queue depth (nvme0n4) 00:16:09.054 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:16:09.054 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:16:09.054 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:16:09.054 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:16:09.054 fio-3.35 00:16:09.054 Starting 4 threads 00:16:10.435 00:16:10.435 job0: (groupid=0, jobs=1): err= 0: pid=87535: Tue Dec 3 00:50:22 2024 00:16:10.435 read: IOPS=4385, BW=17.1MiB/s (18.0MB/s)(17.2MiB/1006msec) 00:16:10.435 slat (usec): min=11, max=10644, avg=102.81, stdev=705.22 00:16:10.435 clat (usec): min=5108, max=23503, avg=14331.28, stdev=2736.00 00:16:10.435 lat (usec): min=5144, max=23535, avg=14434.09, stdev=2768.60 00:16:10.435 clat percentiles (usec): 00:16:10.435 | 1.00th=[ 8979], 5.00th=[10028], 10.00th=[10683], 20.00th=[11863], 00:16:10.435 | 30.00th=[12911], 40.00th=[13566], 50.00th=[14222], 60.00th=[15008], 00:16:10.435 | 70.00th=[15795], 80.00th=[16909], 90.00th=[17957], 95.00th=[19268], 00:16:10.435 | 99.00th=[19792], 99.50th=[20055], 99.90th=[20317], 99.95th=[22938], 00:16:10.435 | 99.99th=[23462] 00:16:10.435 write: IOPS=4580, BW=17.9MiB/s (18.8MB/s)(18.0MiB/1006msec); 0 zone resets 00:16:10.435 slat (usec): min=7, max=12242, avg=109.91, stdev=876.96 00:16:10.435 clat (usec): min=5638, max=26951, avg=13927.15, stdev=1767.10 00:16:10.435 lat (usec): min=5666, max=27016, avg=14037.06, stdev=1963.39 00:16:10.435 clat percentiles (usec): 00:16:10.435 | 1.00th=[ 7111], 5.00th=[11731], 10.00th=[12256], 20.00th=[12780], 00:16:10.435 | 30.00th=[13173], 40.00th=[13566], 50.00th=[13960], 60.00th=[14353], 00:16:10.435 | 70.00th=[14746], 80.00th=[15008], 90.00th=[15664], 95.00th=[16057], 00:16:10.435 | 99.00th=[17695], 99.50th=[22938], 99.90th=[25560], 99.95th=[26346], 00:16:10.435 | 99.99th=[26870] 00:16:10.435 bw ( KiB/s): min=17744, max=19120, per=36.27%, avg=18432.00, stdev=972.98, 
samples=2 00:16:10.435 iops : min= 4436, max= 4780, avg=4608.00, stdev=243.24, samples=2 00:16:10.435 lat (msec) : 10=3.46%, 20=95.81%, 50=0.73% 00:16:10.435 cpu : usr=4.28%, sys=15.52%, ctx=260, majf=0, minf=9 00:16:10.435 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:16:10.435 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:10.435 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:16:10.435 issued rwts: total=4412,4608,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:10.435 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:10.435 job1: (groupid=0, jobs=1): err= 0: pid=87537: Tue Dec 3 00:50:22 2024 00:16:10.435 read: IOPS=1043, BW=4174KiB/s (4274kB/s)(4228KiB/1013msec) 00:16:10.435 slat (usec): min=6, max=46149, avg=496.79, stdev=3008.53 00:16:10.435 clat (msec): min=4, max=176, avg=69.37, stdev=48.51 00:16:10.435 lat (msec): min=12, max=176, avg=69.87, stdev=48.72 00:16:10.435 clat percentiles (msec): 00:16:10.435 | 1.00th=[ 13], 5.00th=[ 29], 10.00th=[ 31], 20.00th=[ 33], 00:16:10.435 | 30.00th=[ 33], 40.00th=[ 33], 50.00th=[ 44], 60.00th=[ 72], 00:16:10.435 | 70.00th=[ 86], 80.00th=[ 125], 90.00th=[ 153], 95.00th=[ 163], 00:16:10.435 | 99.00th=[ 174], 99.50th=[ 178], 99.90th=[ 178], 99.95th=[ 178], 00:16:10.435 | 99.99th=[ 178] 00:16:10.435 write: IOPS=1516, BW=6065KiB/s (6211kB/s)(6144KiB/1013msec); 0 zone resets 00:16:10.435 slat (usec): min=13, max=35995, avg=306.45, stdev=2117.14 00:16:10.435 clat (msec): min=17, max=110, avg=33.68, stdev=21.13 00:16:10.435 lat (msec): min=17, max=110, avg=33.99, stdev=21.28 00:16:10.435 clat percentiles (msec): 00:16:10.435 | 1.00th=[ 18], 5.00th=[ 22], 10.00th=[ 22], 20.00th=[ 23], 00:16:10.435 | 30.00th=[ 24], 40.00th=[ 24], 50.00th=[ 25], 60.00th=[ 26], 00:16:10.435 | 70.00th=[ 28], 80.00th=[ 33], 90.00th=[ 67], 95.00th=[ 91], 00:16:10.435 | 99.00th=[ 111], 99.50th=[ 111], 99.90th=[ 111], 99.95th=[ 111], 00:16:10.435 | 99.99th=[ 111] 00:16:10.435 bw ( KiB/s): min= 4096, max= 7432, per=11.34%, avg=5764.00, stdev=2358.91, samples=2 00:16:10.435 iops : min= 1024, max= 1858, avg=1441.00, stdev=589.73, samples=2 00:16:10.435 lat (msec) : 10=0.04%, 20=3.39%, 50=70.00%, 100=13.73%, 250=12.84% 00:16:10.435 cpu : usr=1.38%, sys=4.05%, ctx=106, majf=0, minf=15 00:16:10.435 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.3%, 16=0.6%, 32=1.2%, >=64=97.6% 00:16:10.435 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:10.435 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:16:10.435 issued rwts: total=1057,1536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:10.435 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:10.435 job2: (groupid=0, jobs=1): err= 0: pid=87543: Tue Dec 3 00:50:22 2024 00:16:10.435 read: IOPS=2507, BW=9.79MiB/s (10.3MB/s)(10.0MiB/1021msec) 00:16:10.435 slat (usec): min=5, max=15881, avg=157.83, stdev=1033.94 00:16:10.435 clat (usec): min=7767, max=45905, avg=20045.92, stdev=5429.46 00:16:10.435 lat (usec): min=7781, max=45930, avg=20203.76, stdev=5493.95 00:16:10.435 clat percentiles (usec): 00:16:10.435 | 1.00th=[12125], 5.00th=[14746], 10.00th=[15926], 20.00th=[16581], 00:16:10.435 | 30.00th=[17171], 40.00th=[17695], 50.00th=[18744], 60.00th=[19530], 00:16:10.435 | 70.00th=[20841], 80.00th=[21890], 90.00th=[24511], 95.00th=[31065], 00:16:10.435 | 99.00th=[43779], 99.50th=[44827], 99.90th=[45876], 99.95th=[45876], 00:16:10.435 | 99.99th=[45876] 00:16:10.435 write: IOPS=2944, BW=11.5MiB/s 
(12.1MB/s)(11.7MiB/1021msec); 0 zone resets 00:16:10.435 slat (usec): min=5, max=16707, avg=192.36, stdev=1087.70 00:16:10.435 clat (usec): min=6122, max=61564, avg=25998.72, stdev=11688.57 00:16:10.435 lat (usec): min=6149, max=61572, avg=26191.08, stdev=11783.65 00:16:10.435 clat percentiles (usec): 00:16:10.435 | 1.00th=[ 8586], 5.00th=[12911], 10.00th=[15401], 20.00th=[16909], 00:16:10.435 | 30.00th=[17695], 40.00th=[18482], 50.00th=[19530], 60.00th=[27657], 00:16:10.435 | 70.00th=[32375], 80.00th=[37487], 90.00th=[45351], 95.00th=[47449], 00:16:10.435 | 99.00th=[50594], 99.50th=[55837], 99.90th=[61604], 99.95th=[61604], 00:16:10.435 | 99.99th=[61604] 00:16:10.435 bw ( KiB/s): min=10736, max=12288, per=22.65%, avg=11512.00, stdev=1097.43, samples=2 00:16:10.435 iops : min= 2684, max= 3072, avg=2878.00, stdev=274.36, samples=2 00:16:10.435 lat (msec) : 10=1.29%, 20=56.00%, 50=41.92%, 100=0.79% 00:16:10.435 cpu : usr=2.55%, sys=7.55%, ctx=309, majf=0, minf=12 00:16:10.435 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.9% 00:16:10.435 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:10.435 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:16:10.435 issued rwts: total=2560,3006,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:10.435 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:10.435 job3: (groupid=0, jobs=1): err= 0: pid=87545: Tue Dec 3 00:50:22 2024 00:16:10.435 read: IOPS=3531, BW=13.8MiB/s (14.5MB/s)(14.0MiB/1015msec) 00:16:10.435 slat (usec): min=5, max=15027, avg=139.71, stdev=947.90 00:16:10.435 clat (usec): min=5779, max=35058, avg=18121.23, stdev=4177.22 00:16:10.435 lat (usec): min=5794, max=35067, avg=18260.95, stdev=4228.07 00:16:10.435 clat percentiles (usec): 00:16:10.435 | 1.00th=[10159], 5.00th=[13173], 10.00th=[13829], 20.00th=[15008], 00:16:10.435 | 30.00th=[15795], 40.00th=[16581], 50.00th=[17171], 60.00th=[17695], 00:16:10.435 | 70.00th=[19530], 80.00th=[21103], 90.00th=[22938], 95.00th=[26084], 00:16:10.435 | 99.00th=[32375], 99.50th=[33817], 99.90th=[34866], 99.95th=[34866], 00:16:10.436 | 99.99th=[34866] 00:16:10.436 write: IOPS=3765, BW=14.7MiB/s (15.4MB/s)(14.9MiB/1015msec); 0 zone resets 00:16:10.436 slat (usec): min=5, max=14985, avg=121.62, stdev=858.74 00:16:10.436 clat (usec): min=856, max=35034, avg=16640.81, stdev=3934.10 00:16:10.436 lat (usec): min=885, max=35046, avg=16762.43, stdev=4029.87 00:16:10.436 clat percentiles (usec): 00:16:10.436 | 1.00th=[ 4817], 5.00th=[ 8848], 10.00th=[10814], 20.00th=[14484], 00:16:10.436 | 30.00th=[15926], 40.00th=[16909], 50.00th=[17695], 60.00th=[17957], 00:16:10.436 | 70.00th=[18220], 80.00th=[19006], 90.00th=[19530], 95.00th=[19792], 00:16:10.436 | 99.00th=[30016], 99.50th=[31065], 99.90th=[32900], 99.95th=[33817], 00:16:10.436 | 99.99th=[34866] 00:16:10.436 bw ( KiB/s): min=13184, max=16376, per=29.08%, avg=14780.00, stdev=2257.08, samples=2 00:16:10.436 iops : min= 3296, max= 4094, avg=3695.00, stdev=564.27, samples=2 00:16:10.436 lat (usec) : 1000=0.14% 00:16:10.436 lat (msec) : 10=4.19%, 20=79.14%, 50=16.54% 00:16:10.436 cpu : usr=3.45%, sys=10.16%, ctx=350, majf=0, minf=13 00:16:10.436 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.1% 00:16:10.436 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:10.436 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:16:10.436 issued rwts: total=3584,3822,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:10.436 latency : 
target=0, window=0, percentile=100.00%, depth=128 00:16:10.436 00:16:10.436 Run status group 0 (all jobs): 00:16:10.436 READ: bw=44.4MiB/s (46.6MB/s), 4174KiB/s-17.1MiB/s (4274kB/s-18.0MB/s), io=45.4MiB (47.6MB), run=1006-1021msec 00:16:10.436 WRITE: bw=49.6MiB/s (52.0MB/s), 6065KiB/s-17.9MiB/s (6211kB/s-18.8MB/s), io=50.7MiB (53.1MB), run=1006-1021msec 00:16:10.436 00:16:10.436 Disk stats (read/write): 00:16:10.436 nvme0n1: ios=3633/4096, merge=0/0, ticks=47254/51601, in_queue=98855, util=87.59% 00:16:10.436 nvme0n2: ios=963/1024, merge=0/0, ticks=15469/10528, in_queue=25997, util=88.19% 00:16:10.436 nvme0n3: ios=2300/2560, merge=0/0, ticks=44575/59293, in_queue=103868, util=89.21% 00:16:10.436 nvme0n4: ios=3033/3114, merge=0/0, ticks=52596/49662, in_queue=102258, util=89.56% 00:16:10.436 00:50:22 -- target/fio.sh@55 -- # sync 00:16:10.436 00:50:22 -- target/fio.sh@59 -- # fio_pid=87561 00:16:10.436 00:50:22 -- target/fio.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:16:10.436 00:50:22 -- target/fio.sh@61 -- # sleep 3 00:16:10.436 [global] 00:16:10.436 thread=1 00:16:10.436 invalidate=1 00:16:10.436 rw=read 00:16:10.436 time_based=1 00:16:10.436 runtime=10 00:16:10.436 ioengine=libaio 00:16:10.436 direct=1 00:16:10.436 bs=4096 00:16:10.436 iodepth=1 00:16:10.436 norandommap=1 00:16:10.436 numjobs=1 00:16:10.436 00:16:10.436 [job0] 00:16:10.436 filename=/dev/nvme0n1 00:16:10.436 [job1] 00:16:10.436 filename=/dev/nvme0n2 00:16:10.436 [job2] 00:16:10.436 filename=/dev/nvme0n3 00:16:10.436 [job3] 00:16:10.436 filename=/dev/nvme0n4 00:16:10.436 Could not set queue depth (nvme0n1) 00:16:10.436 Could not set queue depth (nvme0n2) 00:16:10.436 Could not set queue depth (nvme0n3) 00:16:10.436 Could not set queue depth (nvme0n4) 00:16:10.695 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:16:10.695 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:16:10.695 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:16:10.695 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:16:10.695 fio-3.35 00:16:10.695 Starting 4 threads 00:16:13.983 00:50:25 -- target/fio.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_delete concat0 00:16:13.983 fio: pid=87604, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:16:13.983 fio: io_u error on file /dev/nvme0n4: Operation not supported: read offset=45785088, buflen=4096 00:16:13.983 00:50:26 -- target/fio.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_delete raid0 00:16:13.983 fio: pid=87603, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:16:13.983 fio: io_u error on file /dev/nvme0n3: Operation not supported: read offset=40611840, buflen=4096 00:16:13.983 00:50:26 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:16:13.983 00:50:26 -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:16:14.241 fio: pid=87601, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:16:14.241 fio: io_u error on file /dev/nvme0n1: Operation not supported: read offset=57466880, buflen=4096 00:16:14.241 00:50:26 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:16:14.241 
00:50:26 -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:16:14.500 fio: pid=87602, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:16:14.500 fio: io_u error on file /dev/nvme0n2: Operation not supported: read offset=49745920, buflen=4096 00:16:14.500 00:16:14.500 job0: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=87601: Tue Dec 3 00:50:26 2024 00:16:14.500 read: IOPS=4079, BW=15.9MiB/s (16.7MB/s)(54.8MiB/3439msec) 00:16:14.500 slat (usec): min=6, max=12523, avg=17.03, stdev=134.29 00:16:14.500 clat (usec): min=135, max=4183, avg=226.81, stdev=74.53 00:16:14.500 lat (usec): min=153, max=12746, avg=243.84, stdev=154.25 00:16:14.500 clat percentiles (usec): 00:16:14.500 | 1.00th=[ 161], 5.00th=[ 172], 10.00th=[ 180], 20.00th=[ 192], 00:16:14.500 | 30.00th=[ 204], 40.00th=[ 212], 50.00th=[ 219], 60.00th=[ 227], 00:16:14.500 | 70.00th=[ 237], 80.00th=[ 249], 90.00th=[ 273], 95.00th=[ 314], 00:16:14.500 | 99.00th=[ 383], 99.50th=[ 416], 99.90th=[ 725], 99.95th=[ 1745], 00:16:14.500 | 99.99th=[ 3097] 00:16:14.500 bw ( KiB/s): min=16256, max=17384, per=33.02%, avg=16901.00, stdev=492.62, samples=6 00:16:14.500 iops : min= 4064, max= 4346, avg=4225.17, stdev=123.21, samples=6 00:16:14.500 lat (usec) : 250=81.48%, 500=18.34%, 750=0.08%, 1000=0.01% 00:16:14.500 lat (msec) : 2=0.04%, 4=0.04%, 10=0.01% 00:16:14.500 cpu : usr=0.79%, sys=5.18%, ctx=14312, majf=0, minf=1 00:16:14.500 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:14.500 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:14.500 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:14.500 issued rwts: total=14031,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:14.500 latency : target=0, window=0, percentile=100.00%, depth=1 00:16:14.500 job1: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=87602: Tue Dec 3 00:50:26 2024 00:16:14.500 read: IOPS=3288, BW=12.8MiB/s (13.5MB/s)(47.4MiB/3694msec) 00:16:14.500 slat (usec): min=6, max=12129, avg=18.70, stdev=196.30 00:16:14.500 clat (usec): min=45, max=82095, avg=284.14, stdev=771.38 00:16:14.500 lat (usec): min=142, max=82108, avg=302.84, stdev=820.55 00:16:14.500 clat percentiles (usec): 00:16:14.500 | 1.00th=[ 157], 5.00th=[ 172], 10.00th=[ 182], 20.00th=[ 202], 00:16:14.500 | 30.00th=[ 227], 40.00th=[ 265], 50.00th=[ 289], 60.00th=[ 302], 00:16:14.500 | 70.00th=[ 318], 80.00th=[ 330], 90.00th=[ 351], 95.00th=[ 367], 00:16:14.500 | 99.00th=[ 429], 99.50th=[ 457], 99.90th=[ 914], 99.95th=[ 1926], 00:16:14.500 | 99.99th=[21365] 00:16:14.500 bw ( KiB/s): min=11896, max=17045, per=25.46%, avg=13030.57, stdev=1800.49, samples=7 00:16:14.500 iops : min= 2974, max= 4261, avg=3257.57, stdev=450.03, samples=7 00:16:14.500 lat (usec) : 50=0.01%, 100=0.01%, 250=36.28%, 500=63.43%, 750=0.16% 00:16:14.500 lat (usec) : 1000=0.02% 00:16:14.500 lat (msec) : 2=0.03%, 4=0.03%, 50=0.01%, 100=0.01% 00:16:14.500 cpu : usr=0.81%, sys=4.01%, ctx=12340, majf=0, minf=2 00:16:14.500 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:14.500 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:14.501 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:14.501 issued rwts: total=12146,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:14.501 latency : target=0, window=0, percentile=100.00%, 
depth=1 00:16:14.501 job2: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=87603: Tue Dec 3 00:50:26 2024 00:16:14.501 read: IOPS=3084, BW=12.0MiB/s (12.6MB/s)(38.7MiB/3215msec) 00:16:14.501 slat (usec): min=6, max=7706, avg=16.01, stdev=108.35 00:16:14.501 clat (usec): min=115, max=82385, avg=306.85, stdev=830.76 00:16:14.501 lat (usec): min=166, max=82399, avg=322.86, stdev=837.77 00:16:14.501 clat percentiles (usec): 00:16:14.501 | 1.00th=[ 172], 5.00th=[ 198], 10.00th=[ 210], 20.00th=[ 241], 00:16:14.501 | 30.00th=[ 273], 40.00th=[ 293], 50.00th=[ 306], 60.00th=[ 314], 00:16:14.501 | 70.00th=[ 326], 80.00th=[ 343], 90.00th=[ 363], 95.00th=[ 383], 00:16:14.501 | 99.00th=[ 437], 99.50th=[ 486], 99.90th=[ 1549], 99.95th=[ 2704], 00:16:14.501 | 99.99th=[82314] 00:16:14.501 bw ( KiB/s): min= 9104, max=16510, per=24.32%, avg=12446.33, stdev=2358.80, samples=6 00:16:14.501 iops : min= 2276, max= 4127, avg=3111.50, stdev=589.53, samples=6 00:16:14.501 lat (usec) : 250=22.38%, 500=77.22%, 750=0.25%, 1000=0.04% 00:16:14.501 lat (msec) : 2=0.03%, 4=0.05%, 10=0.01%, 100=0.01% 00:16:14.501 cpu : usr=0.81%, sys=3.70%, ctx=10056, majf=0, minf=1 00:16:14.501 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:14.501 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:14.501 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:14.501 issued rwts: total=9916,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:14.501 latency : target=0, window=0, percentile=100.00%, depth=1 00:16:14.501 job3: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=87604: Tue Dec 3 00:50:26 2024 00:16:14.501 read: IOPS=3815, BW=14.9MiB/s (15.6MB/s)(43.7MiB/2930msec) 00:16:14.501 slat (nsec): min=10358, max=65338, avg=15971.12, stdev=4506.01 00:16:14.501 clat (usec): min=157, max=851, avg=244.42, stdev=41.82 00:16:14.501 lat (usec): min=172, max=881, avg=260.39, stdev=41.62 00:16:14.501 clat percentiles (usec): 00:16:14.501 | 1.00th=[ 178], 5.00th=[ 200], 10.00th=[ 210], 20.00th=[ 219], 00:16:14.501 | 30.00th=[ 227], 40.00th=[ 233], 50.00th=[ 239], 60.00th=[ 245], 00:16:14.501 | 70.00th=[ 251], 80.00th=[ 262], 90.00th=[ 281], 95.00th=[ 302], 00:16:14.501 | 99.00th=[ 416], 99.50th=[ 461], 99.90th=[ 586], 99.95th=[ 603], 00:16:14.501 | 99.99th=[ 824] 00:16:14.501 bw ( KiB/s): min=15064, max=16686, per=30.46%, avg=15590.00, stdev=670.50, samples=5 00:16:14.501 iops : min= 3766, max= 4171, avg=3897.40, stdev=167.42, samples=5 00:16:14.501 lat (usec) : 250=67.55%, 500=32.09%, 750=0.34%, 1000=0.02% 00:16:14.501 cpu : usr=0.75%, sys=5.02%, ctx=11181, majf=0, minf=2 00:16:14.501 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:14.501 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:14.501 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:14.501 issued rwts: total=11179,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:14.501 latency : target=0, window=0, percentile=100.00%, depth=1 00:16:14.501 00:16:14.501 Run status group 0 (all jobs): 00:16:14.501 READ: bw=50.0MiB/s (52.4MB/s), 12.0MiB/s-15.9MiB/s (12.6MB/s-16.7MB/s), io=185MiB (194MB), run=2930-3694msec 00:16:14.501 00:16:14.501 Disk stats (read/write): 00:16:14.501 nvme0n1: ios=13778/0, merge=0/0, ticks=3160/0, in_queue=3160, util=95.56% 00:16:14.501 nvme0n2: ios=11763/0, merge=0/0, ticks=3392/0, in_queue=3392, util=95.50% 
00:16:14.501 nvme0n3: ios=9634/0, merge=0/0, ticks=2909/0, in_queue=2909, util=96.15% 00:16:14.501 nvme0n4: ios=11036/0, merge=0/0, ticks=2722/0, in_queue=2722, util=96.76% 00:16:14.501 00:50:26 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:16:14.501 00:50:26 -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:16:14.760 00:50:27 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:16:14.760 00:50:27 -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:16:15.018 00:50:27 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:16:15.018 00:50:27 -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:16:15.276 00:50:27 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:16:15.276 00:50:27 -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:16:15.276 00:50:27 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:16:15.276 00:50:27 -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:16:15.534 00:50:28 -- target/fio.sh@69 -- # fio_status=0 00:16:15.534 00:50:28 -- target/fio.sh@70 -- # wait 87561 00:16:15.534 00:50:28 -- target/fio.sh@70 -- # fio_status=4 00:16:15.534 00:50:28 -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:16:15.534 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:15.534 00:50:28 -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:16:15.534 00:50:28 -- common/autotest_common.sh@1208 -- # local i=0 00:16:15.534 00:50:28 -- common/autotest_common.sh@1209 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:15.534 00:50:28 -- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:16:15.793 00:50:28 -- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:16:15.793 00:50:28 -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:15.793 nvmf hotplug test: fio failed as expected 00:16:15.793 00:50:28 -- common/autotest_common.sh@1220 -- # return 0 00:16:15.793 00:50:28 -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:16:15.793 00:50:28 -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:16:15.793 00:50:28 -- target/fio.sh@83 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:16.051 00:50:28 -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:16:16.051 00:50:28 -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:16:16.051 00:50:28 -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:16:16.051 00:50:28 -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:16:16.051 00:50:28 -- target/fio.sh@91 -- # nvmftestfini 00:16:16.051 00:50:28 -- nvmf/common.sh@476 -- # nvmfcleanup 00:16:16.051 00:50:28 -- nvmf/common.sh@116 -- # sync 00:16:16.051 00:50:28 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:16:16.051 00:50:28 -- nvmf/common.sh@119 -- # set +e 00:16:16.051 00:50:28 -- nvmf/common.sh@120 -- # for i in {1..20} 00:16:16.051 00:50:28 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:16:16.051 rmmod nvme_tcp 00:16:16.051 rmmod nvme_fabrics 00:16:16.051 rmmod nvme_keyring 00:16:16.051 00:50:28 -- nvmf/common.sh@122 -- # modprobe 
-v -r nvme-fabrics 00:16:16.051 00:50:28 -- nvmf/common.sh@123 -- # set -e 00:16:16.051 00:50:28 -- nvmf/common.sh@124 -- # return 0 00:16:16.051 00:50:28 -- nvmf/common.sh@477 -- # '[' -n 87061 ']' 00:16:16.051 00:50:28 -- nvmf/common.sh@478 -- # killprocess 87061 00:16:16.051 00:50:28 -- common/autotest_common.sh@936 -- # '[' -z 87061 ']' 00:16:16.051 00:50:28 -- common/autotest_common.sh@940 -- # kill -0 87061 00:16:16.051 00:50:28 -- common/autotest_common.sh@941 -- # uname 00:16:16.051 00:50:28 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:16:16.051 00:50:28 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 87061 00:16:16.051 killing process with pid 87061 00:16:16.051 00:50:28 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:16:16.051 00:50:28 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:16:16.051 00:50:28 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 87061' 00:16:16.051 00:50:28 -- common/autotest_common.sh@955 -- # kill 87061 00:16:16.051 00:50:28 -- common/autotest_common.sh@960 -- # wait 87061 00:16:16.310 00:50:28 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:16:16.310 00:50:28 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:16:16.310 00:50:28 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:16:16.310 00:50:28 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:16:16.310 00:50:28 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:16:16.310 00:50:28 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:16.310 00:50:28 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:16.310 00:50:28 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:16.310 00:50:28 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:16:16.310 00:16:16.310 real 0m19.639s 00:16:16.310 user 1m14.807s 00:16:16.310 sys 0m8.420s 00:16:16.310 00:50:28 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:16:16.310 00:50:28 -- common/autotest_common.sh@10 -- # set +x 00:16:16.310 ************************************ 00:16:16.310 END TEST nvmf_fio_target 00:16:16.310 ************************************ 00:16:16.570 00:50:28 -- nvmf/nvmf.sh@55 -- # run_test nvmf_bdevio /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:16:16.570 00:50:28 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:16:16.570 00:50:28 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:16:16.570 00:50:28 -- common/autotest_common.sh@10 -- # set +x 00:16:16.570 ************************************ 00:16:16.570 START TEST nvmf_bdevio 00:16:16.570 ************************************ 00:16:16.570 00:50:28 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:16:16.570 * Looking for test storage... 
00:16:16.570 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:16:16.570 00:50:28 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:16:16.570 00:50:28 -- common/autotest_common.sh@1690 -- # lcov --version 00:16:16.570 00:50:28 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:16:16.570 00:50:28 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:16:16.570 00:50:28 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:16:16.570 00:50:28 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:16:16.570 00:50:28 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:16:16.570 00:50:28 -- scripts/common.sh@335 -- # IFS=.-: 00:16:16.570 00:50:28 -- scripts/common.sh@335 -- # read -ra ver1 00:16:16.570 00:50:29 -- scripts/common.sh@336 -- # IFS=.-: 00:16:16.570 00:50:29 -- scripts/common.sh@336 -- # read -ra ver2 00:16:16.570 00:50:29 -- scripts/common.sh@337 -- # local 'op=<' 00:16:16.570 00:50:29 -- scripts/common.sh@339 -- # ver1_l=2 00:16:16.570 00:50:29 -- scripts/common.sh@340 -- # ver2_l=1 00:16:16.570 00:50:29 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:16:16.570 00:50:29 -- scripts/common.sh@343 -- # case "$op" in 00:16:16.570 00:50:29 -- scripts/common.sh@344 -- # : 1 00:16:16.570 00:50:29 -- scripts/common.sh@363 -- # (( v = 0 )) 00:16:16.570 00:50:29 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:16:16.570 00:50:29 -- scripts/common.sh@364 -- # decimal 1 00:16:16.570 00:50:29 -- scripts/common.sh@352 -- # local d=1 00:16:16.570 00:50:29 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:16.570 00:50:29 -- scripts/common.sh@354 -- # echo 1 00:16:16.570 00:50:29 -- scripts/common.sh@364 -- # ver1[v]=1 00:16:16.570 00:50:29 -- scripts/common.sh@365 -- # decimal 2 00:16:16.570 00:50:29 -- scripts/common.sh@352 -- # local d=2 00:16:16.570 00:50:29 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:16.570 00:50:29 -- scripts/common.sh@354 -- # echo 2 00:16:16.570 00:50:29 -- scripts/common.sh@365 -- # ver2[v]=2 00:16:16.570 00:50:29 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:16:16.570 00:50:29 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:16:16.570 00:50:29 -- scripts/common.sh@367 -- # return 0 00:16:16.570 00:50:29 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:16.570 00:50:29 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:16:16.570 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:16.570 --rc genhtml_branch_coverage=1 00:16:16.570 --rc genhtml_function_coverage=1 00:16:16.570 --rc genhtml_legend=1 00:16:16.570 --rc geninfo_all_blocks=1 00:16:16.570 --rc geninfo_unexecuted_blocks=1 00:16:16.570 00:16:16.570 ' 00:16:16.570 00:50:29 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:16:16.570 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:16.570 --rc genhtml_branch_coverage=1 00:16:16.570 --rc genhtml_function_coverage=1 00:16:16.570 --rc genhtml_legend=1 00:16:16.570 --rc geninfo_all_blocks=1 00:16:16.570 --rc geninfo_unexecuted_blocks=1 00:16:16.570 00:16:16.570 ' 00:16:16.570 00:50:29 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:16:16.570 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:16.570 --rc genhtml_branch_coverage=1 00:16:16.570 --rc genhtml_function_coverage=1 00:16:16.570 --rc genhtml_legend=1 00:16:16.570 --rc geninfo_all_blocks=1 00:16:16.570 --rc geninfo_unexecuted_blocks=1 00:16:16.570 00:16:16.570 ' 00:16:16.570 
00:50:29 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:16:16.570 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:16.570 --rc genhtml_branch_coverage=1 00:16:16.570 --rc genhtml_function_coverage=1 00:16:16.570 --rc genhtml_legend=1 00:16:16.570 --rc geninfo_all_blocks=1 00:16:16.570 --rc geninfo_unexecuted_blocks=1 00:16:16.570 00:16:16.570 ' 00:16:16.570 00:50:29 -- target/bdevio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:16:16.570 00:50:29 -- nvmf/common.sh@7 -- # uname -s 00:16:16.570 00:50:29 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:16.570 00:50:29 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:16.570 00:50:29 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:16.570 00:50:29 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:16.570 00:50:29 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:16.570 00:50:29 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:16.570 00:50:29 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:16.570 00:50:29 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:16.570 00:50:29 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:16.570 00:50:29 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:16.570 00:50:29 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:15939434-fa82-47b6-ae7c-b62ba203cee8 00:16:16.571 00:50:29 -- nvmf/common.sh@18 -- # NVME_HOSTID=15939434-fa82-47b6-ae7c-b62ba203cee8 00:16:16.571 00:50:29 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:16.571 00:50:29 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:16.571 00:50:29 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:16:16.571 00:50:29 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:16:16.571 00:50:29 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:16.571 00:50:29 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:16.571 00:50:29 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:16.571 00:50:29 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:16.571 00:50:29 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:16.571 00:50:29 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:16.571 00:50:29 -- paths/export.sh@5 -- # export PATH 00:16:16.571 00:50:29 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:16.571 00:50:29 -- nvmf/common.sh@46 -- # : 0 00:16:16.571 00:50:29 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:16:16.571 00:50:29 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:16:16.571 00:50:29 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:16:16.571 00:50:29 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:16.571 00:50:29 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:16.571 00:50:29 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:16:16.571 00:50:29 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:16:16.571 00:50:29 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:16:16.571 00:50:29 -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:16:16.571 00:50:29 -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:16:16.571 00:50:29 -- target/bdevio.sh@14 -- # nvmftestinit 00:16:16.571 00:50:29 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:16:16.571 00:50:29 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:16.571 00:50:29 -- nvmf/common.sh@436 -- # prepare_net_devs 00:16:16.571 00:50:29 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:16:16.571 00:50:29 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:16:16.571 00:50:29 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:16.571 00:50:29 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:16.571 00:50:29 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:16.571 00:50:29 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:16:16.571 00:50:29 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:16:16.571 00:50:29 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:16:16.571 00:50:29 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:16:16.571 00:50:29 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:16:16.571 00:50:29 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:16:16.571 00:50:29 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:16.571 00:50:29 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:16.571 00:50:29 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:16:16.571 00:50:29 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:16:16.571 00:50:29 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:16:16.571 00:50:29 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:16:16.571 00:50:29 -- nvmf/common.sh@146 -- # 
NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:16:16.571 00:50:29 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:16.571 00:50:29 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:16:16.571 00:50:29 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:16:16.571 00:50:29 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:16:16.571 00:50:29 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:16:16.571 00:50:29 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:16:16.571 00:50:29 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:16:16.571 Cannot find device "nvmf_tgt_br" 00:16:16.571 00:50:29 -- nvmf/common.sh@154 -- # true 00:16:16.571 00:50:29 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:16:16.830 Cannot find device "nvmf_tgt_br2" 00:16:16.830 00:50:29 -- nvmf/common.sh@155 -- # true 00:16:16.830 00:50:29 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:16:16.830 00:50:29 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:16:16.830 Cannot find device "nvmf_tgt_br" 00:16:16.830 00:50:29 -- nvmf/common.sh@157 -- # true 00:16:16.830 00:50:29 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:16:16.830 Cannot find device "nvmf_tgt_br2" 00:16:16.830 00:50:29 -- nvmf/common.sh@158 -- # true 00:16:16.830 00:50:29 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:16:16.830 00:50:29 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:16:16.830 00:50:29 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:16:16.830 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:16.830 00:50:29 -- nvmf/common.sh@161 -- # true 00:16:16.830 00:50:29 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:16:16.830 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:16.830 00:50:29 -- nvmf/common.sh@162 -- # true 00:16:16.830 00:50:29 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:16:16.830 00:50:29 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:16:16.830 00:50:29 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:16:16.830 00:50:29 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:16:16.830 00:50:29 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:16:16.830 00:50:29 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:16:16.830 00:50:29 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:16:16.830 00:50:29 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:16:16.830 00:50:29 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:16:16.830 00:50:29 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:16:16.830 00:50:29 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:16:16.830 00:50:29 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:16:16.830 00:50:29 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:16:16.830 00:50:29 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:16:16.830 00:50:29 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:16:16.830 00:50:29 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip 
link set lo up 00:16:16.830 00:50:29 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:16:16.830 00:50:29 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:16:16.830 00:50:29 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:16:16.830 00:50:29 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:16:16.830 00:50:29 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:16:16.831 00:50:29 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:16:16.831 00:50:29 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:16:16.831 00:50:29 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:16:16.831 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:16.831 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.088 ms 00:16:16.831 00:16:16.831 --- 10.0.0.2 ping statistics --- 00:16:16.831 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:16.831 rtt min/avg/max/mdev = 0.088/0.088/0.088/0.000 ms 00:16:16.831 00:50:29 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:16:17.090 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:16:17.090 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.040 ms 00:16:17.090 00:16:17.090 --- 10.0.0.3 ping statistics --- 00:16:17.090 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:17.090 rtt min/avg/max/mdev = 0.040/0.040/0.040/0.000 ms 00:16:17.090 00:50:29 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:16:17.090 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:16:17.090 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.044 ms 00:16:17.090 00:16:17.090 --- 10.0.0.1 ping statistics --- 00:16:17.090 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:17.090 rtt min/avg/max/mdev = 0.044/0.044/0.044/0.000 ms 00:16:17.090 00:50:29 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:17.090 00:50:29 -- nvmf/common.sh@421 -- # return 0 00:16:17.090 00:50:29 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:16:17.090 00:50:29 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:17.090 00:50:29 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:16:17.090 00:50:29 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:16:17.090 00:50:29 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:17.090 00:50:29 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:16:17.090 00:50:29 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:16:17.090 00:50:29 -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:16:17.090 00:50:29 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:16:17.090 00:50:29 -- common/autotest_common.sh@722 -- # xtrace_disable 00:16:17.090 00:50:29 -- common/autotest_common.sh@10 -- # set +x 00:16:17.090 00:50:29 -- nvmf/common.sh@469 -- # nvmfpid=87938 00:16:17.090 00:50:29 -- nvmf/common.sh@470 -- # waitforlisten 87938 00:16:17.090 00:50:29 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78 00:16:17.090 00:50:29 -- common/autotest_common.sh@829 -- # '[' -z 87938 ']' 00:16:17.090 00:50:29 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:17.090 00:50:29 -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:17.090 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
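The nvmf_veth_init steps traced above build the bridged veth topology the rest of this test rides on: the initiator stays on the host at 10.0.0.1, the target runs inside the nvmf_tgt_ns_spdk namespace at 10.0.0.2, and both veth pairs hang off the nvmf_br bridge with NVMe/TCP port 4420 allowed through. A condensed, root-only sketch of that layout follows, with interface names and addresses copied from the trace; it is illustrative only and omits the second target interface (nvmf_tgt_if2, 10.0.0.3) and all error handling.

# Rebuild the test topology shown in the trace above.
ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br     # initiator-side pair
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br       # target-side pair
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk                # target end lives in the netns
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
ip link set nvmf_init_if up
ip link set nvmf_init_br up
ip link set nvmf_tgt_br up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up
ip link add nvmf_br type bridge
ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br                       # host ends of both pairs join the bridge
ip link set nvmf_tgt_br master nvmf_br
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
ping -c 1 10.0.0.2                                            # same reachability check as in the log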
00:16:17.090 00:50:29 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:17.090 00:50:29 -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:17.090 00:50:29 -- common/autotest_common.sh@10 -- # set +x 00:16:17.090 [2024-12-03 00:50:29.434083] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:16:17.090 [2024-12-03 00:50:29.434164] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:17.090 [2024-12-03 00:50:29.574173] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:16:17.349 [2024-12-03 00:50:29.636345] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:16:17.349 [2024-12-03 00:50:29.636505] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:17.349 [2024-12-03 00:50:29.636519] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:17.349 [2024-12-03 00:50:29.636527] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:17.349 [2024-12-03 00:50:29.636700] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:16:17.349 [2024-12-03 00:50:29.637702] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5 00:16:17.349 [2024-12-03 00:50:29.637839] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6 00:16:17.349 [2024-12-03 00:50:29.637843] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:16:18.285 00:50:30 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:18.285 00:50:30 -- common/autotest_common.sh@862 -- # return 0 00:16:18.285 00:50:30 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:16:18.285 00:50:30 -- common/autotest_common.sh@728 -- # xtrace_disable 00:16:18.285 00:50:30 -- common/autotest_common.sh@10 -- # set +x 00:16:18.285 00:50:30 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:18.285 00:50:30 -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:16:18.285 00:50:30 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:18.285 00:50:30 -- common/autotest_common.sh@10 -- # set +x 00:16:18.285 [2024-12-03 00:50:30.505605] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:18.285 00:50:30 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:18.285 00:50:30 -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:16:18.285 00:50:30 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:18.285 00:50:30 -- common/autotest_common.sh@10 -- # set +x 00:16:18.285 Malloc0 00:16:18.285 00:50:30 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:18.285 00:50:30 -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:16:18.285 00:50:30 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:18.285 00:50:30 -- common/autotest_common.sh@10 -- # set +x 00:16:18.285 00:50:30 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:18.285 00:50:30 -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:16:18.285 00:50:30 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:18.285 
00:50:30 -- common/autotest_common.sh@10 -- # set +x 00:16:18.285 00:50:30 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:18.285 00:50:30 -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:18.285 00:50:30 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:18.285 00:50:30 -- common/autotest_common.sh@10 -- # set +x 00:16:18.285 [2024-12-03 00:50:30.594249] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:18.285 00:50:30 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:18.285 00:50:30 -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:16:18.285 00:50:30 -- nvmf/common.sh@520 -- # config=() 00:16:18.285 00:50:30 -- target/bdevio.sh@24 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:16:18.285 00:50:30 -- nvmf/common.sh@520 -- # local subsystem config 00:16:18.285 00:50:30 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:16:18.285 00:50:30 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:16:18.285 { 00:16:18.285 "params": { 00:16:18.285 "name": "Nvme$subsystem", 00:16:18.285 "trtype": "$TEST_TRANSPORT", 00:16:18.285 "traddr": "$NVMF_FIRST_TARGET_IP", 00:16:18.285 "adrfam": "ipv4", 00:16:18.285 "trsvcid": "$NVMF_PORT", 00:16:18.285 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:16:18.285 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:16:18.285 "hdgst": ${hdgst:-false}, 00:16:18.285 "ddgst": ${ddgst:-false} 00:16:18.285 }, 00:16:18.285 "method": "bdev_nvme_attach_controller" 00:16:18.285 } 00:16:18.285 EOF 00:16:18.285 )") 00:16:18.285 00:50:30 -- nvmf/common.sh@542 -- # cat 00:16:18.285 00:50:30 -- nvmf/common.sh@544 -- # jq . 00:16:18.285 00:50:30 -- nvmf/common.sh@545 -- # IFS=, 00:16:18.285 00:50:30 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:16:18.285 "params": { 00:16:18.285 "name": "Nvme1", 00:16:18.285 "trtype": "tcp", 00:16:18.285 "traddr": "10.0.0.2", 00:16:18.285 "adrfam": "ipv4", 00:16:18.285 "trsvcid": "4420", 00:16:18.285 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:16:18.285 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:16:18.285 "hdgst": false, 00:16:18.285 "ddgst": false 00:16:18.285 }, 00:16:18.285 "method": "bdev_nvme_attach_controller" 00:16:18.285 }' 00:16:18.285 [2024-12-03 00:50:30.657655] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:16:18.285 [2024-12-03 00:50:30.657760] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid87992 ] 00:16:18.544 [2024-12-03 00:50:30.802250] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:16:18.544 [2024-12-03 00:50:30.886221] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:16:18.544 [2024-12-03 00:50:30.886358] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:16:18.544 [2024-12-03 00:50:30.886368] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:18.803 [2024-12-03 00:50:31.097733] rpc.c: 181:spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
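The rpc_cmd calls traced in this block provision the target end to end, and gen_nvmf_target_json emits the bdev_nvme_attach_controller config printed just above, which bdevio reads over /dev/fd/62. (The rpc.c socket-in-use errors surrounding this point are expected: bdevio tries to bring up its own RPC server on the default /var/tmp/spdk.sock, which the running nvmf_tgt already holds, and the test never uses bdevio's RPC server.) For illustration only, the same provisioning spelled out against scripts/rpc.py, the client that rpc_cmd wraps in the test framework, with every value taken from the trace:

# Provision the NVMe-oF/TCP target as the trace does, against the running nvmf_tgt.
RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
$RPC nvmf_create_transport -t tcp -o -u 8192                  # TCP transport, 8 KiB in-capsule data
$RPC bdev_malloc_create 64 512 -b Malloc0                     # 64 MiB malloc bdev, 512-byte blocks
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
# bdevio then plays the initiator: the generated JSON attaches controller Nvme1 to
# 10.0.0.2:4420 (subnqn cnode1, hostnqn host1, digests off), producing bdev Nvme1n1,
# which the "Suite: bdevio tests on: Nvme1n1" run below exercises.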
00:16:18.803 [2024-12-03 00:50:31.097797] rpc.c: 90:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:16:18.803 I/O targets: 00:16:18.803 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:16:18.803 00:16:18.803 00:16:18.804 CUnit - A unit testing framework for C - Version 2.1-3 00:16:18.804 http://cunit.sourceforge.net/ 00:16:18.804 00:16:18.804 00:16:18.804 Suite: bdevio tests on: Nvme1n1 00:16:18.804 Test: blockdev write read block ...passed 00:16:18.804 Test: blockdev write zeroes read block ...passed 00:16:18.804 Test: blockdev write zeroes read no split ...passed 00:16:18.804 Test: blockdev write zeroes read split ...passed 00:16:18.804 Test: blockdev write zeroes read split partial ...passed 00:16:18.804 Test: blockdev reset ...[2024-12-03 00:50:31.212205] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:18.804 [2024-12-03 00:50:31.212307] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x106aed0 (9): Bad file descriptor 00:16:18.804 [2024-12-03 00:50:31.226749] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:16:18.804 passed 00:16:18.804 Test: blockdev write read 8 blocks ...passed 00:16:18.804 Test: blockdev write read size > 128k ...passed 00:16:18.804 Test: blockdev write read invalid size ...passed 00:16:18.804 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:16:18.804 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:16:18.804 Test: blockdev write read max offset ...passed 00:16:19.063 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:16:19.063 Test: blockdev writev readv 8 blocks ...passed 00:16:19.063 Test: blockdev writev readv 30 x 1block ...passed 00:16:19.063 Test: blockdev writev readv block ...passed 00:16:19.063 Test: blockdev writev readv size > 128k ...passed 00:16:19.063 Test: blockdev writev readv size > 128k in two iovs ...passed 00:16:19.063 Test: blockdev comparev and writev ...[2024-12-03 00:50:31.397013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:16:19.063 [2024-12-03 00:50:31.397050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:19.063 [2024-12-03 00:50:31.397077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:16:19.063 [2024-12-03 00:50:31.397086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:19.063 [2024-12-03 00:50:31.397386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:16:19.063 [2024-12-03 00:50:31.397421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:16:19.063 [2024-12-03 00:50:31.397449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:16:19.063 [2024-12-03 00:50:31.397459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:16:19.063 [2024-12-03 00:50:31.397898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE 
sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:16:19.063 [2024-12-03 00:50:31.397923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:16:19.063 [2024-12-03 00:50:31.397938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:16:19.063 [2024-12-03 00:50:31.397947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:16:19.063 [2024-12-03 00:50:31.398296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:16:19.063 [2024-12-03 00:50:31.398319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:16:19.063 [2024-12-03 00:50:31.398335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:16:19.063 [2024-12-03 00:50:31.398344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:16:19.063 passed 00:16:19.063 Test: blockdev nvme passthru rw ...passed 00:16:19.063 Test: blockdev nvme passthru vendor specific ...[2024-12-03 00:50:31.480800] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:16:19.063 [2024-12-03 00:50:31.480832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:16:19.063 [2024-12-03 00:50:31.480962] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:16:19.063 [2024-12-03 00:50:31.480976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:16:19.063 [2024-12-03 00:50:31.481097] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:16:19.063 [2024-12-03 00:50:31.481130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:16:19.063 passed 00:16:19.063 Test: blockdev nvme admin passthru ...[2024-12-03 00:50:31.481260] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:16:19.063 [2024-12-03 00:50:31.481278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:16:19.063 passed 00:16:19.063 Test: blockdev copy ...passed 00:16:19.063 00:16:19.063 Run Summary: Type Total Ran Passed Failed Inactive 00:16:19.063 suites 1 1 n/a 0 0 00:16:19.063 tests 23 23 23 0 0 00:16:19.063 asserts 152 152 152 0 n/a 00:16:19.063 00:16:19.063 Elapsed time = 0.871 seconds 00:16:19.322 00:50:31 -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:19.322 00:50:31 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:19.322 00:50:31 -- common/autotest_common.sh@10 -- # set +x 00:16:19.322 00:50:31 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:19.322 00:50:31 -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:16:19.322 00:50:31 -- target/bdevio.sh@30 -- # nvmftestfini 00:16:19.322 00:50:31 -- nvmf/common.sh@476 
-- # nvmfcleanup 00:16:19.322 00:50:31 -- nvmf/common.sh@116 -- # sync 00:16:19.580 00:50:31 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:16:19.580 00:50:31 -- nvmf/common.sh@119 -- # set +e 00:16:19.580 00:50:31 -- nvmf/common.sh@120 -- # for i in {1..20} 00:16:19.580 00:50:31 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:16:19.580 rmmod nvme_tcp 00:16:19.580 rmmod nvme_fabrics 00:16:19.580 rmmod nvme_keyring 00:16:19.580 00:50:31 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:16:19.580 00:50:31 -- nvmf/common.sh@123 -- # set -e 00:16:19.580 00:50:31 -- nvmf/common.sh@124 -- # return 0 00:16:19.580 00:50:31 -- nvmf/common.sh@477 -- # '[' -n 87938 ']' 00:16:19.580 00:50:31 -- nvmf/common.sh@478 -- # killprocess 87938 00:16:19.580 00:50:31 -- common/autotest_common.sh@936 -- # '[' -z 87938 ']' 00:16:19.580 00:50:31 -- common/autotest_common.sh@940 -- # kill -0 87938 00:16:19.580 00:50:31 -- common/autotest_common.sh@941 -- # uname 00:16:19.580 00:50:31 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:16:19.580 00:50:31 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 87938 00:16:19.580 00:50:31 -- common/autotest_common.sh@942 -- # process_name=reactor_3 00:16:19.580 00:50:31 -- common/autotest_common.sh@946 -- # '[' reactor_3 = sudo ']' 00:16:19.580 killing process with pid 87938 00:16:19.580 00:50:31 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 87938' 00:16:19.580 00:50:31 -- common/autotest_common.sh@955 -- # kill 87938 00:16:19.580 00:50:31 -- common/autotest_common.sh@960 -- # wait 87938 00:16:19.839 00:50:32 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:16:19.839 00:50:32 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:16:19.839 00:50:32 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:16:19.839 00:50:32 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:16:19.839 00:50:32 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:16:19.839 00:50:32 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:19.839 00:50:32 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:19.839 00:50:32 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:19.839 00:50:32 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:16:19.839 00:16:19.839 real 0m3.431s 00:16:19.839 user 0m12.479s 00:16:19.839 sys 0m0.893s 00:16:19.839 00:50:32 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:16:19.839 00:50:32 -- common/autotest_common.sh@10 -- # set +x 00:16:19.839 ************************************ 00:16:19.839 END TEST nvmf_bdevio 00:16:19.839 ************************************ 00:16:19.839 00:50:32 -- nvmf/nvmf.sh@57 -- # '[' tcp = tcp ']' 00:16:19.839 00:50:32 -- nvmf/nvmf.sh@58 -- # run_test nvmf_bdevio_no_huge /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:16:19.839 00:50:32 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:16:19.839 00:50:32 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:16:19.839 00:50:32 -- common/autotest_common.sh@10 -- # set +x 00:16:19.839 ************************************ 00:16:19.839 START TEST nvmf_bdevio_no_huge 00:16:19.839 ************************************ 00:16:19.839 00:50:32 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:16:20.096 * Looking for test storage... 
00:16:20.096 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:16:20.096 00:50:32 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:16:20.096 00:50:32 -- common/autotest_common.sh@1690 -- # lcov --version 00:16:20.096 00:50:32 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:16:20.096 00:50:32 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:16:20.096 00:50:32 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:16:20.096 00:50:32 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:16:20.097 00:50:32 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:16:20.097 00:50:32 -- scripts/common.sh@335 -- # IFS=.-: 00:16:20.097 00:50:32 -- scripts/common.sh@335 -- # read -ra ver1 00:16:20.097 00:50:32 -- scripts/common.sh@336 -- # IFS=.-: 00:16:20.097 00:50:32 -- scripts/common.sh@336 -- # read -ra ver2 00:16:20.097 00:50:32 -- scripts/common.sh@337 -- # local 'op=<' 00:16:20.097 00:50:32 -- scripts/common.sh@339 -- # ver1_l=2 00:16:20.097 00:50:32 -- scripts/common.sh@340 -- # ver2_l=1 00:16:20.097 00:50:32 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:16:20.097 00:50:32 -- scripts/common.sh@343 -- # case "$op" in 00:16:20.097 00:50:32 -- scripts/common.sh@344 -- # : 1 00:16:20.097 00:50:32 -- scripts/common.sh@363 -- # (( v = 0 )) 00:16:20.097 00:50:32 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:16:20.097 00:50:32 -- scripts/common.sh@364 -- # decimal 1 00:16:20.097 00:50:32 -- scripts/common.sh@352 -- # local d=1 00:16:20.097 00:50:32 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:20.097 00:50:32 -- scripts/common.sh@354 -- # echo 1 00:16:20.097 00:50:32 -- scripts/common.sh@364 -- # ver1[v]=1 00:16:20.097 00:50:32 -- scripts/common.sh@365 -- # decimal 2 00:16:20.097 00:50:32 -- scripts/common.sh@352 -- # local d=2 00:16:20.097 00:50:32 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:20.097 00:50:32 -- scripts/common.sh@354 -- # echo 2 00:16:20.097 00:50:32 -- scripts/common.sh@365 -- # ver2[v]=2 00:16:20.097 00:50:32 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:16:20.097 00:50:32 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:16:20.097 00:50:32 -- scripts/common.sh@367 -- # return 0 00:16:20.097 00:50:32 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:20.097 00:50:32 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:16:20.097 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:20.097 --rc genhtml_branch_coverage=1 00:16:20.097 --rc genhtml_function_coverage=1 00:16:20.097 --rc genhtml_legend=1 00:16:20.097 --rc geninfo_all_blocks=1 00:16:20.097 --rc geninfo_unexecuted_blocks=1 00:16:20.097 00:16:20.097 ' 00:16:20.097 00:50:32 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:16:20.097 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:20.097 --rc genhtml_branch_coverage=1 00:16:20.097 --rc genhtml_function_coverage=1 00:16:20.097 --rc genhtml_legend=1 00:16:20.097 --rc geninfo_all_blocks=1 00:16:20.097 --rc geninfo_unexecuted_blocks=1 00:16:20.097 00:16:20.097 ' 00:16:20.097 00:50:32 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:16:20.097 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:20.097 --rc genhtml_branch_coverage=1 00:16:20.097 --rc genhtml_function_coverage=1 00:16:20.097 --rc genhtml_legend=1 00:16:20.097 --rc geninfo_all_blocks=1 00:16:20.097 --rc geninfo_unexecuted_blocks=1 00:16:20.097 00:16:20.097 ' 00:16:20.097 
00:50:32 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:16:20.097 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:20.097 --rc genhtml_branch_coverage=1 00:16:20.097 --rc genhtml_function_coverage=1 00:16:20.097 --rc genhtml_legend=1 00:16:20.097 --rc geninfo_all_blocks=1 00:16:20.097 --rc geninfo_unexecuted_blocks=1 00:16:20.097 00:16:20.097 ' 00:16:20.097 00:50:32 -- target/bdevio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:16:20.097 00:50:32 -- nvmf/common.sh@7 -- # uname -s 00:16:20.097 00:50:32 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:20.097 00:50:32 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:20.097 00:50:32 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:20.097 00:50:32 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:20.097 00:50:32 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:20.097 00:50:32 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:20.097 00:50:32 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:20.097 00:50:32 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:20.097 00:50:32 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:20.097 00:50:32 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:20.097 00:50:32 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:15939434-fa82-47b6-ae7c-b62ba203cee8 00:16:20.097 00:50:32 -- nvmf/common.sh@18 -- # NVME_HOSTID=15939434-fa82-47b6-ae7c-b62ba203cee8 00:16:20.097 00:50:32 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:20.097 00:50:32 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:20.097 00:50:32 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:16:20.097 00:50:32 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:16:20.097 00:50:32 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:20.097 00:50:32 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:20.097 00:50:32 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:20.097 00:50:32 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:20.097 00:50:32 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:20.097 00:50:32 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:20.097 00:50:32 -- paths/export.sh@5 -- # export PATH 00:16:20.097 00:50:32 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:20.097 00:50:32 -- nvmf/common.sh@46 -- # : 0 00:16:20.097 00:50:32 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:16:20.097 00:50:32 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:16:20.097 00:50:32 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:16:20.097 00:50:32 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:20.097 00:50:32 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:20.097 00:50:32 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:16:20.097 00:50:32 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:16:20.097 00:50:32 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:16:20.097 00:50:32 -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:16:20.097 00:50:32 -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:16:20.097 00:50:32 -- target/bdevio.sh@14 -- # nvmftestinit 00:16:20.097 00:50:32 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:16:20.097 00:50:32 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:20.097 00:50:32 -- nvmf/common.sh@436 -- # prepare_net_devs 00:16:20.097 00:50:32 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:16:20.097 00:50:32 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:16:20.097 00:50:32 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:20.097 00:50:32 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:20.097 00:50:32 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:20.097 00:50:32 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:16:20.097 00:50:32 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:16:20.097 00:50:32 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:16:20.097 00:50:32 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:16:20.097 00:50:32 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:16:20.097 00:50:32 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:16:20.097 00:50:32 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:20.097 00:50:32 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:20.097 00:50:32 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:16:20.097 00:50:32 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:16:20.097 00:50:32 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:16:20.097 00:50:32 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:16:20.097 00:50:32 -- nvmf/common.sh@146 -- # 
NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:16:20.097 00:50:32 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:20.097 00:50:32 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:16:20.097 00:50:32 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:16:20.097 00:50:32 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:16:20.097 00:50:32 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:16:20.097 00:50:32 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:16:20.097 00:50:32 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:16:20.097 Cannot find device "nvmf_tgt_br" 00:16:20.097 00:50:32 -- nvmf/common.sh@154 -- # true 00:16:20.097 00:50:32 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:16:20.097 Cannot find device "nvmf_tgt_br2" 00:16:20.097 00:50:32 -- nvmf/common.sh@155 -- # true 00:16:20.097 00:50:32 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:16:20.097 00:50:32 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:16:20.097 Cannot find device "nvmf_tgt_br" 00:16:20.097 00:50:32 -- nvmf/common.sh@157 -- # true 00:16:20.097 00:50:32 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:16:20.355 Cannot find device "nvmf_tgt_br2" 00:16:20.355 00:50:32 -- nvmf/common.sh@158 -- # true 00:16:20.355 00:50:32 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:16:20.355 00:50:32 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:16:20.355 00:50:32 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:16:20.355 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:20.355 00:50:32 -- nvmf/common.sh@161 -- # true 00:16:20.355 00:50:32 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:16:20.355 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:20.355 00:50:32 -- nvmf/common.sh@162 -- # true 00:16:20.355 00:50:32 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:16:20.355 00:50:32 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:16:20.355 00:50:32 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:16:20.355 00:50:32 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:16:20.355 00:50:32 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:16:20.355 00:50:32 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:16:20.355 00:50:32 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:16:20.355 00:50:32 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:16:20.355 00:50:32 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:16:20.355 00:50:32 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:16:20.355 00:50:32 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:16:20.355 00:50:32 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:16:20.355 00:50:32 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:16:20.355 00:50:32 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:16:20.355 00:50:32 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:16:20.355 00:50:32 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip 
link set lo up 00:16:20.355 00:50:32 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:16:20.355 00:50:32 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:16:20.355 00:50:32 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:16:20.355 00:50:32 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:16:20.355 00:50:32 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:16:20.355 00:50:32 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:16:20.355 00:50:32 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:16:20.355 00:50:32 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:16:20.355 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:20.355 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.132 ms 00:16:20.355 00:16:20.355 --- 10.0.0.2 ping statistics --- 00:16:20.355 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:20.355 rtt min/avg/max/mdev = 0.132/0.132/0.132/0.000 ms 00:16:20.355 00:50:32 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:16:20.355 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:16:20.355 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.064 ms 00:16:20.355 00:16:20.355 --- 10.0.0.3 ping statistics --- 00:16:20.355 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:20.355 rtt min/avg/max/mdev = 0.064/0.064/0.064/0.000 ms 00:16:20.355 00:50:32 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:16:20.355 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:16:20.355 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.025 ms 00:16:20.355 00:16:20.355 --- 10.0.0.1 ping statistics --- 00:16:20.355 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:20.355 rtt min/avg/max/mdev = 0.025/0.025/0.025/0.000 ms 00:16:20.355 00:50:32 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:20.355 00:50:32 -- nvmf/common.sh@421 -- # return 0 00:16:20.355 00:50:32 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:16:20.355 00:50:32 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:20.355 00:50:32 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:16:20.355 00:50:32 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:16:20.355 00:50:32 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:20.355 00:50:32 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:16:20.355 00:50:32 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:16:20.615 00:50:32 -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:16:20.615 00:50:32 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:16:20.615 00:50:32 -- common/autotest_common.sh@722 -- # xtrace_disable 00:16:20.615 00:50:32 -- common/autotest_common.sh@10 -- # set +x 00:16:20.615 00:50:32 -- nvmf/common.sh@469 -- # nvmfpid=88185 00:16:20.615 00:50:32 -- nvmf/common.sh@470 -- # waitforlisten 88185 00:16:20.615 00:50:32 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78 00:16:20.615 00:50:32 -- common/autotest_common.sh@829 -- # '[' -z 88185 ']' 00:16:20.615 00:50:32 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:20.615 00:50:32 -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:20.615 00:50:32 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
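This second pass repeats the bdevio scenario with hugepages disabled: nvmf_tgt above is launched with --no-huge -s 1024, so it runs out of a fixed 1024 MB of ordinary (non-hugepage) memory, and the EAL parameter line that follows shows --no-huge --iova-mode=va. For comparison, the two target launch commands as they appear in this log, plus a note on the core mask:

# First run (hugepage-backed):
ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78
# This run (no hugepages, fixed 1024 MB region):
ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78
# -m 0x78 = 0b1111000, i.e. cores 3, 4, 5 and 6, matching the four "Reactor started on core" lines.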
00:16:20.615 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:20.615 00:50:32 -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:20.615 00:50:32 -- common/autotest_common.sh@10 -- # set +x 00:16:20.615 [2024-12-03 00:50:32.934719] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:16:20.615 [2024-12-03 00:50:32.935190] [ DPDK EAL parameters: nvmf -c 0x78 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk0 --proc-type=auto ] 00:16:20.615 [2024-12-03 00:50:33.072017] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:16:20.874 [2024-12-03 00:50:33.165985] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:16:20.874 [2024-12-03 00:50:33.166145] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:20.874 [2024-12-03 00:50:33.166159] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:20.874 [2024-12-03 00:50:33.166168] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:20.874 [2024-12-03 00:50:33.166348] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:16:20.874 [2024-12-03 00:50:33.167527] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5 00:16:20.874 [2024-12-03 00:50:33.167642] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6 00:16:20.874 [2024-12-03 00:50:33.167654] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:16:21.441 00:50:33 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:21.441 00:50:33 -- common/autotest_common.sh@862 -- # return 0 00:16:21.441 00:50:33 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:16:21.441 00:50:33 -- common/autotest_common.sh@728 -- # xtrace_disable 00:16:21.441 00:50:33 -- common/autotest_common.sh@10 -- # set +x 00:16:21.441 00:50:33 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:21.441 00:50:33 -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:16:21.441 00:50:33 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:21.441 00:50:33 -- common/autotest_common.sh@10 -- # set +x 00:16:21.441 [2024-12-03 00:50:33.918430] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:21.441 00:50:33 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:21.441 00:50:33 -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:16:21.441 00:50:33 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:21.441 00:50:33 -- common/autotest_common.sh@10 -- # set +x 00:16:21.441 Malloc0 00:16:21.441 00:50:33 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:21.441 00:50:33 -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:16:21.441 00:50:33 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:21.441 00:50:33 -- common/autotest_common.sh@10 -- # set +x 00:16:21.441 00:50:33 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:21.441 00:50:33 -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:16:21.441 00:50:33 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:21.441 00:50:33 -- common/autotest_common.sh@10 -- # set +x 
00:16:21.700 00:50:33 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:21.700 00:50:33 -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:21.700 00:50:33 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:21.700 00:50:33 -- common/autotest_common.sh@10 -- # set +x 00:16:21.700 [2024-12-03 00:50:33.960787] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:21.700 00:50:33 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:21.700 00:50:33 -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:16:21.700 00:50:33 -- target/bdevio.sh@24 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 --no-huge -s 1024 00:16:21.700 00:50:33 -- nvmf/common.sh@520 -- # config=() 00:16:21.700 00:50:33 -- nvmf/common.sh@520 -- # local subsystem config 00:16:21.700 00:50:33 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:16:21.700 00:50:33 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:16:21.700 { 00:16:21.700 "params": { 00:16:21.700 "name": "Nvme$subsystem", 00:16:21.700 "trtype": "$TEST_TRANSPORT", 00:16:21.700 "traddr": "$NVMF_FIRST_TARGET_IP", 00:16:21.700 "adrfam": "ipv4", 00:16:21.700 "trsvcid": "$NVMF_PORT", 00:16:21.700 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:16:21.700 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:16:21.700 "hdgst": ${hdgst:-false}, 00:16:21.700 "ddgst": ${ddgst:-false} 00:16:21.700 }, 00:16:21.700 "method": "bdev_nvme_attach_controller" 00:16:21.700 } 00:16:21.700 EOF 00:16:21.700 )") 00:16:21.700 00:50:33 -- nvmf/common.sh@542 -- # cat 00:16:21.700 00:50:33 -- nvmf/common.sh@544 -- # jq . 00:16:21.700 00:50:33 -- nvmf/common.sh@545 -- # IFS=, 00:16:21.700 00:50:33 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:16:21.700 "params": { 00:16:21.700 "name": "Nvme1", 00:16:21.700 "trtype": "tcp", 00:16:21.700 "traddr": "10.0.0.2", 00:16:21.700 "adrfam": "ipv4", 00:16:21.700 "trsvcid": "4420", 00:16:21.700 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:16:21.700 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:16:21.700 "hdgst": false, 00:16:21.700 "ddgst": false 00:16:21.700 }, 00:16:21.700 "method": "bdev_nvme_attach_controller" 00:16:21.700 }' 00:16:21.700 [2024-12-03 00:50:34.010151] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:16:21.700 [2024-12-03 00:50:34.010253] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk_pid88241 ] 00:16:21.700 [2024-12-03 00:50:34.140573] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:16:21.959 [2024-12-03 00:50:34.251801] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:16:21.959 [2024-12-03 00:50:34.251939] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:16:21.959 [2024-12-03 00:50:34.251944] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:21.959 [2024-12-03 00:50:34.420924] rpc.c: 181:spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
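bdevio itself also runs without hugepages in this pass: the invocation above adds --no-huge -s 1024, and its EAL parameter line shows -c 0x7 -m 1024 --no-huge --iova-mode=va. The rpc.c errors around this point (the socket-in-use message above and the "Unable to start RPC service" line that follows) are the same benign condition seen in the first run: the default /var/tmp/spdk.sock is held by the running target and bdevio's own RPC server is never needed. A tiny illustrative decode of the core mask from the log:

# -c 0x7 selects cores 0, 1 and 2, exactly where the bdevio reactors start in the trace above.
mask=0x7
printf 'cores:'
for bit in $(seq 0 31); do
    (( (mask >> bit) & 1 )) && printf ' %d' "$bit"
done
echo    # prints: cores: 0 1 2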
00:16:21.959 [2024-12-03 00:50:34.420988] rpc.c: 90:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:16:21.959 I/O targets: 00:16:21.959 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:16:21.959 00:16:21.959 00:16:21.959 CUnit - A unit testing framework for C - Version 2.1-3 00:16:21.959 http://cunit.sourceforge.net/ 00:16:21.959 00:16:21.959 00:16:21.959 Suite: bdevio tests on: Nvme1n1 00:16:21.959 Test: blockdev write read block ...passed 00:16:22.218 Test: blockdev write zeroes read block ...passed 00:16:22.218 Test: blockdev write zeroes read no split ...passed 00:16:22.218 Test: blockdev write zeroes read split ...passed 00:16:22.218 Test: blockdev write zeroes read split partial ...passed 00:16:22.218 Test: blockdev reset ...[2024-12-03 00:50:34.545848] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:22.218 [2024-12-03 00:50:34.545959] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2480820 (9): Bad file descriptor 00:16:22.218 [2024-12-03 00:50:34.559718] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:16:22.218 passed 00:16:22.218 Test: blockdev write read 8 blocks ...passed 00:16:22.218 Test: blockdev write read size > 128k ...passed 00:16:22.218 Test: blockdev write read invalid size ...passed 00:16:22.218 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:16:22.218 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:16:22.218 Test: blockdev write read max offset ...passed 00:16:22.218 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:16:22.218 Test: blockdev writev readv 8 blocks ...passed 00:16:22.218 Test: blockdev writev readv 30 x 1block ...passed 00:16:22.218 Test: blockdev writev readv block ...passed 00:16:22.218 Test: blockdev writev readv size > 128k ...passed 00:16:22.218 Test: blockdev writev readv size > 128k in two iovs ...passed 00:16:22.476 Test: blockdev comparev and writev ...[2024-12-03 00:50:34.733402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:16:22.476 [2024-12-03 00:50:34.733479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:22.476 [2024-12-03 00:50:34.733514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:16:22.476 [2024-12-03 00:50:34.733523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:22.476 [2024-12-03 00:50:34.734016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:16:22.476 [2024-12-03 00:50:34.734041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:16:22.476 [2024-12-03 00:50:34.734057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:16:22.476 [2024-12-03 00:50:34.734067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:16:22.477 [2024-12-03 00:50:34.734826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE 
sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:16:22.477 [2024-12-03 00:50:34.734867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:16:22.477 [2024-12-03 00:50:34.734914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:16:22.477 [2024-12-03 00:50:34.734923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:16:22.477 [2024-12-03 00:50:34.735294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:16:22.477 [2024-12-03 00:50:34.735317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:16:22.477 [2024-12-03 00:50:34.735350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:16:22.477 [2024-12-03 00:50:34.735360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:16:22.477 passed 00:16:22.477 Test: blockdev nvme passthru rw ...passed 00:16:22.477 Test: blockdev nvme passthru vendor specific ...[2024-12-03 00:50:34.817784] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:16:22.477 [2024-12-03 00:50:34.817814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:16:22.477 [2024-12-03 00:50:34.818009] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:16:22.477 [2024-12-03 00:50:34.818024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:16:22.477 [2024-12-03 00:50:34.818142] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:16:22.477 [2024-12-03 00:50:34.818157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:16:22.477 [2024-12-03 00:50:34.818309] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:16:22.477 [2024-12-03 00:50:34.818325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:16:22.477 passed 00:16:22.477 Test: blockdev nvme admin passthru ...passed 00:16:22.477 Test: blockdev copy ...passed 00:16:22.477 00:16:22.477 Run Summary: Type Total Ran Passed Failed Inactive 00:16:22.477 suites 1 1 n/a 0 0 00:16:22.477 tests 23 23 23 0 0 00:16:22.477 asserts 152 152 152 0 n/a 00:16:22.477 00:16:22.477 Elapsed time = 0.917 seconds 00:16:22.734 00:50:35 -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:22.734 00:50:35 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:22.734 00:50:35 -- common/autotest_common.sh@10 -- # set +x 00:16:22.734 00:50:35 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:22.734 00:50:35 -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:16:22.734 00:50:35 -- target/bdevio.sh@30 -- # nvmftestfini 00:16:22.734 00:50:35 -- nvmf/common.sh@476 
-- # nvmfcleanup 00:16:22.734 00:50:35 -- nvmf/common.sh@116 -- # sync 00:16:22.992 00:50:35 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:16:22.992 00:50:35 -- nvmf/common.sh@119 -- # set +e 00:16:22.992 00:50:35 -- nvmf/common.sh@120 -- # for i in {1..20} 00:16:22.992 00:50:35 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:16:22.992 rmmod nvme_tcp 00:16:22.992 rmmod nvme_fabrics 00:16:22.992 rmmod nvme_keyring 00:16:22.992 00:50:35 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:16:22.992 00:50:35 -- nvmf/common.sh@123 -- # set -e 00:16:22.992 00:50:35 -- nvmf/common.sh@124 -- # return 0 00:16:22.992 00:50:35 -- nvmf/common.sh@477 -- # '[' -n 88185 ']' 00:16:22.992 00:50:35 -- nvmf/common.sh@478 -- # killprocess 88185 00:16:22.992 00:50:35 -- common/autotest_common.sh@936 -- # '[' -z 88185 ']' 00:16:22.992 00:50:35 -- common/autotest_common.sh@940 -- # kill -0 88185 00:16:22.992 00:50:35 -- common/autotest_common.sh@941 -- # uname 00:16:22.992 00:50:35 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:16:22.992 00:50:35 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 88185 00:16:22.992 00:50:35 -- common/autotest_common.sh@942 -- # process_name=reactor_3 00:16:22.992 00:50:35 -- common/autotest_common.sh@946 -- # '[' reactor_3 = sudo ']' 00:16:22.992 killing process with pid 88185 00:16:22.992 00:50:35 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 88185' 00:16:22.992 00:50:35 -- common/autotest_common.sh@955 -- # kill 88185 00:16:22.992 00:50:35 -- common/autotest_common.sh@960 -- # wait 88185 00:16:23.251 00:50:35 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:16:23.251 00:50:35 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:16:23.251 00:50:35 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:16:23.251 00:50:35 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:16:23.251 00:50:35 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:16:23.251 00:50:35 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:23.251 00:50:35 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:23.251 00:50:35 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:23.510 00:50:35 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:16:23.510 00:16:23.510 real 0m3.438s 00:16:23.510 user 0m12.151s 00:16:23.510 sys 0m1.272s 00:16:23.510 00:50:35 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:16:23.510 00:50:35 -- common/autotest_common.sh@10 -- # set +x 00:16:23.510 ************************************ 00:16:23.510 END TEST nvmf_bdevio_no_huge 00:16:23.510 ************************************ 00:16:23.510 00:50:35 -- nvmf/nvmf.sh@59 -- # run_test nvmf_tls /home/vagrant/spdk_repo/spdk/test/nvmf/target/tls.sh --transport=tcp 00:16:23.510 00:50:35 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:16:23.510 00:50:35 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:16:23.510 00:50:35 -- common/autotest_common.sh@10 -- # set +x 00:16:23.510 ************************************ 00:16:23.510 START TEST nvmf_tls 00:16:23.510 ************************************ 00:16:23.510 00:50:35 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/tls.sh --transport=tcp 00:16:23.510 * Looking for test storage... 
00:16:23.510 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:16:23.510 00:50:35 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:16:23.510 00:50:35 -- common/autotest_common.sh@1690 -- # lcov --version 00:16:23.510 00:50:35 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:16:23.510 00:50:35 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:16:23.510 00:50:35 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:16:23.510 00:50:35 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:16:23.510 00:50:35 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:16:23.510 00:50:35 -- scripts/common.sh@335 -- # IFS=.-: 00:16:23.510 00:50:35 -- scripts/common.sh@335 -- # read -ra ver1 00:16:23.510 00:50:35 -- scripts/common.sh@336 -- # IFS=.-: 00:16:23.510 00:50:35 -- scripts/common.sh@336 -- # read -ra ver2 00:16:23.510 00:50:35 -- scripts/common.sh@337 -- # local 'op=<' 00:16:23.510 00:50:35 -- scripts/common.sh@339 -- # ver1_l=2 00:16:23.510 00:50:35 -- scripts/common.sh@340 -- # ver2_l=1 00:16:23.510 00:50:35 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:16:23.510 00:50:35 -- scripts/common.sh@343 -- # case "$op" in 00:16:23.510 00:50:35 -- scripts/common.sh@344 -- # : 1 00:16:23.510 00:50:35 -- scripts/common.sh@363 -- # (( v = 0 )) 00:16:23.510 00:50:35 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:16:23.510 00:50:35 -- scripts/common.sh@364 -- # decimal 1 00:16:23.510 00:50:35 -- scripts/common.sh@352 -- # local d=1 00:16:23.510 00:50:35 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:23.510 00:50:35 -- scripts/common.sh@354 -- # echo 1 00:16:23.510 00:50:35 -- scripts/common.sh@364 -- # ver1[v]=1 00:16:23.510 00:50:35 -- scripts/common.sh@365 -- # decimal 2 00:16:23.510 00:50:35 -- scripts/common.sh@352 -- # local d=2 00:16:23.510 00:50:35 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:23.510 00:50:35 -- scripts/common.sh@354 -- # echo 2 00:16:23.510 00:50:35 -- scripts/common.sh@365 -- # ver2[v]=2 00:16:23.510 00:50:35 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:16:23.510 00:50:35 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:16:23.510 00:50:35 -- scripts/common.sh@367 -- # return 0 00:16:23.510 00:50:35 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:23.510 00:50:35 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:16:23.510 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:23.510 --rc genhtml_branch_coverage=1 00:16:23.510 --rc genhtml_function_coverage=1 00:16:23.510 --rc genhtml_legend=1 00:16:23.510 --rc geninfo_all_blocks=1 00:16:23.510 --rc geninfo_unexecuted_blocks=1 00:16:23.510 00:16:23.510 ' 00:16:23.510 00:50:35 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:16:23.510 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:23.510 --rc genhtml_branch_coverage=1 00:16:23.510 --rc genhtml_function_coverage=1 00:16:23.510 --rc genhtml_legend=1 00:16:23.510 --rc geninfo_all_blocks=1 00:16:23.510 --rc geninfo_unexecuted_blocks=1 00:16:23.510 00:16:23.510 ' 00:16:23.510 00:50:35 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:16:23.510 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:23.510 --rc genhtml_branch_coverage=1 00:16:23.510 --rc genhtml_function_coverage=1 00:16:23.510 --rc genhtml_legend=1 00:16:23.510 --rc geninfo_all_blocks=1 00:16:23.510 --rc geninfo_unexecuted_blocks=1 00:16:23.510 00:16:23.510 ' 00:16:23.510 
00:50:35 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:16:23.510 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:23.510 --rc genhtml_branch_coverage=1 00:16:23.510 --rc genhtml_function_coverage=1 00:16:23.510 --rc genhtml_legend=1 00:16:23.510 --rc geninfo_all_blocks=1 00:16:23.510 --rc geninfo_unexecuted_blocks=1 00:16:23.510 00:16:23.510 ' 00:16:23.510 00:50:35 -- target/tls.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:16:23.510 00:50:35 -- nvmf/common.sh@7 -- # uname -s 00:16:23.510 00:50:35 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:23.510 00:50:35 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:23.510 00:50:35 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:23.510 00:50:35 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:23.510 00:50:35 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:23.510 00:50:35 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:23.510 00:50:35 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:23.510 00:50:35 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:23.510 00:50:35 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:23.510 00:50:36 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:23.510 00:50:36 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:15939434-fa82-47b6-ae7c-b62ba203cee8 00:16:23.510 00:50:36 -- nvmf/common.sh@18 -- # NVME_HOSTID=15939434-fa82-47b6-ae7c-b62ba203cee8 00:16:23.511 00:50:36 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:23.511 00:50:36 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:23.511 00:50:36 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:16:23.511 00:50:36 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:16:23.511 00:50:36 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:23.511 00:50:36 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:23.511 00:50:36 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:23.511 00:50:36 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:23.511 00:50:36 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:23.511 00:50:36 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:23.511 00:50:36 -- paths/export.sh@5 -- # export PATH 00:16:23.511 00:50:36 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:23.511 00:50:36 -- nvmf/common.sh@46 -- # : 0 00:16:23.511 00:50:36 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:16:23.511 00:50:36 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:16:23.511 00:50:36 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:16:23.511 00:50:36 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:23.511 00:50:36 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:23.511 00:50:36 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:16:23.511 00:50:36 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:16:23.511 00:50:36 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:16:23.511 00:50:36 -- target/tls.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:16:23.511 00:50:36 -- target/tls.sh@71 -- # nvmftestinit 00:16:23.511 00:50:36 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:16:23.511 00:50:36 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:23.511 00:50:36 -- nvmf/common.sh@436 -- # prepare_net_devs 00:16:23.511 00:50:36 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:16:23.511 00:50:36 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:16:23.511 00:50:36 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:23.511 00:50:36 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:23.511 00:50:36 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:23.770 00:50:36 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:16:23.770 00:50:36 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:16:23.770 00:50:36 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:16:23.770 00:50:36 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:16:23.770 00:50:36 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:16:23.770 00:50:36 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:16:23.770 00:50:36 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:23.770 00:50:36 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:23.770 00:50:36 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:16:23.770 00:50:36 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:16:23.770 00:50:36 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:16:23.770 00:50:36 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:16:23.770 00:50:36 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:16:23.770 
00:50:36 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:23.770 00:50:36 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:16:23.770 00:50:36 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:16:23.770 00:50:36 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:16:23.770 00:50:36 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:16:23.770 00:50:36 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:16:23.770 00:50:36 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:16:23.770 Cannot find device "nvmf_tgt_br" 00:16:23.770 00:50:36 -- nvmf/common.sh@154 -- # true 00:16:23.770 00:50:36 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:16:23.770 Cannot find device "nvmf_tgt_br2" 00:16:23.770 00:50:36 -- nvmf/common.sh@155 -- # true 00:16:23.770 00:50:36 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:16:23.770 00:50:36 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:16:23.770 Cannot find device "nvmf_tgt_br" 00:16:23.770 00:50:36 -- nvmf/common.sh@157 -- # true 00:16:23.770 00:50:36 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:16:23.770 Cannot find device "nvmf_tgt_br2" 00:16:23.770 00:50:36 -- nvmf/common.sh@158 -- # true 00:16:23.770 00:50:36 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:16:23.770 00:50:36 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:16:23.770 00:50:36 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:16:23.770 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:23.770 00:50:36 -- nvmf/common.sh@161 -- # true 00:16:23.770 00:50:36 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:16:23.770 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:23.770 00:50:36 -- nvmf/common.sh@162 -- # true 00:16:23.770 00:50:36 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:16:23.770 00:50:36 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:16:23.770 00:50:36 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:16:23.770 00:50:36 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:16:23.770 00:50:36 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:16:23.770 00:50:36 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:16:23.770 00:50:36 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:16:23.770 00:50:36 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:16:23.770 00:50:36 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:16:23.770 00:50:36 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:16:23.770 00:50:36 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:16:23.770 00:50:36 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:16:23.770 00:50:36 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:16:23.770 00:50:36 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:16:23.770 00:50:36 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:16:24.029 00:50:36 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:16:24.029 00:50:36 -- 
nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:16:24.029 00:50:36 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:16:24.029 00:50:36 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:16:24.029 00:50:36 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:16:24.029 00:50:36 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:16:24.029 00:50:36 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:16:24.029 00:50:36 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:16:24.029 00:50:36 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:16:24.029 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:24.029 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.057 ms 00:16:24.029 00:16:24.029 --- 10.0.0.2 ping statistics --- 00:16:24.029 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:24.029 rtt min/avg/max/mdev = 0.057/0.057/0.057/0.000 ms 00:16:24.029 00:50:36 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:16:24.029 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:16:24.029 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.030 ms 00:16:24.029 00:16:24.029 --- 10.0.0.3 ping statistics --- 00:16:24.029 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:24.029 rtt min/avg/max/mdev = 0.030/0.030/0.030/0.000 ms 00:16:24.029 00:50:36 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:16:24.029 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:16:24.029 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.031 ms 00:16:24.029 00:16:24.029 --- 10.0.0.1 ping statistics --- 00:16:24.029 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:24.029 rtt min/avg/max/mdev = 0.031/0.031/0.031/0.000 ms 00:16:24.029 00:50:36 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:24.029 00:50:36 -- nvmf/common.sh@421 -- # return 0 00:16:24.029 00:50:36 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:16:24.029 00:50:36 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:24.029 00:50:36 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:16:24.029 00:50:36 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:16:24.029 00:50:36 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:24.029 00:50:36 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:16:24.029 00:50:36 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:16:24.029 00:50:36 -- target/tls.sh@72 -- # nvmfappstart -m 0x2 --wait-for-rpc 00:16:24.029 00:50:36 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:16:24.029 00:50:36 -- common/autotest_common.sh@722 -- # xtrace_disable 00:16:24.029 00:50:36 -- common/autotest_common.sh@10 -- # set +x 00:16:24.029 00:50:36 -- nvmf/common.sh@469 -- # nvmfpid=88427 00:16:24.029 00:50:36 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc 00:16:24.029 00:50:36 -- nvmf/common.sh@470 -- # waitforlisten 88427 00:16:24.029 00:50:36 -- common/autotest_common.sh@829 -- # '[' -z 88427 ']' 00:16:24.029 00:50:36 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:24.029 00:50:36 -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:24.029 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
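The nvmf_veth_init sequence traced above builds the topology the rest of tls.sh runs against: a network namespace (nvmf_tgt_ns_spdk) holding the target-side veth ends, a bridge joining the peer ends on the host, and an iptables rule admitting NVMe/TCP traffic on port 4420. A condensed sketch of that setup, using only commands that appear in the trace (the intermediate 'ip link set ... up' calls and the verification pings are omitted):

  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br
  ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
  ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
  ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
  ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if                                  # initiator side
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if    # target, first listener address
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2   # target, second address
  ip link add nvmf_br type bridge
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br  master nvmf_br
  ip link set nvmf_tgt_br2 master nvmf_br
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
  iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT

With that in place, the three pings above (10.0.0.2, 10.0.0.3, and 10.0.0.1 from inside the namespace) confirm connectivity before nvmf_tgt is started inside the namespace with -m 0x2 --wait-for-rpc.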
00:16:24.029 00:50:36 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:24.029 00:50:36 -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:24.029 00:50:36 -- common/autotest_common.sh@10 -- # set +x 00:16:24.029 [2024-12-03 00:50:36.423928] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:16:24.029 [2024-12-03 00:50:36.424005] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:24.288 [2024-12-03 00:50:36.560087] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:24.288 [2024-12-03 00:50:36.642135] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:16:24.288 [2024-12-03 00:50:36.642345] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:24.288 [2024-12-03 00:50:36.642363] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:24.288 [2024-12-03 00:50:36.642376] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:24.288 [2024-12-03 00:50:36.642430] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:16:24.288 00:50:36 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:24.288 00:50:36 -- common/autotest_common.sh@862 -- # return 0 00:16:24.288 00:50:36 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:16:24.288 00:50:36 -- common/autotest_common.sh@728 -- # xtrace_disable 00:16:24.288 00:50:36 -- common/autotest_common.sh@10 -- # set +x 00:16:24.288 00:50:36 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:24.288 00:50:36 -- target/tls.sh@74 -- # '[' tcp '!=' tcp ']' 00:16:24.288 00:50:36 -- target/tls.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_set_default_impl -i ssl 00:16:24.547 true 00:16:24.547 00:50:37 -- target/tls.sh@82 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:16:24.547 00:50:37 -- target/tls.sh@82 -- # jq -r .tls_version 00:16:24.807 00:50:37 -- target/tls.sh@82 -- # version=0 00:16:24.807 00:50:37 -- target/tls.sh@83 -- # [[ 0 != \0 ]] 00:16:24.807 00:50:37 -- target/tls.sh@89 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:16:25.066 00:50:37 -- target/tls.sh@90 -- # jq -r .tls_version 00:16:25.066 00:50:37 -- target/tls.sh@90 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:16:25.325 00:50:37 -- target/tls.sh@90 -- # version=13 00:16:25.325 00:50:37 -- target/tls.sh@91 -- # [[ 13 != \1\3 ]] 00:16:25.325 00:50:37 -- target/tls.sh@97 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 7 00:16:25.622 00:50:38 -- target/tls.sh@98 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:16:25.622 00:50:38 -- target/tls.sh@98 -- # jq -r .tls_version 00:16:25.912 00:50:38 -- target/tls.sh@98 -- # version=7 00:16:25.912 00:50:38 -- target/tls.sh@99 -- # [[ 7 != \7 ]] 00:16:25.912 00:50:38 -- target/tls.sh@105 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:16:25.912 00:50:38 -- target/tls.sh@105 -- # jq -r .enable_ktls 00:16:26.183 00:50:38 -- 
target/tls.sh@105 -- # ktls=false 00:16:26.183 00:50:38 -- target/tls.sh@106 -- # [[ false != \f\a\l\s\e ]] 00:16:26.183 00:50:38 -- target/tls.sh@112 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --enable-ktls 00:16:26.442 00:50:38 -- target/tls.sh@113 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:16:26.442 00:50:38 -- target/tls.sh@113 -- # jq -r .enable_ktls 00:16:26.701 00:50:38 -- target/tls.sh@113 -- # ktls=true 00:16:26.701 00:50:38 -- target/tls.sh@114 -- # [[ true != \t\r\u\e ]] 00:16:26.701 00:50:38 -- target/tls.sh@120 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --disable-ktls 00:16:26.701 00:50:39 -- target/tls.sh@121 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:16:26.701 00:50:39 -- target/tls.sh@121 -- # jq -r .enable_ktls 00:16:26.959 00:50:39 -- target/tls.sh@121 -- # ktls=false 00:16:26.959 00:50:39 -- target/tls.sh@122 -- # [[ false != \f\a\l\s\e ]] 00:16:26.959 00:50:39 -- target/tls.sh@127 -- # format_interchange_psk 00112233445566778899aabbccddeeff 00:16:26.959 00:50:39 -- target/tls.sh@49 -- # local key hash crc 00:16:27.218 00:50:39 -- target/tls.sh@51 -- # key=00112233445566778899aabbccddeeff 00:16:27.218 00:50:39 -- target/tls.sh@51 -- # hash=01 00:16:27.218 00:50:39 -- target/tls.sh@52 -- # echo -n 00112233445566778899aabbccddeeff 00:16:27.218 00:50:39 -- target/tls.sh@52 -- # gzip -1 -c 00:16:27.218 00:50:39 -- target/tls.sh@52 -- # tail -c8 00:16:27.218 00:50:39 -- target/tls.sh@52 -- # head -c 4 00:16:27.218 00:50:39 -- target/tls.sh@52 -- # crc='p$H�' 00:16:27.218 00:50:39 -- target/tls.sh@54 -- # base64 /dev/fd/62 00:16:27.218 00:50:39 -- target/tls.sh@54 -- # echo -n '00112233445566778899aabbccddeeffp$H�' 00:16:27.218 00:50:39 -- target/tls.sh@54 -- # echo NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:16:27.218 00:50:39 -- target/tls.sh@127 -- # key=NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:16:27.218 00:50:39 -- target/tls.sh@128 -- # format_interchange_psk ffeeddccbbaa99887766554433221100 00:16:27.218 00:50:39 -- target/tls.sh@49 -- # local key hash crc 00:16:27.218 00:50:39 -- target/tls.sh@51 -- # key=ffeeddccbbaa99887766554433221100 00:16:27.218 00:50:39 -- target/tls.sh@51 -- # hash=01 00:16:27.218 00:50:39 -- target/tls.sh@52 -- # echo -n ffeeddccbbaa99887766554433221100 00:16:27.218 00:50:39 -- target/tls.sh@52 -- # gzip -1 -c 00:16:27.218 00:50:39 -- target/tls.sh@52 -- # head -c 4 00:16:27.218 00:50:39 -- target/tls.sh@52 -- # tail -c8 00:16:27.218 00:50:39 -- target/tls.sh@52 -- # crc=$'_\006o\330' 00:16:27.218 00:50:39 -- target/tls.sh@54 -- # base64 /dev/fd/62 00:16:27.218 00:50:39 -- target/tls.sh@54 -- # echo -n $'ffeeddccbbaa99887766554433221100_\006o\330' 00:16:27.218 00:50:39 -- target/tls.sh@54 -- # echo NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:16:27.218 00:50:39 -- target/tls.sh@128 -- # key_2=NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:16:27.218 00:50:39 -- target/tls.sh@130 -- # key_path=/home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt 00:16:27.218 00:50:39 -- target/tls.sh@131 -- # key_2_path=/home/vagrant/spdk_repo/spdk/test/nvmf/target/key2.txt 00:16:27.219 00:50:39 -- target/tls.sh@133 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:16:27.219 00:50:39 -- target/tls.sh@134 -- # echo -n NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 
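format_interchange_psk, traced above, converts a raw hex key into the NVMe TLS interchange format: the CRC32 of the key string is taken from a gzip trailer (the last 8 bytes of a gzip stream are the CRC32 followed by the input size, so 'tail -c8 | head -c 4' isolates the CRC), the 4 CRC bytes are appended to the key, and the result is base64-encoded and wrapped as NVMeTLSkey-1:01:<base64>:. A standalone sketch of the same derivation for the first key; note the real script feeds base64 through /dev/fd/62 rather than a shell variable, which is more robust when the CRC contains NUL or newline bytes, so this simplified form is only valid for keys whose CRC avoids those bytes (as this one does):

  key=00112233445566778899aabbccddeeff
  crc=$(echo -n "$key" | gzip -1 -c | tail -c8 | head -c 4)    # 4-byte CRC32 pulled from the gzip trailer
  psk="NVMeTLSkey-1:01:$(echo -n "${key}${crc}" | base64):"
  echo "$psk"    # NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ:

The second key (ffeeddccbbaa99887766554433221100) is derived the same way into key2.txt; the test later uses it as the deliberately wrong credential.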
00:16:27.219 00:50:39 -- target/tls.sh@136 -- # chmod 0600 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt 00:16:27.219 00:50:39 -- target/tls.sh@137 -- # chmod 0600 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key2.txt 00:16:27.219 00:50:39 -- target/tls.sh@139 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:16:27.219 00:50:39 -- target/tls.sh@140 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_start_init 00:16:27.790 00:50:40 -- target/tls.sh@142 -- # setup_nvmf_tgt /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt 00:16:27.790 00:50:40 -- target/tls.sh@58 -- # local key=/home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt 00:16:27.790 00:50:40 -- target/tls.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:16:27.790 [2024-12-03 00:50:40.213065] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:27.790 00:50:40 -- target/tls.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:16:28.049 00:50:40 -- target/tls.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:16:28.308 [2024-12-03 00:50:40.605094] tcp.c: 914:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:16:28.308 [2024-12-03 00:50:40.605317] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:28.308 00:50:40 -- target/tls.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:16:28.308 malloc0 00:16:28.567 00:50:40 -- target/tls.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:16:28.567 00:50:41 -- target/tls.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt 00:16:28.826 00:50:41 -- target/tls.sh@146 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -S ssl -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 hostnqn:nqn.2016-06.io.spdk:host1' --psk-path /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt 00:16:41.033 Initializing NVMe Controllers 00:16:41.033 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:16:41.033 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:16:41.033 Initialization complete. Launching workers. 
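The target that spdk_nvme_perf is exercising here was configured earlier in this trace by setup_nvmf_tgt. Condensed to the essential calls (rpc.py standing for /home/vagrant/spdk_repo/spdk/scripts/rpc.py, and key1.txt for the 0600-mode /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt written above), the sequence is:

  rpc.py sock_impl_set_options -i ssl --tls-version 13
  rpc.py framework_start_init
  rpc.py nvmf_create_transport -t tcp -o
  rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
  rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k
  rpc.py bdev_malloc_create 32 4096 -b malloc0
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
  rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key1.txt

The -k on the listener is what requests a TLS-secured port (hence the "TLS support is considered experimental" notices), and add_host with --psk registers key1.txt as the credential expected from host1 on that subsystem.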
00:16:41.033 ======================================================== 00:16:41.033 Latency(us) 00:16:41.033 Device Information : IOPS MiB/s Average min max 00:16:41.033 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 11926.77 46.59 5366.99 1468.91 12059.02 00:16:41.033 ======================================================== 00:16:41.033 Total : 11926.77 46.59 5366.99 1468.91 12059.02 00:16:41.033 00:16:41.034 00:50:51 -- target/tls.sh@152 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt 00:16:41.034 00:50:51 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:16:41.034 00:50:51 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:16:41.034 00:50:51 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:16:41.034 00:50:51 -- target/tls.sh@23 -- # psk='--psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt' 00:16:41.034 00:50:51 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:16:41.034 00:50:51 -- target/tls.sh@28 -- # bdevperf_pid=88779 00:16:41.034 00:50:51 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:16:41.034 00:50:51 -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:16:41.034 00:50:51 -- target/tls.sh@31 -- # waitforlisten 88779 /var/tmp/bdevperf.sock 00:16:41.034 00:50:51 -- common/autotest_common.sh@829 -- # '[' -z 88779 ']' 00:16:41.034 00:50:51 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:16:41.034 00:50:51 -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:41.034 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:16:41.034 00:50:51 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:16:41.034 00:50:51 -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:41.034 00:50:51 -- common/autotest_common.sh@10 -- # set +x 00:16:41.034 [2024-12-03 00:50:51.475158] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:16:41.034 [2024-12-03 00:50:51.475249] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid88779 ] 00:16:41.034 [2024-12-03 00:50:51.605477] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:41.034 [2024-12-03 00:50:51.666938] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:16:41.034 00:50:52 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:41.034 00:50:52 -- common/autotest_common.sh@862 -- # return 0 00:16:41.034 00:50:52 -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt 00:16:41.034 [2024-12-03 00:50:52.596081] bdev_nvme_rpc.c: 477:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:16:41.034 TLSTESTn1 00:16:41.034 00:50:52 -- target/tls.sh@41 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:16:41.034 Running I/O for 10 seconds... 
00:16:51.035 00:16:51.035 Latency(us) 00:16:51.035 [2024-12-03T00:51:03.550Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:51.035 [2024-12-03T00:51:03.550Z] Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:16:51.035 Verification LBA range: start 0x0 length 0x2000 00:16:51.035 TLSTESTn1 : 10.01 6521.48 25.47 0.00 0.00 19598.12 3664.06 25856.93 00:16:51.035 [2024-12-03T00:51:03.550Z] =================================================================================================================== 00:16:51.035 [2024-12-03T00:51:03.550Z] Total : 6521.48 25.47 0.00 0.00 19598.12 3664.06 25856.93 00:16:51.035 0 00:16:51.035 00:51:02 -- target/tls.sh@44 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:16:51.035 00:51:02 -- target/tls.sh@45 -- # killprocess 88779 00:16:51.035 00:51:02 -- common/autotest_common.sh@936 -- # '[' -z 88779 ']' 00:16:51.035 00:51:02 -- common/autotest_common.sh@940 -- # kill -0 88779 00:16:51.035 00:51:02 -- common/autotest_common.sh@941 -- # uname 00:16:51.035 00:51:02 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:16:51.035 00:51:02 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 88779 00:16:51.035 killing process with pid 88779 00:16:51.035 Received shutdown signal, test time was about 10.000000 seconds 00:16:51.035 00:16:51.035 Latency(us) 00:16:51.035 [2024-12-03T00:51:03.550Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:51.035 [2024-12-03T00:51:03.550Z] =================================================================================================================== 00:16:51.035 [2024-12-03T00:51:03.550Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:16:51.035 00:51:02 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:16:51.035 00:51:02 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:16:51.035 00:51:02 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 88779' 00:16:51.035 00:51:02 -- common/autotest_common.sh@955 -- # kill 88779 00:16:51.035 00:51:02 -- common/autotest_common.sh@960 -- # wait 88779 00:16:51.035 00:51:03 -- target/tls.sh@155 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key2.txt 00:16:51.035 00:51:03 -- common/autotest_common.sh@650 -- # local es=0 00:16:51.035 00:51:03 -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key2.txt 00:16:51.035 00:51:03 -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:16:51.035 00:51:03 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:51.035 00:51:03 -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:16:51.035 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
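On the initiator side, the successful 10-second run above attached a TLS-protected controller through bdevperf's RPC socket using the same key file the host was registered with; the exact call from the trace (rpc.py again standing for the full scripts/rpc.py path) is:

  rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST \
      -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
      -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 \
      --psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt

The bdevperf invocations that follow are deliberately broken variants of this call, each wrapped in NOT so that a failed attach counts as a pass: the wrong key (key2.txt), an unregistered host NQN (host2), a subsystem that does not exist (cnode2), and no --psk at all. In the host2 and cnode2 cases the target cannot even locate a PSK for the TLS identity, which the errors below print as "NVMe0R01 <hostnqn> <subnqn>"; with the mismatched key the connection simply drops during setup; in every case the initiator is left with "Transport endpoint is not connected" and bdev_nvme_attach_controller returns a -32602 JSON-RPC error.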
00:16:51.035 00:51:03 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:51.035 00:51:03 -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key2.txt 00:16:51.035 00:51:03 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:16:51.035 00:51:03 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:16:51.035 00:51:03 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:16:51.035 00:51:03 -- target/tls.sh@23 -- # psk='--psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key2.txt' 00:16:51.035 00:51:03 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:16:51.035 00:51:03 -- target/tls.sh@28 -- # bdevperf_pid=88926 00:16:51.035 00:51:03 -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:16:51.035 00:51:03 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:16:51.035 00:51:03 -- target/tls.sh@31 -- # waitforlisten 88926 /var/tmp/bdevperf.sock 00:16:51.035 00:51:03 -- common/autotest_common.sh@829 -- # '[' -z 88926 ']' 00:16:51.035 00:51:03 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:16:51.035 00:51:03 -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:51.035 00:51:03 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:16:51.035 00:51:03 -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:51.035 00:51:03 -- common/autotest_common.sh@10 -- # set +x 00:16:51.035 [2024-12-03 00:51:03.069273] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:16:51.035 [2024-12-03 00:51:03.069378] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid88926 ] 00:16:51.035 [2024-12-03 00:51:03.195170] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:51.035 [2024-12-03 00:51:03.253638] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:16:51.603 00:51:04 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:51.603 00:51:04 -- common/autotest_common.sh@862 -- # return 0 00:16:51.603 00:51:04 -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key2.txt 00:16:51.862 [2024-12-03 00:51:04.339791] bdev_nvme_rpc.c: 477:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:16:51.862 [2024-12-03 00:51:04.351199] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:16:51.862 [2024-12-03 00:51:04.351466] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x678cc0 (107): Transport endpoint is not connected 00:16:51.862 [2024-12-03 00:51:04.352437] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x678cc0 (9): Bad file descriptor 00:16:51.862 [2024-12-03 00:51:04.353435] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:51.862 [2024-12-03 00:51:04.353478] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:16:51.862 [2024-12-03 00:51:04.353489] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:16:51.862 2024/12/03 00:51:04 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 hostnqn:nqn.2016-06.io.spdk:host1 name:TLSTEST psk:/home/vagrant/spdk_repo/spdk/test/nvmf/target/key2.txt subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-32602 Msg=Invalid parameters 00:16:51.862 request: 00:16:51.862 { 00:16:51.862 "method": "bdev_nvme_attach_controller", 00:16:51.862 "params": { 00:16:51.862 "name": "TLSTEST", 00:16:51.862 "trtype": "tcp", 00:16:51.862 "traddr": "10.0.0.2", 00:16:51.862 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:16:51.862 "adrfam": "ipv4", 00:16:51.862 "trsvcid": "4420", 00:16:51.862 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:16:51.862 "psk": "/home/vagrant/spdk_repo/spdk/test/nvmf/target/key2.txt" 00:16:51.862 } 00:16:51.862 } 00:16:51.862 Got JSON-RPC error response 00:16:51.862 GoRPCClient: error on JSON-RPC call 00:16:52.122 00:51:04 -- target/tls.sh@36 -- # killprocess 88926 00:16:52.122 00:51:04 -- common/autotest_common.sh@936 -- # '[' -z 88926 ']' 00:16:52.122 00:51:04 -- common/autotest_common.sh@940 -- # kill -0 88926 00:16:52.122 00:51:04 -- common/autotest_common.sh@941 -- # uname 00:16:52.122 00:51:04 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:16:52.122 00:51:04 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 88926 00:16:52.122 killing process with pid 88926 00:16:52.122 Received shutdown signal, test time was about 10.000000 seconds 00:16:52.122 00:16:52.122 Latency(us) 00:16:52.122 [2024-12-03T00:51:04.637Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:52.122 [2024-12-03T00:51:04.637Z] =================================================================================================================== 00:16:52.122 [2024-12-03T00:51:04.637Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:16:52.122 00:51:04 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:16:52.122 00:51:04 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:16:52.122 00:51:04 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 88926' 00:16:52.122 00:51:04 -- common/autotest_common.sh@955 -- # kill 88926 00:16:52.122 00:51:04 -- common/autotest_common.sh@960 -- # wait 88926 00:16:52.122 00:51:04 -- target/tls.sh@37 -- # return 1 00:16:52.122 00:51:04 -- common/autotest_common.sh@653 -- # es=1 00:16:52.122 00:51:04 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:16:52.122 00:51:04 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:16:52.122 00:51:04 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:16:52.122 00:51:04 -- target/tls.sh@158 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt 00:16:52.122 00:51:04 -- common/autotest_common.sh@650 -- # local es=0 00:16:52.122 00:51:04 -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt 00:16:52.122 00:51:04 -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:16:52.122 00:51:04 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:52.122 00:51:04 -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:16:52.122 00:51:04 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:52.122 00:51:04 -- common/autotest_common.sh@653 -- # 
run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt 00:16:52.122 00:51:04 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:16:52.122 00:51:04 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:16:52.122 00:51:04 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host2 00:16:52.122 00:51:04 -- target/tls.sh@23 -- # psk='--psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt' 00:16:52.122 00:51:04 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:16:52.122 00:51:04 -- target/tls.sh@28 -- # bdevperf_pid=88972 00:16:52.122 00:51:04 -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:16:52.122 00:51:04 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:16:52.122 00:51:04 -- target/tls.sh@31 -- # waitforlisten 88972 /var/tmp/bdevperf.sock 00:16:52.122 00:51:04 -- common/autotest_common.sh@829 -- # '[' -z 88972 ']' 00:16:52.122 00:51:04 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:16:52.122 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:16:52.122 00:51:04 -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:52.122 00:51:04 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:16:52.122 00:51:04 -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:52.122 00:51:04 -- common/autotest_common.sh@10 -- # set +x 00:16:52.381 [2024-12-03 00:51:04.653022] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:16:52.381 [2024-12-03 00:51:04.653134] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid88972 ] 00:16:52.381 [2024-12-03 00:51:04.789857] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:52.381 [2024-12-03 00:51:04.862836] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:16:53.316 00:51:05 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:53.316 00:51:05 -- common/autotest_common.sh@862 -- # return 0 00:16:53.316 00:51:05 -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 --psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt 00:16:53.575 [2024-12-03 00:51:05.864766] bdev_nvme_rpc.c: 477:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:16:53.575 [2024-12-03 00:51:05.870978] tcp.c: 868:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:16:53.575 [2024-12-03 00:51:05.871011] posix.c: 583:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:16:53.575 [2024-12-03 00:51:05.871061] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:16:53.575 [2024-12-03 00:51:05.871318] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush 
tqpair=0xcc0cc0 (107): Transport endpoint is not connected 00:16:53.575 [2024-12-03 00:51:05.872276] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcc0cc0 (9): Bad file descriptor 00:16:53.575 [2024-12-03 00:51:05.873272] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:53.575 [2024-12-03 00:51:05.873295] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:16:53.575 [2024-12-03 00:51:05.873321] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:53.575 2024/12/03 00:51:05 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 hostnqn:nqn.2016-06.io.spdk:host2 name:TLSTEST psk:/home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-32602 Msg=Invalid parameters 00:16:53.575 request: 00:16:53.575 { 00:16:53.575 "method": "bdev_nvme_attach_controller", 00:16:53.575 "params": { 00:16:53.575 "name": "TLSTEST", 00:16:53.575 "trtype": "tcp", 00:16:53.575 "traddr": "10.0.0.2", 00:16:53.575 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:16:53.575 "adrfam": "ipv4", 00:16:53.575 "trsvcid": "4420", 00:16:53.575 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:16:53.575 "psk": "/home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt" 00:16:53.575 } 00:16:53.575 } 00:16:53.575 Got JSON-RPC error response 00:16:53.575 GoRPCClient: error on JSON-RPC call 00:16:53.575 00:51:05 -- target/tls.sh@36 -- # killprocess 88972 00:16:53.575 00:51:05 -- common/autotest_common.sh@936 -- # '[' -z 88972 ']' 00:16:53.575 00:51:05 -- common/autotest_common.sh@940 -- # kill -0 88972 00:16:53.575 00:51:05 -- common/autotest_common.sh@941 -- # uname 00:16:53.575 00:51:05 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:16:53.575 00:51:05 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 88972 00:16:53.575 00:51:05 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:16:53.575 killing process with pid 88972 00:16:53.575 00:51:05 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:16:53.575 00:51:05 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 88972' 00:16:53.575 Received shutdown signal, test time was about 10.000000 seconds 00:16:53.575 00:16:53.575 Latency(us) 00:16:53.575 [2024-12-03T00:51:06.090Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:53.575 [2024-12-03T00:51:06.090Z] =================================================================================================================== 00:16:53.575 [2024-12-03T00:51:06.090Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:16:53.575 00:51:05 -- common/autotest_common.sh@955 -- # kill 88972 00:16:53.575 00:51:05 -- common/autotest_common.sh@960 -- # wait 88972 00:16:53.834 00:51:06 -- target/tls.sh@37 -- # return 1 00:16:53.834 00:51:06 -- common/autotest_common.sh@653 -- # es=1 00:16:53.834 00:51:06 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:16:53.834 00:51:06 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:16:53.834 00:51:06 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:16:53.834 00:51:06 -- target/tls.sh@161 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt 00:16:53.834 00:51:06 -- 
common/autotest_common.sh@650 -- # local es=0 00:16:53.835 00:51:06 -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt 00:16:53.835 00:51:06 -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:16:53.835 00:51:06 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:53.835 00:51:06 -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:16:53.835 00:51:06 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:53.835 00:51:06 -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt 00:16:53.835 00:51:06 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:16:53.835 00:51:06 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode2 00:16:53.835 00:51:06 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:16:53.835 00:51:06 -- target/tls.sh@23 -- # psk='--psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt' 00:16:53.835 00:51:06 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:16:53.835 00:51:06 -- target/tls.sh@28 -- # bdevperf_pid=89017 00:16:53.835 00:51:06 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:16:53.835 00:51:06 -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:16:53.835 00:51:06 -- target/tls.sh@31 -- # waitforlisten 89017 /var/tmp/bdevperf.sock 00:16:53.835 00:51:06 -- common/autotest_common.sh@829 -- # '[' -z 89017 ']' 00:16:53.835 00:51:06 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:16:53.835 00:51:06 -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:53.835 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:16:53.835 00:51:06 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:16:53.835 00:51:06 -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:53.835 00:51:06 -- common/autotest_common.sh@10 -- # set +x 00:16:53.835 [2024-12-03 00:51:06.153925] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:16:53.835 [2024-12-03 00:51:06.154011] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid89017 ] 00:16:53.835 [2024-12-03 00:51:06.278860] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:53.835 [2024-12-03 00:51:06.343858] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:16:54.772 00:51:07 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:54.772 00:51:07 -- common/autotest_common.sh@862 -- # return 0 00:16:54.772 00:51:07 -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -q nqn.2016-06.io.spdk:host1 --psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt 00:16:55.031 [2024-12-03 00:51:07.380436] bdev_nvme_rpc.c: 477:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:16:55.031 [2024-12-03 00:51:07.391119] tcp.c: 868:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:16:55.031 [2024-12-03 00:51:07.391148] posix.c: 583:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:16:55.031 [2024-12-03 00:51:07.391189] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:16:55.031 [2024-12-03 00:51:07.391945] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1152cc0 (107): Transport endpoint is not connected 00:16:55.031 [2024-12-03 00:51:07.392934] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1152cc0 (9): Bad file descriptor 00:16:55.031 [2024-12-03 00:51:07.393930] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state 00:16:55.031 [2024-12-03 00:51:07.393964] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:16:55.031 [2024-12-03 00:51:07.393989] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state. 
00:16:55.031 2024/12/03 00:51:07 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 hostnqn:nqn.2016-06.io.spdk:host1 name:TLSTEST psk:/home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt subnqn:nqn.2016-06.io.spdk:cnode2 traddr:10.0.0.2 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-32602 Msg=Invalid parameters 00:16:55.031 request: 00:16:55.031 { 00:16:55.031 "method": "bdev_nvme_attach_controller", 00:16:55.031 "params": { 00:16:55.031 "name": "TLSTEST", 00:16:55.031 "trtype": "tcp", 00:16:55.031 "traddr": "10.0.0.2", 00:16:55.031 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:16:55.031 "adrfam": "ipv4", 00:16:55.031 "trsvcid": "4420", 00:16:55.031 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:16:55.031 "psk": "/home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt" 00:16:55.031 } 00:16:55.031 } 00:16:55.031 Got JSON-RPC error response 00:16:55.031 GoRPCClient: error on JSON-RPC call 00:16:55.031 00:51:07 -- target/tls.sh@36 -- # killprocess 89017 00:16:55.031 00:51:07 -- common/autotest_common.sh@936 -- # '[' -z 89017 ']' 00:16:55.031 00:51:07 -- common/autotest_common.sh@940 -- # kill -0 89017 00:16:55.031 00:51:07 -- common/autotest_common.sh@941 -- # uname 00:16:55.031 00:51:07 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:16:55.031 00:51:07 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 89017 00:16:55.031 00:51:07 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:16:55.031 00:51:07 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:16:55.031 killing process with pid 89017 00:16:55.031 00:51:07 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 89017' 00:16:55.031 00:51:07 -- common/autotest_common.sh@955 -- # kill 89017 00:16:55.031 Received shutdown signal, test time was about 10.000000 seconds 00:16:55.031 00:16:55.031 Latency(us) 00:16:55.031 [2024-12-03T00:51:07.546Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:55.031 [2024-12-03T00:51:07.546Z] =================================================================================================================== 00:16:55.031 [2024-12-03T00:51:07.547Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:16:55.032 00:51:07 -- common/autotest_common.sh@960 -- # wait 89017 00:16:55.292 00:51:07 -- target/tls.sh@37 -- # return 1 00:16:55.292 00:51:07 -- common/autotest_common.sh@653 -- # es=1 00:16:55.292 00:51:07 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:16:55.292 00:51:07 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:16:55.292 00:51:07 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:16:55.292 00:51:07 -- target/tls.sh@164 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:16:55.292 00:51:07 -- common/autotest_common.sh@650 -- # local es=0 00:16:55.292 00:51:07 -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:16:55.292 00:51:07 -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:16:55.292 00:51:07 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:55.292 00:51:07 -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:16:55.292 00:51:07 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:55.292 00:51:07 -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:16:55.292 00:51:07 -- 
target/tls.sh@22 -- # local subnqn hostnqn psk 00:16:55.292 00:51:07 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:16:55.292 00:51:07 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:16:55.292 00:51:07 -- target/tls.sh@23 -- # psk= 00:16:55.292 00:51:07 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:16:55.292 00:51:07 -- target/tls.sh@28 -- # bdevperf_pid=89063 00:16:55.292 00:51:07 -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:16:55.292 00:51:07 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:16:55.292 00:51:07 -- target/tls.sh@31 -- # waitforlisten 89063 /var/tmp/bdevperf.sock 00:16:55.292 00:51:07 -- common/autotest_common.sh@829 -- # '[' -z 89063 ']' 00:16:55.292 00:51:07 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:16:55.292 00:51:07 -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:55.292 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:16:55.292 00:51:07 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:16:55.292 00:51:07 -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:55.292 00:51:07 -- common/autotest_common.sh@10 -- # set +x 00:16:55.292 [2024-12-03 00:51:07.667412] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:16:55.292 [2024-12-03 00:51:07.667561] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid89063 ] 00:16:55.292 [2024-12-03 00:51:07.792105] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:55.551 [2024-12-03 00:51:07.859537] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:16:56.119 00:51:08 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:56.119 00:51:08 -- common/autotest_common.sh@862 -- # return 0 00:16:56.119 00:51:08 -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:16:56.378 [2024-12-03 00:51:08.851235] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:16:56.378 [2024-12-03 00:51:08.852512] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd468c0 (9): Bad file descriptor 00:16:56.378 [2024-12-03 00:51:08.853507] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:56.378 [2024-12-03 00:51:08.853528] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:16:56.378 [2024-12-03 00:51:08.853538] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:16:56.378 2024/12/03 00:51:08 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 hostnqn:nqn.2016-06.io.spdk:host1 name:TLSTEST subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-32602 Msg=Invalid parameters 00:16:56.378 request: 00:16:56.378 { 00:16:56.378 "method": "bdev_nvme_attach_controller", 00:16:56.378 "params": { 00:16:56.378 "name": "TLSTEST", 00:16:56.378 "trtype": "tcp", 00:16:56.378 "traddr": "10.0.0.2", 00:16:56.378 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:16:56.378 "adrfam": "ipv4", 00:16:56.378 "trsvcid": "4420", 00:16:56.378 "subnqn": "nqn.2016-06.io.spdk:cnode1" 00:16:56.378 } 00:16:56.378 } 00:16:56.378 Got JSON-RPC error response 00:16:56.378 GoRPCClient: error on JSON-RPC call 00:16:56.378 00:51:08 -- target/tls.sh@36 -- # killprocess 89063 00:16:56.378 00:51:08 -- common/autotest_common.sh@936 -- # '[' -z 89063 ']' 00:16:56.379 00:51:08 -- common/autotest_common.sh@940 -- # kill -0 89063 00:16:56.379 00:51:08 -- common/autotest_common.sh@941 -- # uname 00:16:56.379 00:51:08 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:16:56.379 00:51:08 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 89063 00:16:56.638 00:51:08 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:16:56.638 00:51:08 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:16:56.638 killing process with pid 89063 00:16:56.638 00:51:08 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 89063' 00:16:56.638 Received shutdown signal, test time was about 10.000000 seconds 00:16:56.638 00:16:56.638 Latency(us) 00:16:56.638 [2024-12-03T00:51:09.153Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:56.638 [2024-12-03T00:51:09.153Z] =================================================================================================================== 00:16:56.638 [2024-12-03T00:51:09.153Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:16:56.638 00:51:08 -- common/autotest_common.sh@955 -- # kill 89063 00:16:56.638 00:51:08 -- common/autotest_common.sh@960 -- # wait 89063 00:16:56.638 00:51:09 -- target/tls.sh@37 -- # return 1 00:16:56.638 00:51:09 -- common/autotest_common.sh@653 -- # es=1 00:16:56.638 00:51:09 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:16:56.638 00:51:09 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:16:56.638 00:51:09 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:16:56.638 00:51:09 -- target/tls.sh@167 -- # killprocess 88427 00:16:56.638 00:51:09 -- common/autotest_common.sh@936 -- # '[' -z 88427 ']' 00:16:56.638 00:51:09 -- common/autotest_common.sh@940 -- # kill -0 88427 00:16:56.638 00:51:09 -- common/autotest_common.sh@941 -- # uname 00:16:56.638 00:51:09 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:16:56.638 00:51:09 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 88427 00:16:56.638 00:51:09 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:16:56.638 00:51:09 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:16:56.638 killing process with pid 88427 00:16:56.638 00:51:09 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 88427' 00:16:56.638 00:51:09 -- common/autotest_common.sh@955 -- # kill 88427 00:16:56.638 00:51:09 -- common/autotest_common.sh@960 -- # wait 88427 00:16:56.897 00:51:09 -- target/tls.sh@168 -- # format_interchange_psk 
00112233445566778899aabbccddeeff0011223344556677 02 00:16:56.897 00:51:09 -- target/tls.sh@49 -- # local key hash crc 00:16:56.897 00:51:09 -- target/tls.sh@51 -- # key=00112233445566778899aabbccddeeff0011223344556677 00:16:56.897 00:51:09 -- target/tls.sh@51 -- # hash=02 00:16:56.897 00:51:09 -- target/tls.sh@52 -- # echo -n 00112233445566778899aabbccddeeff0011223344556677 00:16:56.897 00:51:09 -- target/tls.sh@52 -- # gzip -1 -c 00:16:56.897 00:51:09 -- target/tls.sh@52 -- # tail -c8 00:16:56.897 00:51:09 -- target/tls.sh@52 -- # head -c 4 00:16:56.897 00:51:09 -- target/tls.sh@52 -- # crc='�e�'\''' 00:16:56.897 00:51:09 -- target/tls.sh@54 -- # base64 /dev/fd/62 00:16:56.897 00:51:09 -- target/tls.sh@54 -- # echo -n '00112233445566778899aabbccddeeff0011223344556677�e�'\''' 00:16:56.897 00:51:09 -- target/tls.sh@54 -- # echo NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:16:56.897 00:51:09 -- target/tls.sh@168 -- # key_long=NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:16:56.897 00:51:09 -- target/tls.sh@169 -- # key_long_path=/home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:16:56.897 00:51:09 -- target/tls.sh@170 -- # echo -n NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:16:56.897 00:51:09 -- target/tls.sh@171 -- # chmod 0600 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:16:56.897 00:51:09 -- target/tls.sh@172 -- # nvmfappstart -m 0x2 00:16:56.897 00:51:09 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:16:56.897 00:51:09 -- common/autotest_common.sh@722 -- # xtrace_disable 00:16:56.897 00:51:09 -- common/autotest_common.sh@10 -- # set +x 00:16:56.897 00:51:09 -- nvmf/common.sh@469 -- # nvmfpid=89118 00:16:56.897 00:51:09 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:16:56.897 00:51:09 -- nvmf/common.sh@470 -- # waitforlisten 89118 00:16:56.897 00:51:09 -- common/autotest_common.sh@829 -- # '[' -z 89118 ']' 00:16:56.897 00:51:09 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:56.897 00:51:09 -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:56.897 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:56.897 00:51:09 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:56.897 00:51:09 -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:56.897 00:51:09 -- common/autotest_common.sh@10 -- # set +x 00:16:57.156 [2024-12-03 00:51:09.437247] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:16:57.156 [2024-12-03 00:51:09.437327] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:57.156 [2024-12-03 00:51:09.567877] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:57.156 [2024-12-03 00:51:09.626676] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:16:57.156 [2024-12-03 00:51:09.626826] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
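format_interchange_psk above is where the on-disk key comes from: it wraps the test's fixed hex key and hash identifier (02) into the NVMe TLS PSK interchange format, which is then written to key_long.txt and restricted to mode 0600. A condensed sketch of the same derivation, mirroring the helper's own steps (the last 8 bytes of a gzip stream are CRC-32 plus length, so tail -c8 | head -c4 extracts the CRC-32 of the key; adequate for this fixed key, though CRC bytes containing NUL would not survive shell substitution):

  key=00112233445566778899aabbccddeeff0011223344556677
  hash=02
  crc=$(echo -n "$key" | gzip -1 -c | tail -c8 | head -c4)          # raw CRC-32 bytes of the key string
  echo "NVMeTLSkey-1:$hash:$(echo -n "$key$crc" | base64):"         # -> NVMeTLSkey-1:02:MDAx...wWXNJw==:

The key here is the test's sample value; a real deployment would substitute its own keying material.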
00:16:57.156 [2024-12-03 00:51:09.626838] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:57.156 [2024-12-03 00:51:09.626846] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:57.156 [2024-12-03 00:51:09.626872] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:16:58.090 00:51:10 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:58.090 00:51:10 -- common/autotest_common.sh@862 -- # return 0 00:16:58.090 00:51:10 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:16:58.090 00:51:10 -- common/autotest_common.sh@728 -- # xtrace_disable 00:16:58.090 00:51:10 -- common/autotest_common.sh@10 -- # set +x 00:16:58.090 00:51:10 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:58.090 00:51:10 -- target/tls.sh@174 -- # setup_nvmf_tgt /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:16:58.090 00:51:10 -- target/tls.sh@58 -- # local key=/home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:16:58.090 00:51:10 -- target/tls.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:16:58.350 [2024-12-03 00:51:10.647520] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:58.350 00:51:10 -- target/tls.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:16:58.609 00:51:10 -- target/tls.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:16:58.609 [2024-12-03 00:51:11.123586] tcp.c: 914:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:16:58.609 [2024-12-03 00:51:11.123836] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:58.868 00:51:11 -- target/tls.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:16:58.868 malloc0 00:16:58.868 00:51:11 -- target/tls.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:16:59.126 00:51:11 -- target/tls.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:16:59.384 00:51:11 -- target/tls.sh@176 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:16:59.384 00:51:11 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:16:59.384 00:51:11 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:16:59.384 00:51:11 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:16:59.384 00:51:11 -- target/tls.sh@23 -- # psk='--psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt' 00:16:59.384 00:51:11 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:16:59.384 00:51:11 -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:16:59.384 00:51:11 -- target/tls.sh@28 -- # bdevperf_pid=89225 00:16:59.384 00:51:11 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:16:59.384 00:51:11 -- target/tls.sh@31 -- # waitforlisten 89225 /var/tmp/bdevperf.sock 00:16:59.384 00:51:11 -- 
common/autotest_common.sh@829 -- # '[' -z 89225 ']' 00:16:59.384 00:51:11 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:16:59.384 00:51:11 -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:59.384 00:51:11 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:16:59.384 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:16:59.384 00:51:11 -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:59.384 00:51:11 -- common/autotest_common.sh@10 -- # set +x 00:16:59.384 [2024-12-03 00:51:11.854088] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:16:59.385 [2024-12-03 00:51:11.854231] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid89225 ] 00:16:59.643 [2024-12-03 00:51:11.985723] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:59.643 [2024-12-03 00:51:12.050659] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:17:00.578 00:51:12 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:00.578 00:51:12 -- common/autotest_common.sh@862 -- # return 0 00:17:00.578 00:51:12 -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:17:00.578 [2024-12-03 00:51:13.018810] bdev_nvme_rpc.c: 477:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:17:00.578 TLSTESTn1 00:17:00.837 00:51:13 -- target/tls.sh@41 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:17:00.837 Running I/O for 10 seconds... 
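This is the positive TLS case: bdevperf is started in wait-for-RPC mode, the controller attaches with the interchange-format key (the "TLS support is considered experimental" notice), and the resulting TLSTESTn1 bdev is exercised for ten seconds, producing the latency table that follows. The host-side flow, condensed from the commands traced in this run (paths relative to the spdk repo):

  build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 &
  scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp \
      -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 \
      --psk test/nvmf/target/key_long.txt
  examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests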
00:17:10.823 00:17:10.823 Latency(us) 00:17:10.823 [2024-12-03T00:51:23.338Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:10.823 [2024-12-03T00:51:23.338Z] Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:17:10.823 Verification LBA range: start 0x0 length 0x2000 00:17:10.823 TLSTESTn1 : 10.01 6552.23 25.59 0.00 0.00 19507.33 3991.74 20614.05 00:17:10.823 [2024-12-03T00:51:23.338Z] =================================================================================================================== 00:17:10.823 [2024-12-03T00:51:23.338Z] Total : 6552.23 25.59 0.00 0.00 19507.33 3991.74 20614.05 00:17:10.823 0 00:17:10.823 00:51:23 -- target/tls.sh@44 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:17:10.823 00:51:23 -- target/tls.sh@45 -- # killprocess 89225 00:17:10.823 00:51:23 -- common/autotest_common.sh@936 -- # '[' -z 89225 ']' 00:17:10.823 00:51:23 -- common/autotest_common.sh@940 -- # kill -0 89225 00:17:10.823 00:51:23 -- common/autotest_common.sh@941 -- # uname 00:17:10.823 00:51:23 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:17:10.823 00:51:23 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 89225 00:17:10.823 00:51:23 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:17:10.823 00:51:23 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:17:10.823 killing process with pid 89225 00:17:10.823 00:51:23 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 89225' 00:17:10.823 Received shutdown signal, test time was about 10.000000 seconds 00:17:10.823 00:17:10.823 Latency(us) 00:17:10.823 [2024-12-03T00:51:23.338Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:10.823 [2024-12-03T00:51:23.338Z] =================================================================================================================== 00:17:10.823 [2024-12-03T00:51:23.338Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:17:10.823 00:51:23 -- common/autotest_common.sh@955 -- # kill 89225 00:17:10.823 00:51:23 -- common/autotest_common.sh@960 -- # wait 89225 00:17:11.083 00:51:23 -- target/tls.sh@179 -- # chmod 0666 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:17:11.083 00:51:23 -- target/tls.sh@180 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:17:11.083 00:51:23 -- common/autotest_common.sh@650 -- # local es=0 00:17:11.083 00:51:23 -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:17:11.083 00:51:23 -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:17:11.083 00:51:23 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:11.083 00:51:23 -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:17:11.083 00:51:23 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:11.083 00:51:23 -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:17:11.083 00:51:23 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:17:11.083 00:51:23 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:17:11.083 00:51:23 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:17:11.083 00:51:23 -- target/tls.sh@23 -- # psk='--psk 
/home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt' 00:17:11.083 00:51:23 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:17:11.083 00:51:23 -- target/tls.sh@28 -- # bdevperf_pid=89373 00:17:11.083 00:51:23 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:17:11.083 00:51:23 -- target/tls.sh@31 -- # waitforlisten 89373 /var/tmp/bdevperf.sock 00:17:11.083 00:51:23 -- common/autotest_common.sh@829 -- # '[' -z 89373 ']' 00:17:11.083 00:51:23 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:11.083 00:51:23 -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:11.083 00:51:23 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:11.083 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:17:11.083 00:51:23 -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:11.083 00:51:23 -- common/autotest_common.sh@10 -- # set +x 00:17:11.083 00:51:23 -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:17:11.083 [2024-12-03 00:51:23.549214] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:17:11.083 [2024-12-03 00:51:23.549308] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid89373 ] 00:17:11.342 [2024-12-03 00:51:23.687768] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:11.342 [2024-12-03 00:51:23.751494] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:17:12.278 00:51:24 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:12.278 00:51:24 -- common/autotest_common.sh@862 -- # return 0 00:17:12.278 00:51:24 -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:17:12.538 [2024-12-03 00:51:24.803568] bdev_nvme_rpc.c: 477:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:17:12.538 [2024-12-03 00:51:24.803613] bdev_nvme_rpc.c: 336:tcp_load_psk: *ERROR*: Incorrect permissions for PSK file 00:17:12.538 2024/12/03 00:51:24 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 hostnqn:nqn.2016-06.io.spdk:host1 name:TLSTEST psk:/home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-22 Msg=Could not retrieve PSK from file: /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:17:12.538 request: 00:17:12.538 { 00:17:12.538 "method": "bdev_nvme_attach_controller", 00:17:12.538 "params": { 00:17:12.538 "name": "TLSTEST", 00:17:12.538 "trtype": "tcp", 00:17:12.538 "traddr": "10.0.0.2", 00:17:12.538 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:17:12.538 "adrfam": "ipv4", 00:17:12.538 "trsvcid": "4420", 00:17:12.538 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:17:12.538 "psk": "/home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt" 00:17:12.538 } 00:17:12.538 } 00:17:12.538 Got 
JSON-RPC error response 00:17:12.538 GoRPCClient: error on JSON-RPC call 00:17:12.538 00:51:24 -- target/tls.sh@36 -- # killprocess 89373 00:17:12.538 00:51:24 -- common/autotest_common.sh@936 -- # '[' -z 89373 ']' 00:17:12.538 00:51:24 -- common/autotest_common.sh@940 -- # kill -0 89373 00:17:12.538 00:51:24 -- common/autotest_common.sh@941 -- # uname 00:17:12.538 00:51:24 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:17:12.538 00:51:24 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 89373 00:17:12.538 00:51:24 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:17:12.538 00:51:24 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:17:12.538 00:51:24 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 89373' 00:17:12.538 killing process with pid 89373 00:17:12.538 00:51:24 -- common/autotest_common.sh@955 -- # kill 89373 00:17:12.538 Received shutdown signal, test time was about 10.000000 seconds 00:17:12.538 00:17:12.538 Latency(us) 00:17:12.538 [2024-12-03T00:51:25.053Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:12.538 [2024-12-03T00:51:25.053Z] =================================================================================================================== 00:17:12.538 [2024-12-03T00:51:25.053Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:17:12.538 00:51:24 -- common/autotest_common.sh@960 -- # wait 89373 00:17:12.538 00:51:25 -- target/tls.sh@37 -- # return 1 00:17:12.538 00:51:25 -- common/autotest_common.sh@653 -- # es=1 00:17:12.538 00:51:25 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:17:12.538 00:51:25 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:17:12.538 00:51:25 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:17:12.538 00:51:25 -- target/tls.sh@183 -- # killprocess 89118 00:17:12.538 00:51:25 -- common/autotest_common.sh@936 -- # '[' -z 89118 ']' 00:17:12.538 00:51:25 -- common/autotest_common.sh@940 -- # kill -0 89118 00:17:12.538 00:51:25 -- common/autotest_common.sh@941 -- # uname 00:17:12.538 00:51:25 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:17:12.538 00:51:25 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 89118 00:17:12.795 00:51:25 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:17:12.795 00:51:25 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:17:12.795 killing process with pid 89118 00:17:12.795 00:51:25 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 89118' 00:17:12.795 00:51:25 -- common/autotest_common.sh@955 -- # kill 89118 00:17:12.795 00:51:25 -- common/autotest_common.sh@960 -- # wait 89118 00:17:13.053 00:51:25 -- target/tls.sh@184 -- # nvmfappstart -m 0x2 00:17:13.053 00:51:25 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:17:13.053 00:51:25 -- common/autotest_common.sh@722 -- # xtrace_disable 00:17:13.053 00:51:25 -- common/autotest_common.sh@10 -- # set +x 00:17:13.053 00:51:25 -- nvmf/common.sh@469 -- # nvmfpid=89424 00:17:13.053 00:51:25 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:17:13.053 00:51:25 -- nvmf/common.sh@470 -- # waitforlisten 89424 00:17:13.053 00:51:25 -- common/autotest_common.sh@829 -- # '[' -z 89424 ']' 00:17:13.053 00:51:25 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:13.053 00:51:25 -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:13.053 
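The Code=-22 failure above is the host-side permission check: once the key file has been loosened to 0666, bdev_nvme_attach_controller refuses to load it ("Incorrect permissions for PSK file"), and the target-side nvmf_subsystem_add_host check exercised next rejects the same file with -32603. The two modes the test toggles between:

  chmod 0666 test/nvmf/target/key_long.txt    # too permissive: the PSK loaders reject the file
  chmod 0600 test/nvmf/target/key_long.txt    # owner-only mode restored before the next positive case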
00:51:25 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:13.053 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:13.053 00:51:25 -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:13.053 00:51:25 -- common/autotest_common.sh@10 -- # set +x 00:17:13.053 [2024-12-03 00:51:25.372094] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:17:13.053 [2024-12-03 00:51:25.372178] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:13.053 [2024-12-03 00:51:25.505611] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:13.310 [2024-12-03 00:51:25.575369] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:17:13.310 [2024-12-03 00:51:25.575535] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:13.310 [2024-12-03 00:51:25.575548] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:13.310 [2024-12-03 00:51:25.575556] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:13.310 [2024-12-03 00:51:25.575581] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:17:13.878 00:51:26 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:13.878 00:51:26 -- common/autotest_common.sh@862 -- # return 0 00:17:13.878 00:51:26 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:17:13.878 00:51:26 -- common/autotest_common.sh@728 -- # xtrace_disable 00:17:13.878 00:51:26 -- common/autotest_common.sh@10 -- # set +x 00:17:14.137 00:51:26 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:14.137 00:51:26 -- target/tls.sh@186 -- # NOT setup_nvmf_tgt /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:17:14.137 00:51:26 -- common/autotest_common.sh@650 -- # local es=0 00:17:14.137 00:51:26 -- common/autotest_common.sh@652 -- # valid_exec_arg setup_nvmf_tgt /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:17:14.137 00:51:26 -- common/autotest_common.sh@638 -- # local arg=setup_nvmf_tgt 00:17:14.137 00:51:26 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:14.137 00:51:26 -- common/autotest_common.sh@642 -- # type -t setup_nvmf_tgt 00:17:14.137 00:51:26 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:14.137 00:51:26 -- common/autotest_common.sh@653 -- # setup_nvmf_tgt /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:17:14.137 00:51:26 -- target/tls.sh@58 -- # local key=/home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:17:14.137 00:51:26 -- target/tls.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:17:14.396 [2024-12-03 00:51:26.652224] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:14.396 00:51:26 -- target/tls.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:17:14.396 00:51:26 -- target/tls.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:17:14.669 
[2024-12-03 00:51:27.060282] tcp.c: 914:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:17:14.669 [2024-12-03 00:51:27.060507] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:14.669 00:51:27 -- target/tls.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:17:14.957 malloc0 00:17:14.957 00:51:27 -- target/tls.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:17:15.231 00:51:27 -- target/tls.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:17:15.231 [2024-12-03 00:51:27.662041] tcp.c:3551:tcp_load_psk: *ERROR*: Incorrect permissions for PSK file 00:17:15.231 [2024-12-03 00:51:27.662067] tcp.c:3620:nvmf_tcp_subsystem_add_host: *ERROR*: Could not retrieve PSK from file 00:17:15.231 [2024-12-03 00:51:27.662083] subsystem.c: 880:spdk_nvmf_subsystem_add_host: *ERROR*: Unable to add host to TCP transport 00:17:15.231 2024/12/03 00:51:27 error on JSON-RPC call, method: nvmf_subsystem_add_host, params: map[host:nqn.2016-06.io.spdk:host1 nqn:nqn.2016-06.io.spdk:cnode1 psk:/home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt], err: error received for nvmf_subsystem_add_host method, err: Code=-32603 Msg=Internal error 00:17:15.231 request: 00:17:15.231 { 00:17:15.231 "method": "nvmf_subsystem_add_host", 00:17:15.231 "params": { 00:17:15.231 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:17:15.231 "host": "nqn.2016-06.io.spdk:host1", 00:17:15.231 "psk": "/home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt" 00:17:15.231 } 00:17:15.231 } 00:17:15.231 Got JSON-RPC error response 00:17:15.231 GoRPCClient: error on JSON-RPC call 00:17:15.231 00:51:27 -- common/autotest_common.sh@653 -- # es=1 00:17:15.231 00:51:27 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:17:15.231 00:51:27 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:17:15.231 00:51:27 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:17:15.231 00:51:27 -- target/tls.sh@189 -- # killprocess 89424 00:17:15.231 00:51:27 -- common/autotest_common.sh@936 -- # '[' -z 89424 ']' 00:17:15.231 00:51:27 -- common/autotest_common.sh@940 -- # kill -0 89424 00:17:15.231 00:51:27 -- common/autotest_common.sh@941 -- # uname 00:17:15.231 00:51:27 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:17:15.231 00:51:27 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 89424 00:17:15.231 00:51:27 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:17:15.231 00:51:27 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:17:15.231 killing process with pid 89424 00:17:15.231 00:51:27 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 89424' 00:17:15.231 00:51:27 -- common/autotest_common.sh@955 -- # kill 89424 00:17:15.231 00:51:27 -- common/autotest_common.sh@960 -- # wait 89424 00:17:15.490 00:51:27 -- target/tls.sh@190 -- # chmod 0600 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:17:15.490 00:51:27 -- target/tls.sh@193 -- # nvmfappstart -m 0x2 00:17:15.490 00:51:27 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:17:15.490 00:51:27 -- common/autotest_common.sh@722 -- # xtrace_disable 00:17:15.490 00:51:27 -- common/autotest_common.sh@10 -- # set +x 00:17:15.490 00:51:27 -- nvmf/common.sh@469 -- # nvmfpid=89540 
00:17:15.490 00:51:27 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:17:15.490 00:51:27 -- nvmf/common.sh@470 -- # waitforlisten 89540 00:17:15.490 00:51:27 -- common/autotest_common.sh@829 -- # '[' -z 89540 ']' 00:17:15.490 00:51:27 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:15.490 00:51:27 -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:15.490 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:15.490 00:51:27 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:15.490 00:51:27 -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:15.490 00:51:27 -- common/autotest_common.sh@10 -- # set +x 00:17:15.748 [2024-12-03 00:51:28.034472] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:17:15.748 [2024-12-03 00:51:28.034568] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:15.748 [2024-12-03 00:51:28.173593] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:15.748 [2024-12-03 00:51:28.245662] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:17:15.748 [2024-12-03 00:51:28.245809] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:15.748 [2024-12-03 00:51:28.245821] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:15.748 [2024-12-03 00:51:28.245829] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:17:15.748 [2024-12-03 00:51:28.245861] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:17:16.683 00:51:28 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:16.683 00:51:28 -- common/autotest_common.sh@862 -- # return 0 00:17:16.683 00:51:28 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:17:16.683 00:51:28 -- common/autotest_common.sh@728 -- # xtrace_disable 00:17:16.683 00:51:28 -- common/autotest_common.sh@10 -- # set +x 00:17:16.683 00:51:28 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:16.683 00:51:28 -- target/tls.sh@194 -- # setup_nvmf_tgt /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:17:16.683 00:51:28 -- target/tls.sh@58 -- # local key=/home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:17:16.684 00:51:28 -- target/tls.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:17:16.942 [2024-12-03 00:51:29.236691] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:16.942 00:51:29 -- target/tls.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:17:17.201 00:51:29 -- target/tls.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:17:17.459 [2024-12-03 00:51:29.784778] tcp.c: 914:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:17:17.459 [2024-12-03 00:51:29.785008] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:17.459 00:51:29 -- target/tls.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:17:17.716 malloc0 00:17:17.716 00:51:30 -- target/tls.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:17:17.975 00:51:30 -- target/tls.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:17:18.235 00:51:30 -- target/tls.sh@196 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:17:18.235 00:51:30 -- target/tls.sh@197 -- # bdevperf_pid=89637 00:17:18.235 00:51:30 -- target/tls.sh@199 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:17:18.235 00:51:30 -- target/tls.sh@200 -- # waitforlisten 89637 /var/tmp/bdevperf.sock 00:17:18.235 00:51:30 -- common/autotest_common.sh@829 -- # '[' -z 89637 ']' 00:17:18.235 00:51:30 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:18.235 00:51:30 -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:18.235 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:17:18.235 00:51:30 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:18.235 00:51:30 -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:18.235 00:51:30 -- common/autotest_common.sh@10 -- # set +x 00:17:18.235 [2024-12-03 00:51:30.615615] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:17:18.235 [2024-12-03 00:51:30.615691] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid89637 ] 00:17:18.494 [2024-12-03 00:51:30.752806] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:18.494 [2024-12-03 00:51:30.826328] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:17:19.431 00:51:31 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:19.431 00:51:31 -- common/autotest_common.sh@862 -- # return 0 00:17:19.431 00:51:31 -- target/tls.sh@201 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:17:19.431 [2024-12-03 00:51:31.794466] bdev_nvme_rpc.c: 477:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:17:19.431 TLSTESTn1 00:17:19.431 00:51:31 -- target/tls.sh@205 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:17:19.690 00:51:32 -- target/tls.sh@205 -- # tgtconf='{ 00:17:19.690 "subsystems": [ 00:17:19.690 { 00:17:19.690 "subsystem": "iobuf", 00:17:19.690 "config": [ 00:17:19.690 { 00:17:19.690 "method": "iobuf_set_options", 00:17:19.690 "params": { 00:17:19.690 "large_bufsize": 135168, 00:17:19.690 "large_pool_count": 1024, 00:17:19.690 "small_bufsize": 8192, 00:17:19.690 "small_pool_count": 8192 00:17:19.690 } 00:17:19.690 } 00:17:19.690 ] 00:17:19.690 }, 00:17:19.690 { 00:17:19.690 "subsystem": "sock", 00:17:19.690 "config": [ 00:17:19.690 { 00:17:19.690 "method": "sock_impl_set_options", 00:17:19.690 "params": { 00:17:19.690 "enable_ktls": false, 00:17:19.690 "enable_placement_id": 0, 00:17:19.690 "enable_quickack": false, 00:17:19.690 "enable_recv_pipe": true, 00:17:19.690 "enable_zerocopy_send_client": false, 00:17:19.690 "enable_zerocopy_send_server": true, 00:17:19.690 "impl_name": "posix", 00:17:19.690 "recv_buf_size": 2097152, 00:17:19.690 "send_buf_size": 2097152, 00:17:19.690 "tls_version": 0, 00:17:19.690 "zerocopy_threshold": 0 00:17:19.690 } 00:17:19.690 }, 00:17:19.690 { 00:17:19.690 "method": "sock_impl_set_options", 00:17:19.690 "params": { 00:17:19.690 "enable_ktls": false, 00:17:19.690 "enable_placement_id": 0, 00:17:19.690 "enable_quickack": false, 00:17:19.690 "enable_recv_pipe": true, 00:17:19.690 "enable_zerocopy_send_client": false, 00:17:19.690 "enable_zerocopy_send_server": true, 00:17:19.690 "impl_name": "ssl", 00:17:19.690 "recv_buf_size": 4096, 00:17:19.690 "send_buf_size": 4096, 00:17:19.690 "tls_version": 0, 00:17:19.690 "zerocopy_threshold": 0 00:17:19.690 } 00:17:19.690 } 00:17:19.690 ] 00:17:19.690 }, 00:17:19.690 { 00:17:19.690 "subsystem": "vmd", 00:17:19.690 "config": [] 00:17:19.690 }, 00:17:19.691 { 00:17:19.691 "subsystem": "accel", 00:17:19.691 "config": [ 00:17:19.691 { 00:17:19.691 "method": "accel_set_options", 00:17:19.691 "params": { 00:17:19.691 "buf_count": 2048, 00:17:19.691 "large_cache_size": 16, 00:17:19.691 "sequence_count": 2048, 00:17:19.691 "small_cache_size": 128, 00:17:19.691 "task_count": 2048 00:17:19.691 } 00:17:19.691 } 00:17:19.691 ] 00:17:19.691 }, 00:17:19.691 { 00:17:19.691 "subsystem": "bdev", 00:17:19.691 "config": [ 00:17:19.691 { 00:17:19.691 "method": "bdev_set_options", 00:17:19.691 "params": { 00:17:19.691 
"bdev_auto_examine": true, 00:17:19.691 "bdev_io_cache_size": 256, 00:17:19.691 "bdev_io_pool_size": 65535, 00:17:19.691 "iobuf_large_cache_size": 16, 00:17:19.691 "iobuf_small_cache_size": 128 00:17:19.691 } 00:17:19.691 }, 00:17:19.691 { 00:17:19.691 "method": "bdev_raid_set_options", 00:17:19.691 "params": { 00:17:19.691 "process_window_size_kb": 1024 00:17:19.691 } 00:17:19.691 }, 00:17:19.691 { 00:17:19.691 "method": "bdev_iscsi_set_options", 00:17:19.691 "params": { 00:17:19.691 "timeout_sec": 30 00:17:19.691 } 00:17:19.691 }, 00:17:19.691 { 00:17:19.691 "method": "bdev_nvme_set_options", 00:17:19.691 "params": { 00:17:19.691 "action_on_timeout": "none", 00:17:19.691 "allow_accel_sequence": false, 00:17:19.691 "arbitration_burst": 0, 00:17:19.691 "bdev_retry_count": 3, 00:17:19.691 "ctrlr_loss_timeout_sec": 0, 00:17:19.691 "delay_cmd_submit": true, 00:17:19.691 "fast_io_fail_timeout_sec": 0, 00:17:19.691 "generate_uuids": false, 00:17:19.691 "high_priority_weight": 0, 00:17:19.691 "io_path_stat": false, 00:17:19.691 "io_queue_requests": 0, 00:17:19.691 "keep_alive_timeout_ms": 10000, 00:17:19.691 "low_priority_weight": 0, 00:17:19.691 "medium_priority_weight": 0, 00:17:19.691 "nvme_adminq_poll_period_us": 10000, 00:17:19.691 "nvme_ioq_poll_period_us": 0, 00:17:19.691 "reconnect_delay_sec": 0, 00:17:19.691 "timeout_admin_us": 0, 00:17:19.691 "timeout_us": 0, 00:17:19.691 "transport_ack_timeout": 0, 00:17:19.691 "transport_retry_count": 4, 00:17:19.691 "transport_tos": 0 00:17:19.691 } 00:17:19.691 }, 00:17:19.691 { 00:17:19.691 "method": "bdev_nvme_set_hotplug", 00:17:19.691 "params": { 00:17:19.691 "enable": false, 00:17:19.691 "period_us": 100000 00:17:19.691 } 00:17:19.691 }, 00:17:19.691 { 00:17:19.691 "method": "bdev_malloc_create", 00:17:19.691 "params": { 00:17:19.691 "block_size": 4096, 00:17:19.691 "name": "malloc0", 00:17:19.691 "num_blocks": 8192, 00:17:19.691 "optimal_io_boundary": 0, 00:17:19.691 "physical_block_size": 4096, 00:17:19.691 "uuid": "6ba7be04-721f-4e3a-8540-c78bee5f8cdd" 00:17:19.691 } 00:17:19.691 }, 00:17:19.691 { 00:17:19.691 "method": "bdev_wait_for_examine" 00:17:19.691 } 00:17:19.691 ] 00:17:19.691 }, 00:17:19.691 { 00:17:19.691 "subsystem": "nbd", 00:17:19.691 "config": [] 00:17:19.691 }, 00:17:19.691 { 00:17:19.691 "subsystem": "scheduler", 00:17:19.691 "config": [ 00:17:19.691 { 00:17:19.691 "method": "framework_set_scheduler", 00:17:19.691 "params": { 00:17:19.691 "name": "static" 00:17:19.691 } 00:17:19.691 } 00:17:19.691 ] 00:17:19.691 }, 00:17:19.691 { 00:17:19.691 "subsystem": "nvmf", 00:17:19.691 "config": [ 00:17:19.691 { 00:17:19.691 "method": "nvmf_set_config", 00:17:19.691 "params": { 00:17:19.691 "admin_cmd_passthru": { 00:17:19.691 "identify_ctrlr": false 00:17:19.691 }, 00:17:19.691 "discovery_filter": "match_any" 00:17:19.691 } 00:17:19.691 }, 00:17:19.691 { 00:17:19.691 "method": "nvmf_set_max_subsystems", 00:17:19.691 "params": { 00:17:19.691 "max_subsystems": 1024 00:17:19.691 } 00:17:19.691 }, 00:17:19.691 { 00:17:19.691 "method": "nvmf_set_crdt", 00:17:19.691 "params": { 00:17:19.691 "crdt1": 0, 00:17:19.691 "crdt2": 0, 00:17:19.691 "crdt3": 0 00:17:19.691 } 00:17:19.691 }, 00:17:19.691 { 00:17:19.691 "method": "nvmf_create_transport", 00:17:19.691 "params": { 00:17:19.691 "abort_timeout_sec": 1, 00:17:19.691 "buf_cache_size": 4294967295, 00:17:19.691 "c2h_success": false, 00:17:19.691 "dif_insert_or_strip": false, 00:17:19.691 "in_capsule_data_size": 4096, 00:17:19.691 "io_unit_size": 131072, 00:17:19.691 "max_aq_depth": 128, 
00:17:19.691 "max_io_qpairs_per_ctrlr": 127, 00:17:19.691 "max_io_size": 131072, 00:17:19.691 "max_queue_depth": 128, 00:17:19.691 "num_shared_buffers": 511, 00:17:19.691 "sock_priority": 0, 00:17:19.691 "trtype": "TCP", 00:17:19.691 "zcopy": false 00:17:19.691 } 00:17:19.691 }, 00:17:19.691 { 00:17:19.691 "method": "nvmf_create_subsystem", 00:17:19.691 "params": { 00:17:19.691 "allow_any_host": false, 00:17:19.691 "ana_reporting": false, 00:17:19.691 "max_cntlid": 65519, 00:17:19.691 "max_namespaces": 10, 00:17:19.691 "min_cntlid": 1, 00:17:19.691 "model_number": "SPDK bdev Controller", 00:17:19.691 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:17:19.691 "serial_number": "SPDK00000000000001" 00:17:19.691 } 00:17:19.691 }, 00:17:19.691 { 00:17:19.691 "method": "nvmf_subsystem_add_host", 00:17:19.691 "params": { 00:17:19.691 "host": "nqn.2016-06.io.spdk:host1", 00:17:19.691 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:17:19.691 "psk": "/home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt" 00:17:19.691 } 00:17:19.691 }, 00:17:19.691 { 00:17:19.691 "method": "nvmf_subsystem_add_ns", 00:17:19.691 "params": { 00:17:19.691 "namespace": { 00:17:19.691 "bdev_name": "malloc0", 00:17:19.691 "nguid": "6BA7BE04721F4E3A8540C78BEE5F8CDD", 00:17:19.691 "nsid": 1, 00:17:19.691 "uuid": "6ba7be04-721f-4e3a-8540-c78bee5f8cdd" 00:17:19.691 }, 00:17:19.691 "nqn": "nqn.2016-06.io.spdk:cnode1" 00:17:19.691 } 00:17:19.691 }, 00:17:19.691 { 00:17:19.691 "method": "nvmf_subsystem_add_listener", 00:17:19.691 "params": { 00:17:19.691 "listen_address": { 00:17:19.691 "adrfam": "IPv4", 00:17:19.691 "traddr": "10.0.0.2", 00:17:19.691 "trsvcid": "4420", 00:17:19.691 "trtype": "TCP" 00:17:19.691 }, 00:17:19.691 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:17:19.691 "secure_channel": true 00:17:19.691 } 00:17:19.691 } 00:17:19.691 ] 00:17:19.691 } 00:17:19.691 ] 00:17:19.691 }' 00:17:19.691 00:51:32 -- target/tls.sh@206 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:17:20.258 00:51:32 -- target/tls.sh@206 -- # bdevperfconf='{ 00:17:20.258 "subsystems": [ 00:17:20.258 { 00:17:20.258 "subsystem": "iobuf", 00:17:20.258 "config": [ 00:17:20.258 { 00:17:20.258 "method": "iobuf_set_options", 00:17:20.258 "params": { 00:17:20.258 "large_bufsize": 135168, 00:17:20.258 "large_pool_count": 1024, 00:17:20.258 "small_bufsize": 8192, 00:17:20.258 "small_pool_count": 8192 00:17:20.258 } 00:17:20.258 } 00:17:20.258 ] 00:17:20.258 }, 00:17:20.258 { 00:17:20.258 "subsystem": "sock", 00:17:20.258 "config": [ 00:17:20.258 { 00:17:20.258 "method": "sock_impl_set_options", 00:17:20.258 "params": { 00:17:20.258 "enable_ktls": false, 00:17:20.258 "enable_placement_id": 0, 00:17:20.258 "enable_quickack": false, 00:17:20.258 "enable_recv_pipe": true, 00:17:20.258 "enable_zerocopy_send_client": false, 00:17:20.258 "enable_zerocopy_send_server": true, 00:17:20.258 "impl_name": "posix", 00:17:20.258 "recv_buf_size": 2097152, 00:17:20.258 "send_buf_size": 2097152, 00:17:20.258 "tls_version": 0, 00:17:20.258 "zerocopy_threshold": 0 00:17:20.258 } 00:17:20.258 }, 00:17:20.258 { 00:17:20.258 "method": "sock_impl_set_options", 00:17:20.258 "params": { 00:17:20.258 "enable_ktls": false, 00:17:20.258 "enable_placement_id": 0, 00:17:20.258 "enable_quickack": false, 00:17:20.258 "enable_recv_pipe": true, 00:17:20.258 "enable_zerocopy_send_client": false, 00:17:20.258 "enable_zerocopy_send_server": true, 00:17:20.258 "impl_name": "ssl", 00:17:20.258 "recv_buf_size": 4096, 00:17:20.258 "send_buf_size": 4096, 00:17:20.258 
"tls_version": 0, 00:17:20.258 "zerocopy_threshold": 0 00:17:20.258 } 00:17:20.258 } 00:17:20.258 ] 00:17:20.258 }, 00:17:20.258 { 00:17:20.258 "subsystem": "vmd", 00:17:20.258 "config": [] 00:17:20.258 }, 00:17:20.258 { 00:17:20.258 "subsystem": "accel", 00:17:20.258 "config": [ 00:17:20.258 { 00:17:20.258 "method": "accel_set_options", 00:17:20.258 "params": { 00:17:20.258 "buf_count": 2048, 00:17:20.258 "large_cache_size": 16, 00:17:20.258 "sequence_count": 2048, 00:17:20.258 "small_cache_size": 128, 00:17:20.258 "task_count": 2048 00:17:20.258 } 00:17:20.258 } 00:17:20.258 ] 00:17:20.258 }, 00:17:20.258 { 00:17:20.258 "subsystem": "bdev", 00:17:20.258 "config": [ 00:17:20.258 { 00:17:20.258 "method": "bdev_set_options", 00:17:20.258 "params": { 00:17:20.258 "bdev_auto_examine": true, 00:17:20.258 "bdev_io_cache_size": 256, 00:17:20.258 "bdev_io_pool_size": 65535, 00:17:20.258 "iobuf_large_cache_size": 16, 00:17:20.258 "iobuf_small_cache_size": 128 00:17:20.258 } 00:17:20.258 }, 00:17:20.258 { 00:17:20.258 "method": "bdev_raid_set_options", 00:17:20.258 "params": { 00:17:20.258 "process_window_size_kb": 1024 00:17:20.258 } 00:17:20.258 }, 00:17:20.258 { 00:17:20.258 "method": "bdev_iscsi_set_options", 00:17:20.258 "params": { 00:17:20.258 "timeout_sec": 30 00:17:20.258 } 00:17:20.258 }, 00:17:20.258 { 00:17:20.258 "method": "bdev_nvme_set_options", 00:17:20.258 "params": { 00:17:20.258 "action_on_timeout": "none", 00:17:20.258 "allow_accel_sequence": false, 00:17:20.258 "arbitration_burst": 0, 00:17:20.258 "bdev_retry_count": 3, 00:17:20.258 "ctrlr_loss_timeout_sec": 0, 00:17:20.258 "delay_cmd_submit": true, 00:17:20.258 "fast_io_fail_timeout_sec": 0, 00:17:20.258 "generate_uuids": false, 00:17:20.258 "high_priority_weight": 0, 00:17:20.258 "io_path_stat": false, 00:17:20.258 "io_queue_requests": 512, 00:17:20.258 "keep_alive_timeout_ms": 10000, 00:17:20.258 "low_priority_weight": 0, 00:17:20.258 "medium_priority_weight": 0, 00:17:20.258 "nvme_adminq_poll_period_us": 10000, 00:17:20.258 "nvme_ioq_poll_period_us": 0, 00:17:20.258 "reconnect_delay_sec": 0, 00:17:20.258 "timeout_admin_us": 0, 00:17:20.258 "timeout_us": 0, 00:17:20.258 "transport_ack_timeout": 0, 00:17:20.258 "transport_retry_count": 4, 00:17:20.258 "transport_tos": 0 00:17:20.258 } 00:17:20.258 }, 00:17:20.258 { 00:17:20.258 "method": "bdev_nvme_attach_controller", 00:17:20.258 "params": { 00:17:20.258 "adrfam": "IPv4", 00:17:20.258 "ctrlr_loss_timeout_sec": 0, 00:17:20.258 "ddgst": false, 00:17:20.258 "fast_io_fail_timeout_sec": 0, 00:17:20.258 "hdgst": false, 00:17:20.258 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:17:20.258 "name": "TLSTEST", 00:17:20.258 "prchk_guard": false, 00:17:20.259 "prchk_reftag": false, 00:17:20.259 "psk": "/home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt", 00:17:20.259 "reconnect_delay_sec": 0, 00:17:20.259 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:17:20.259 "traddr": "10.0.0.2", 00:17:20.259 "trsvcid": "4420", 00:17:20.259 "trtype": "TCP" 00:17:20.259 } 00:17:20.259 }, 00:17:20.259 { 00:17:20.259 "method": "bdev_nvme_set_hotplug", 00:17:20.259 "params": { 00:17:20.259 "enable": false, 00:17:20.259 "period_us": 100000 00:17:20.259 } 00:17:20.259 }, 00:17:20.259 { 00:17:20.259 "method": "bdev_wait_for_examine" 00:17:20.259 } 00:17:20.259 ] 00:17:20.259 }, 00:17:20.259 { 00:17:20.259 "subsystem": "nbd", 00:17:20.259 "config": [] 00:17:20.259 } 00:17:20.259 ] 00:17:20.259 }' 00:17:20.259 00:51:32 -- target/tls.sh@208 -- # killprocess 89637 00:17:20.259 00:51:32 -- 
common/autotest_common.sh@936 -- # '[' -z 89637 ']' 00:17:20.259 00:51:32 -- common/autotest_common.sh@940 -- # kill -0 89637 00:17:20.259 00:51:32 -- common/autotest_common.sh@941 -- # uname 00:17:20.259 00:51:32 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:17:20.259 00:51:32 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 89637 00:17:20.259 killing process with pid 89637 00:17:20.259 Received shutdown signal, test time was about 10.000000 seconds 00:17:20.259 00:17:20.259 Latency(us) 00:17:20.259 [2024-12-03T00:51:32.774Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:20.259 [2024-12-03T00:51:32.774Z] =================================================================================================================== 00:17:20.259 [2024-12-03T00:51:32.774Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:17:20.259 00:51:32 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:17:20.259 00:51:32 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:17:20.259 00:51:32 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 89637' 00:17:20.259 00:51:32 -- common/autotest_common.sh@955 -- # kill 89637 00:17:20.259 00:51:32 -- common/autotest_common.sh@960 -- # wait 89637 00:17:20.259 00:51:32 -- target/tls.sh@209 -- # killprocess 89540 00:17:20.259 00:51:32 -- common/autotest_common.sh@936 -- # '[' -z 89540 ']' 00:17:20.259 00:51:32 -- common/autotest_common.sh@940 -- # kill -0 89540 00:17:20.259 00:51:32 -- common/autotest_common.sh@941 -- # uname 00:17:20.259 00:51:32 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:17:20.259 00:51:32 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 89540 00:17:20.259 killing process with pid 89540 00:17:20.259 00:51:32 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:17:20.259 00:51:32 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:17:20.259 00:51:32 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 89540' 00:17:20.259 00:51:32 -- common/autotest_common.sh@955 -- # kill 89540 00:17:20.259 00:51:32 -- common/autotest_common.sh@960 -- # wait 89540 00:17:20.517 00:51:33 -- target/tls.sh@212 -- # nvmfappstart -m 0x2 -c /dev/fd/62 00:17:20.517 00:51:33 -- target/tls.sh@212 -- # echo '{ 00:17:20.517 "subsystems": [ 00:17:20.517 { 00:17:20.517 "subsystem": "iobuf", 00:17:20.517 "config": [ 00:17:20.517 { 00:17:20.517 "method": "iobuf_set_options", 00:17:20.517 "params": { 00:17:20.517 "large_bufsize": 135168, 00:17:20.517 "large_pool_count": 1024, 00:17:20.517 "small_bufsize": 8192, 00:17:20.517 "small_pool_count": 8192 00:17:20.517 } 00:17:20.517 } 00:17:20.517 ] 00:17:20.517 }, 00:17:20.517 { 00:17:20.517 "subsystem": "sock", 00:17:20.517 "config": [ 00:17:20.517 { 00:17:20.517 "method": "sock_impl_set_options", 00:17:20.517 "params": { 00:17:20.517 "enable_ktls": false, 00:17:20.517 "enable_placement_id": 0, 00:17:20.517 "enable_quickack": false, 00:17:20.518 "enable_recv_pipe": true, 00:17:20.518 "enable_zerocopy_send_client": false, 00:17:20.518 "enable_zerocopy_send_server": true, 00:17:20.518 "impl_name": "posix", 00:17:20.518 "recv_buf_size": 2097152, 00:17:20.518 "send_buf_size": 2097152, 00:17:20.518 "tls_version": 0, 00:17:20.518 "zerocopy_threshold": 0 00:17:20.518 } 00:17:20.518 }, 00:17:20.518 { 00:17:20.518 "method": "sock_impl_set_options", 00:17:20.518 "params": { 00:17:20.518 "enable_ktls": false, 00:17:20.518 "enable_placement_id": 0, 00:17:20.518 
"enable_quickack": false, 00:17:20.518 "enable_recv_pipe": true, 00:17:20.518 "enable_zerocopy_send_client": false, 00:17:20.518 "enable_zerocopy_send_server": true, 00:17:20.518 "impl_name": "ssl", 00:17:20.518 "recv_buf_size": 4096, 00:17:20.518 "send_buf_size": 4096, 00:17:20.518 "tls_version": 0, 00:17:20.518 "zerocopy_threshold": 0 00:17:20.518 } 00:17:20.518 } 00:17:20.518 ] 00:17:20.518 }, 00:17:20.518 { 00:17:20.518 "subsystem": "vmd", 00:17:20.518 "config": [] 00:17:20.518 }, 00:17:20.518 { 00:17:20.518 "subsystem": "accel", 00:17:20.518 "config": [ 00:17:20.518 { 00:17:20.518 "method": "accel_set_options", 00:17:20.518 "params": { 00:17:20.518 "buf_count": 2048, 00:17:20.518 "large_cache_size": 16, 00:17:20.518 "sequence_count": 2048, 00:17:20.518 "small_cache_size": 128, 00:17:20.518 "task_count": 2048 00:17:20.518 } 00:17:20.518 } 00:17:20.518 ] 00:17:20.518 }, 00:17:20.518 { 00:17:20.518 "subsystem": "bdev", 00:17:20.518 "config": [ 00:17:20.518 { 00:17:20.518 "method": "bdev_set_options", 00:17:20.518 "params": { 00:17:20.518 "bdev_auto_examine": true, 00:17:20.518 "bdev_io_cache_size": 256, 00:17:20.518 "bdev_io_pool_size": 65535, 00:17:20.518 "iobuf_large_cache_size": 16, 00:17:20.518 "iobuf_small_cache_size": 128 00:17:20.518 } 00:17:20.518 }, 00:17:20.518 { 00:17:20.518 "method": "bdev_raid_set_options", 00:17:20.518 "params": { 00:17:20.518 "process_window_size_kb": 1024 00:17:20.518 } 00:17:20.518 }, 00:17:20.518 { 00:17:20.518 "method": "bdev_iscsi_set_options", 00:17:20.518 "params": { 00:17:20.518 "timeout_sec": 30 00:17:20.518 } 00:17:20.518 }, 00:17:20.518 { 00:17:20.518 "method": "bdev_nvme_set_options", 00:17:20.518 "params": { 00:17:20.518 "action_on_timeout": "none", 00:17:20.518 "allow_accel_sequence": false, 00:17:20.518 "arbitration_burst": 0, 00:17:20.518 "bdev_retry_count": 3, 00:17:20.518 "ctrlr_loss_timeout_sec": 0, 00:17:20.518 "delay_cmd_submit": true, 00:17:20.518 "fast_io_fail_timeout_sec": 0, 00:17:20.518 "generate_uuids": false, 00:17:20.518 "high_priority_weight": 0, 00:17:20.518 "io_path_stat": false, 00:17:20.518 "io_queue_requests": 0, 00:17:20.518 "keep_alive_timeout_ms": 10000, 00:17:20.518 "low_priority_weight": 0, 00:17:20.518 "medium_priority_weight": 0, 00:17:20.518 "nvme_adminq_poll_period_us": 10000, 00:17:20.518 "nvme_ioq_poll_period_us": 0, 00:17:20.518 "reconnect_delay_sec": 0, 00:17:20.518 "timeout_admin_us": 0, 00:17:20.518 "timeout_us": 0, 00:17:20.518 "transport_ack_timeout": 0, 00:17:20.518 "transport_retry_count": 4, 00:17:20.518 "transport_tos": 0 00:17:20.518 } 00:17:20.518 }, 00:17:20.518 { 00:17:20.518 "method": "bdev_nvme_set_hotplug", 00:17:20.518 "params": { 00:17:20.518 "enable": false, 00:17:20.518 "period_us": 100000 00:17:20.518 } 00:17:20.518 }, 00:17:20.518 { 00:17:20.518 "method": "bdev_malloc_create", 00:17:20.518 "params": { 00:17:20.518 "block_size": 4096, 00:17:20.518 "name": "malloc0", 00:17:20.518 "num_blocks": 8192, 00:17:20.518 "optimal_io_boundary": 0, 00:17:20.518 "physical_block_size": 4096, 00:17:20.518 "uuid": "6ba7be04-721f-4e3a-8540-c78bee5f8cdd" 00:17:20.518 } 00:17:20.518 }, 00:17:20.518 { 00:17:20.518 "method": "bdev_wait_for_examine" 00:17:20.518 } 00:17:20.518 ] 00:17:20.518 }, 00:17:20.518 { 00:17:20.518 "subsystem": "nbd", 00:17:20.518 "config": [] 00:17:20.518 }, 00:17:20.518 { 00:17:20.518 "subsystem": "scheduler", 00:17:20.518 "config": [ 00:17:20.518 { 00:17:20.518 "method": "framework_set_scheduler", 00:17:20.518 "params": { 00:17:20.518 "name": "static" 00:17:20.518 } 00:17:20.518 } 
00:17:20.518 ] 00:17:20.518 }, 00:17:20.518 { 00:17:20.518 "subsystem": "nvmf", 00:17:20.518 "config": [ 00:17:20.518 { 00:17:20.518 "method": "nvmf_set_config", 00:17:20.518 "params": { 00:17:20.518 "admin_cmd_passthru": { 00:17:20.518 "identify_ctrlr": false 00:17:20.518 }, 00:17:20.518 "discovery_filter": "match_any" 00:17:20.518 } 00:17:20.518 }, 00:17:20.518 { 00:17:20.518 "method": "nvmf_set_max_subsystems", 00:17:20.518 "params": { 00:17:20.518 "max_subsystems": 1024 00:17:20.518 } 00:17:20.518 }, 00:17:20.518 { 00:17:20.518 "method": "nvmf_set_crdt", 00:17:20.518 "params": { 00:17:20.518 "crdt1": 0, 00:17:20.518 "crdt2": 0, 00:17:20.518 "crdt3": 0 00:17:20.518 } 00:17:20.518 }, 00:17:20.518 { 00:17:20.518 "method": "nvmf_create_transport", 00:17:20.518 "params": { 00:17:20.518 "abort_timeout_sec": 1, 00:17:20.518 "buf_cache_size": 4294967295, 00:17:20.518 "c2h_success": false, 00:17:20.518 "dif_insert_or_strip": false, 00:17:20.518 "in_capsule_data_size": 4096, 00:17:20.518 "io_unit_size": 131072, 00:17:20.518 "max_aq_depth": 128, 00:17:20.518 "max_io_qpairs_per_ctrlr": 127, 00:17:20.518 "max_io_size": 131072, 00:17:20.518 "max_queue_depth": 128, 00:17:20.518 "num_shared_buffers": 511, 00:17:20.518 "sock_priority": 0, 00:17:20.518 "trtype": "TCP", 00:17:20.518 "zcopy": false 00:17:20.518 } 00:17:20.518 }, 00:17:20.518 { 00:17:20.518 "method": "nvmf_create_subsystem", 00:17:20.518 "params": { 00:17:20.518 "allow_any_host": false, 00:17:20.518 "ana_reporting": false, 00:17:20.518 "max_cntlid": 65519, 00:17:20.518 "max_namespaces": 10, 00:17:20.518 "min_cntlid": 1, 00:17:20.518 "model_number": "SPDK bdev Controller", 00:17:20.518 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:17:20.518 "serial_number": "SPDK00000000000001" 00:17:20.518 } 00:17:20.518 }, 00:17:20.518 { 00:17:20.518 "method": "nvmf_subsystem_add_host", 00:17:20.518 "params": { 00:17:20.518 "host": "nqn.2016-06.io.spdk:host1", 00:17:20.518 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:17:20.518 "psk": "/home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt" 00:17:20.518 } 00:17:20.518 }, 00:17:20.518 { 00:17:20.518 "method": "nvmf_subsystem_add_ns", 00:17:20.518 "params": { 00:17:20.518 "namespace": { 00:17:20.518 "bdev_name": "malloc0", 00:17:20.518 "nguid": "6BA7BE04721F4E3A8540C78BEE5F8CDD", 00:17:20.518 "nsid": 1, 00:17:20.518 "uuid": "6ba7be04-721f-4e3a-8540-c78bee5f8cdd" 00:17:20.518 }, 00:17:20.518 "nqn": "nqn.2016-06.io.spdk:cnode1" 00:17:20.518 } 00:17:20.518 }, 00:17:20.518 { 00:17:20.518 "method": "nvmf_subsystem_add_listener", 00:17:20.518 "params": { 00:17:20.518 "listen_address": { 00:17:20.518 "adrfam": "IPv4", 00:17:20.518 "traddr": "10.0.0.2", 00:17:20.518 "trsvcid": "4420", 00:17:20.518 "trtype": "TCP" 00:17:20.518 }, 00:17:20.518 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:17:20.518 "secure_channel": true 00:17:20.518 } 00:17:20.518 } 00:17:20.518 ] 00:17:20.518 } 00:17:20.518 ] 00:17:20.518 }' 00:17:20.518 00:51:33 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:17:20.518 00:51:33 -- common/autotest_common.sh@722 -- # xtrace_disable 00:17:20.518 00:51:33 -- common/autotest_common.sh@10 -- # set +x 00:17:20.519 00:51:33 -- nvmf/common.sh@469 -- # nvmfpid=89716 00:17:20.519 00:51:33 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 -c /dev/fd/62 00:17:20.519 00:51:33 -- nvmf/common.sh@470 -- # waitforlisten 89716 00:17:20.519 00:51:33 -- common/autotest_common.sh@829 -- # '[' -z 89716 ']' 00:17:20.519 00:51:33 -- 
common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:20.519 00:51:33 -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:20.519 00:51:33 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:20.519 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:20.519 00:51:33 -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:20.519 00:51:33 -- common/autotest_common.sh@10 -- # set +x 00:17:20.780 [2024-12-03 00:51:33.053742] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:17:20.780 [2024-12-03 00:51:33.053826] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:20.780 [2024-12-03 00:51:33.187290] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:20.780 [2024-12-03 00:51:33.243012] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:17:20.780 [2024-12-03 00:51:33.243186] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:20.780 [2024-12-03 00:51:33.243198] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:20.780 [2024-12-03 00:51:33.243206] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:20.780 [2024-12-03 00:51:33.243234] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:17:21.037 [2024-12-03 00:51:33.486636] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:21.037 [2024-12-03 00:51:33.518599] tcp.c: 914:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:17:21.037 [2024-12-03 00:51:33.518835] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:21.604 00:51:33 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:21.604 00:51:33 -- common/autotest_common.sh@862 -- # return 0 00:17:21.604 00:51:33 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:17:21.604 00:51:33 -- common/autotest_common.sh@728 -- # xtrace_disable 00:17:21.604 00:51:33 -- common/autotest_common.sh@10 -- # set +x 00:17:21.604 00:51:34 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:21.604 00:51:34 -- target/tls.sh@216 -- # bdevperf_pid=89760 00:17:21.604 00:51:34 -- target/tls.sh@217 -- # waitforlisten 89760 /var/tmp/bdevperf.sock 00:17:21.604 00:51:34 -- common/autotest_common.sh@829 -- # '[' -z 89760 ']' 00:17:21.604 00:51:34 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:21.604 00:51:34 -- target/tls.sh@213 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -c /dev/fd/63 00:17:21.604 00:51:34 -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:21.604 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:17:21.604 00:51:34 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
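The first JSON document in this section (target/tls.sh@212, piped to nvmf_tgt through /dev/fd/62) is the whole target-side configuration for the secure-channel case: posix and ssl sock implementations, a 32 MiB malloc bdev, and subsystem nqn.2016-06.io.spdk:cnode1 whose host entry carries the long-format PSK and whose TCP listener on 10.0.0.2:4420 is marked secure_channel. A rough interactive equivalent against an already-running target is sketched below; the NQNs, address and key path are copied from that config, but the rpc.py option spellings are from memory and may differ slightly between SPDK releases, so treat them as illustrative only (the iobuf/sock/accel defaults from the full config are also omitted).

    # Sketch only: reproduce the TLS-relevant part of the echoed config via RPC.
    # Option spellings are from memory and may differ by SPDK version.
    RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    KEY=/home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt

    $RPC nvmf_create_transport -t TCP
    $RPC bdev_malloc_create -b malloc0 32 4096          # 8192 blocks * 4096 B = 32 MiB
    $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 \
         -s SPDK00000000000001 -m 10
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0
    $RPC nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 \
         nqn.2016-06.io.spdk:host1 --psk "$KEY"
    $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 \
         -t tcp -a 10.0.0.2 -s 4420 --secure-channel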
00:17:21.604 00:51:34 -- target/tls.sh@213 -- # echo '{ 00:17:21.604 "subsystems": [ 00:17:21.604 { 00:17:21.604 "subsystem": "iobuf", 00:17:21.604 "config": [ 00:17:21.604 { 00:17:21.604 "method": "iobuf_set_options", 00:17:21.604 "params": { 00:17:21.604 "large_bufsize": 135168, 00:17:21.604 "large_pool_count": 1024, 00:17:21.604 "small_bufsize": 8192, 00:17:21.604 "small_pool_count": 8192 00:17:21.604 } 00:17:21.604 } 00:17:21.604 ] 00:17:21.604 }, 00:17:21.604 { 00:17:21.604 "subsystem": "sock", 00:17:21.604 "config": [ 00:17:21.604 { 00:17:21.604 "method": "sock_impl_set_options", 00:17:21.604 "params": { 00:17:21.604 "enable_ktls": false, 00:17:21.604 "enable_placement_id": 0, 00:17:21.604 "enable_quickack": false, 00:17:21.604 "enable_recv_pipe": true, 00:17:21.604 "enable_zerocopy_send_client": false, 00:17:21.604 "enable_zerocopy_send_server": true, 00:17:21.604 "impl_name": "posix", 00:17:21.604 "recv_buf_size": 2097152, 00:17:21.604 "send_buf_size": 2097152, 00:17:21.604 "tls_version": 0, 00:17:21.604 "zerocopy_threshold": 0 00:17:21.604 } 00:17:21.604 }, 00:17:21.604 { 00:17:21.604 "method": "sock_impl_set_options", 00:17:21.604 "params": { 00:17:21.604 "enable_ktls": false, 00:17:21.604 "enable_placement_id": 0, 00:17:21.604 "enable_quickack": false, 00:17:21.604 "enable_recv_pipe": true, 00:17:21.604 "enable_zerocopy_send_client": false, 00:17:21.604 "enable_zerocopy_send_server": true, 00:17:21.604 "impl_name": "ssl", 00:17:21.604 "recv_buf_size": 4096, 00:17:21.604 "send_buf_size": 4096, 00:17:21.604 "tls_version": 0, 00:17:21.604 "zerocopy_threshold": 0 00:17:21.604 } 00:17:21.604 } 00:17:21.604 ] 00:17:21.604 }, 00:17:21.604 { 00:17:21.604 "subsystem": "vmd", 00:17:21.604 "config": [] 00:17:21.604 }, 00:17:21.604 { 00:17:21.604 "subsystem": "accel", 00:17:21.604 "config": [ 00:17:21.604 { 00:17:21.604 "method": "accel_set_options", 00:17:21.604 "params": { 00:17:21.604 "buf_count": 2048, 00:17:21.604 "large_cache_size": 16, 00:17:21.604 "sequence_count": 2048, 00:17:21.604 "small_cache_size": 128, 00:17:21.604 "task_count": 2048 00:17:21.604 } 00:17:21.604 } 00:17:21.604 ] 00:17:21.604 }, 00:17:21.604 { 00:17:21.604 "subsystem": "bdev", 00:17:21.604 "config": [ 00:17:21.604 { 00:17:21.604 "method": "bdev_set_options", 00:17:21.604 "params": { 00:17:21.604 "bdev_auto_examine": true, 00:17:21.604 "bdev_io_cache_size": 256, 00:17:21.604 "bdev_io_pool_size": 65535, 00:17:21.604 "iobuf_large_cache_size": 16, 00:17:21.604 "iobuf_small_cache_size": 128 00:17:21.604 } 00:17:21.604 }, 00:17:21.604 { 00:17:21.604 "method": "bdev_raid_set_options", 00:17:21.604 "params": { 00:17:21.604 "process_window_size_kb": 1024 00:17:21.604 } 00:17:21.604 }, 00:17:21.604 { 00:17:21.604 "method": "bdev_iscsi_set_options", 00:17:21.604 "params": { 00:17:21.604 "timeout_sec": 30 00:17:21.604 } 00:17:21.604 }, 00:17:21.604 { 00:17:21.604 "method": "bdev_nvme_set_options", 00:17:21.604 "params": { 00:17:21.604 "action_on_timeout": "none", 00:17:21.604 "allow_accel_sequence": false, 00:17:21.604 "arbitration_burst": 0, 00:17:21.604 "bdev_retry_count": 3, 00:17:21.604 "ctrlr_loss_timeout_sec": 0, 00:17:21.604 "delay_cmd_submit": true, 00:17:21.604 "fast_io_fail_timeout_sec": 0, 00:17:21.604 "generate_uuids": false, 00:17:21.604 "high_priority_weight": 0, 00:17:21.604 "io_path_stat": false, 00:17:21.604 "io_queue_requests": 512, 00:17:21.604 "keep_alive_timeout_ms": 10000, 00:17:21.604 "low_priority_weight": 0, 00:17:21.604 "medium_priority_weight": 0, 00:17:21.604 "nvme_adminq_poll_period_us": 10000, 
00:17:21.604 "nvme_ioq_poll_period_us": 0, 00:17:21.604 "reconnect_delay_sec": 0, 00:17:21.604 "timeout_admin_us": 0, 00:17:21.604 "timeout_us": 0, 00:17:21.604 "transport_ack_timeout": 0, 00:17:21.604 "transport_retry_count": 4, 00:17:21.604 "transport_tos": 0 00:17:21.604 } 00:17:21.604 }, 00:17:21.604 { 00:17:21.604 "method": "bdev_nvme_attach_controller", 00:17:21.604 "params": { 00:17:21.604 "adrfam": "IPv4", 00:17:21.604 "ctrlr_loss_timeout_sec": 0, 00:17:21.605 "ddgst": false, 00:17:21.605 "fast_io_fail_timeout_sec": 0, 00:17:21.605 "hdgst": false, 00:17:21.605 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:17:21.605 "name": "TLSTEST", 00:17:21.605 "prchk_guard": false, 00:17:21.605 "prchk_reftag": false, 00:17:21.605 "psk": "/home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt", 00:17:21.605 "reconnect_delay_sec": 0, 00:17:21.605 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:17:21.605 "traddr": "10.0.0.2", 00:17:21.605 "trsvcid": "4420", 00:17:21.605 "trtype": "TCP" 00:17:21.605 } 00:17:21.605 }, 00:17:21.605 { 00:17:21.605 "method": "bdev_nvme_set_hotplug", 00:17:21.605 "params": { 00:17:21.605 "enable": false, 00:17:21.605 "period_us": 100000 00:17:21.605 } 00:17:21.605 }, 00:17:21.605 { 00:17:21.605 "method": "bdev_wait_for_examine" 00:17:21.605 } 00:17:21.605 ] 00:17:21.605 }, 00:17:21.605 { 00:17:21.605 "subsystem": "nbd", 00:17:21.605 "config": [] 00:17:21.605 } 00:17:21.605 ] 00:17:21.605 }' 00:17:21.605 00:51:34 -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:21.605 00:51:34 -- common/autotest_common.sh@10 -- # set +x 00:17:21.605 [2024-12-03 00:51:34.053099] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:17:21.605 [2024-12-03 00:51:34.053189] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid89760 ] 00:17:21.864 [2024-12-03 00:51:34.194646] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:21.864 [2024-12-03 00:51:34.252283] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:17:22.129 [2024-12-03 00:51:34.401074] bdev_nvme_rpc.c: 477:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:17:22.696 00:51:35 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:22.696 00:51:35 -- common/autotest_common.sh@862 -- # return 0 00:17:22.696 00:51:35 -- target/tls.sh@220 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:17:22.696 Running I/O for 10 seconds... 
00:17:32.674 00:17:32.674 Latency(us) 00:17:32.674 [2024-12-03T00:51:45.189Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:32.674 [2024-12-03T00:51:45.189Z] Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:17:32.674 Verification LBA range: start 0x0 length 0x2000 00:17:32.674 TLSTESTn1 : 10.01 6581.55 25.71 0.00 0.00 19419.64 3783.21 22997.18 00:17:32.674 [2024-12-03T00:51:45.189Z] =================================================================================================================== 00:17:32.674 [2024-12-03T00:51:45.189Z] Total : 6581.55 25.71 0.00 0.00 19419.64 3783.21 22997.18 00:17:32.674 0 00:17:32.674 00:51:45 -- target/tls.sh@222 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:17:32.674 00:51:45 -- target/tls.sh@223 -- # killprocess 89760 00:17:32.674 00:51:45 -- common/autotest_common.sh@936 -- # '[' -z 89760 ']' 00:17:32.674 00:51:45 -- common/autotest_common.sh@940 -- # kill -0 89760 00:17:32.674 00:51:45 -- common/autotest_common.sh@941 -- # uname 00:17:32.674 00:51:45 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:17:32.674 00:51:45 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 89760 00:17:32.932 00:51:45 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:17:32.932 00:51:45 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:17:32.932 killing process with pid 89760 00:17:32.932 00:51:45 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 89760' 00:17:32.932 Received shutdown signal, test time was about 10.000000 seconds 00:17:32.932 00:17:32.932 Latency(us) 00:17:32.932 [2024-12-03T00:51:45.447Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:32.932 [2024-12-03T00:51:45.447Z] =================================================================================================================== 00:17:32.932 [2024-12-03T00:51:45.447Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:17:32.932 00:51:45 -- common/autotest_common.sh@955 -- # kill 89760 00:17:32.932 00:51:45 -- common/autotest_common.sh@960 -- # wait 89760 00:17:32.932 00:51:45 -- target/tls.sh@224 -- # killprocess 89716 00:17:32.932 00:51:45 -- common/autotest_common.sh@936 -- # '[' -z 89716 ']' 00:17:32.932 00:51:45 -- common/autotest_common.sh@940 -- # kill -0 89716 00:17:32.932 00:51:45 -- common/autotest_common.sh@941 -- # uname 00:17:32.932 00:51:45 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:17:32.932 00:51:45 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 89716 00:17:32.932 00:51:45 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:17:32.932 00:51:45 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:17:32.932 killing process with pid 89716 00:17:32.932 00:51:45 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 89716' 00:17:32.932 00:51:45 -- common/autotest_common.sh@955 -- # kill 89716 00:17:32.932 00:51:45 -- common/autotest_common.sh@960 -- # wait 89716 00:17:33.189 00:51:45 -- target/tls.sh@226 -- # trap - SIGINT SIGTERM EXIT 00:17:33.189 00:51:45 -- target/tls.sh@227 -- # cleanup 00:17:33.189 00:51:45 -- target/tls.sh@15 -- # process_shm --id 0 00:17:33.189 00:51:45 -- common/autotest_common.sh@806 -- # type=--id 00:17:33.189 00:51:45 -- common/autotest_common.sh@807 -- # id=0 00:17:33.189 00:51:45 -- common/autotest_common.sh@808 -- # '[' --id = --pid ']' 00:17:33.189 00:51:45 -- common/autotest_common.sh@812 -- # find /dev/shm -name '*.0' 
-printf '%f\n' 00:17:33.189 00:51:45 -- common/autotest_common.sh@812 -- # shm_files=nvmf_trace.0 00:17:33.189 00:51:45 -- common/autotest_common.sh@814 -- # [[ -z nvmf_trace.0 ]] 00:17:33.189 00:51:45 -- common/autotest_common.sh@818 -- # for n in $shm_files 00:17:33.189 00:51:45 -- common/autotest_common.sh@819 -- # tar -C /dev/shm/ -cvzf /home/vagrant/spdk_repo/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:17:33.189 nvmf_trace.0 00:17:33.447 00:51:45 -- common/autotest_common.sh@821 -- # return 0 00:17:33.447 00:51:45 -- target/tls.sh@16 -- # killprocess 89760 00:17:33.447 00:51:45 -- common/autotest_common.sh@936 -- # '[' -z 89760 ']' 00:17:33.447 00:51:45 -- common/autotest_common.sh@940 -- # kill -0 89760 00:17:33.447 Process with pid 89760 is not found 00:17:33.447 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 940: kill: (89760) - No such process 00:17:33.447 00:51:45 -- common/autotest_common.sh@963 -- # echo 'Process with pid 89760 is not found' 00:17:33.447 00:51:45 -- target/tls.sh@17 -- # nvmftestfini 00:17:33.447 00:51:45 -- nvmf/common.sh@476 -- # nvmfcleanup 00:17:33.447 00:51:45 -- nvmf/common.sh@116 -- # sync 00:17:33.447 00:51:45 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:17:33.447 00:51:45 -- nvmf/common.sh@119 -- # set +e 00:17:33.447 00:51:45 -- nvmf/common.sh@120 -- # for i in {1..20} 00:17:33.447 00:51:45 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:17:33.447 rmmod nvme_tcp 00:17:33.447 rmmod nvme_fabrics 00:17:33.447 rmmod nvme_keyring 00:17:33.447 00:51:45 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:17:33.447 00:51:45 -- nvmf/common.sh@123 -- # set -e 00:17:33.447 00:51:45 -- nvmf/common.sh@124 -- # return 0 00:17:33.447 00:51:45 -- nvmf/common.sh@477 -- # '[' -n 89716 ']' 00:17:33.447 00:51:45 -- nvmf/common.sh@478 -- # killprocess 89716 00:17:33.447 00:51:45 -- common/autotest_common.sh@936 -- # '[' -z 89716 ']' 00:17:33.447 00:51:45 -- common/autotest_common.sh@940 -- # kill -0 89716 00:17:33.447 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 940: kill: (89716) - No such process 00:17:33.447 Process with pid 89716 is not found 00:17:33.447 00:51:45 -- common/autotest_common.sh@963 -- # echo 'Process with pid 89716 is not found' 00:17:33.447 00:51:45 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:17:33.447 00:51:45 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:17:33.447 00:51:45 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:17:33.447 00:51:45 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:17:33.447 00:51:45 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:17:33.447 00:51:45 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:33.447 00:51:45 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:33.447 00:51:45 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:33.447 00:51:45 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:17:33.447 00:51:45 -- target/tls.sh@18 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt /home/vagrant/spdk_repo/spdk/test/nvmf/target/key2.txt /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:17:33.447 00:17:33.447 real 1m10.059s 00:17:33.447 user 1m44.029s 00:17:33.447 sys 0m27.165s 00:17:33.447 00:51:45 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:17:33.447 00:51:45 -- common/autotest_common.sh@10 -- # set +x 00:17:33.447 ************************************ 00:17:33.447 END TEST nvmf_tls 00:17:33.447 
************************************ 00:17:33.447 00:51:45 -- nvmf/nvmf.sh@60 -- # run_test nvmf_fips /home/vagrant/spdk_repo/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:17:33.447 00:51:45 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:17:33.447 00:51:45 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:17:33.447 00:51:45 -- common/autotest_common.sh@10 -- # set +x 00:17:33.447 ************************************ 00:17:33.447 START TEST nvmf_fips 00:17:33.447 ************************************ 00:17:33.447 00:51:45 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:17:33.705 * Looking for test storage... 00:17:33.705 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/fips 00:17:33.705 00:51:46 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:17:33.705 00:51:46 -- common/autotest_common.sh@1690 -- # lcov --version 00:17:33.705 00:51:46 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:17:33.705 00:51:46 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:17:33.705 00:51:46 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:17:33.705 00:51:46 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:17:33.705 00:51:46 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:17:33.705 00:51:46 -- scripts/common.sh@335 -- # IFS=.-: 00:17:33.705 00:51:46 -- scripts/common.sh@335 -- # read -ra ver1 00:17:33.705 00:51:46 -- scripts/common.sh@336 -- # IFS=.-: 00:17:33.705 00:51:46 -- scripts/common.sh@336 -- # read -ra ver2 00:17:33.705 00:51:46 -- scripts/common.sh@337 -- # local 'op=<' 00:17:33.705 00:51:46 -- scripts/common.sh@339 -- # ver1_l=2 00:17:33.705 00:51:46 -- scripts/common.sh@340 -- # ver2_l=1 00:17:33.705 00:51:46 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:17:33.705 00:51:46 -- scripts/common.sh@343 -- # case "$op" in 00:17:33.705 00:51:46 -- scripts/common.sh@344 -- # : 1 00:17:33.705 00:51:46 -- scripts/common.sh@363 -- # (( v = 0 )) 00:17:33.705 00:51:46 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:17:33.705 00:51:46 -- scripts/common.sh@364 -- # decimal 1 00:17:33.705 00:51:46 -- scripts/common.sh@352 -- # local d=1 00:17:33.705 00:51:46 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:33.705 00:51:46 -- scripts/common.sh@354 -- # echo 1 00:17:33.705 00:51:46 -- scripts/common.sh@364 -- # ver1[v]=1 00:17:33.705 00:51:46 -- scripts/common.sh@365 -- # decimal 2 00:17:33.705 00:51:46 -- scripts/common.sh@352 -- # local d=2 00:17:33.705 00:51:46 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:33.705 00:51:46 -- scripts/common.sh@354 -- # echo 2 00:17:33.705 00:51:46 -- scripts/common.sh@365 -- # ver2[v]=2 00:17:33.705 00:51:46 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:17:33.705 00:51:46 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:17:33.705 00:51:46 -- scripts/common.sh@367 -- # return 0 00:17:33.705 00:51:46 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:33.705 00:51:46 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:17:33.705 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:33.705 --rc genhtml_branch_coverage=1 00:17:33.705 --rc genhtml_function_coverage=1 00:17:33.705 --rc genhtml_legend=1 00:17:33.705 --rc geninfo_all_blocks=1 00:17:33.705 --rc geninfo_unexecuted_blocks=1 00:17:33.705 00:17:33.705 ' 00:17:33.705 00:51:46 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:17:33.705 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:33.705 --rc genhtml_branch_coverage=1 00:17:33.705 --rc genhtml_function_coverage=1 00:17:33.705 --rc genhtml_legend=1 00:17:33.705 --rc geninfo_all_blocks=1 00:17:33.705 --rc geninfo_unexecuted_blocks=1 00:17:33.705 00:17:33.705 ' 00:17:33.705 00:51:46 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:17:33.705 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:33.705 --rc genhtml_branch_coverage=1 00:17:33.705 --rc genhtml_function_coverage=1 00:17:33.705 --rc genhtml_legend=1 00:17:33.705 --rc geninfo_all_blocks=1 00:17:33.705 --rc geninfo_unexecuted_blocks=1 00:17:33.705 00:17:33.705 ' 00:17:33.705 00:51:46 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:17:33.705 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:33.705 --rc genhtml_branch_coverage=1 00:17:33.705 --rc genhtml_function_coverage=1 00:17:33.705 --rc genhtml_legend=1 00:17:33.705 --rc geninfo_all_blocks=1 00:17:33.705 --rc geninfo_unexecuted_blocks=1 00:17:33.705 00:17:33.705 ' 00:17:33.705 00:51:46 -- fips/fips.sh@11 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:17:33.705 00:51:46 -- nvmf/common.sh@7 -- # uname -s 00:17:33.705 00:51:46 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:33.705 00:51:46 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:33.705 00:51:46 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:33.705 00:51:46 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:33.705 00:51:46 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:33.705 00:51:46 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:33.705 00:51:46 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:33.705 00:51:46 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:33.705 00:51:46 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:33.705 00:51:46 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:33.705 00:51:46 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:15939434-fa82-47b6-ae7c-b62ba203cee8 00:17:33.705 
00:51:46 -- nvmf/common.sh@18 -- # NVME_HOSTID=15939434-fa82-47b6-ae7c-b62ba203cee8 00:17:33.705 00:51:46 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:33.705 00:51:46 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:33.705 00:51:46 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:17:33.705 00:51:46 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:17:33.705 00:51:46 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:33.705 00:51:46 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:33.705 00:51:46 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:33.705 00:51:46 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:33.705 00:51:46 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:33.705 00:51:46 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:33.705 00:51:46 -- paths/export.sh@5 -- # export PATH 00:17:33.705 00:51:46 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:33.705 00:51:46 -- nvmf/common.sh@46 -- # : 0 00:17:33.705 00:51:46 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:17:33.705 00:51:46 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:17:33.705 00:51:46 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:17:33.705 00:51:46 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:33.705 00:51:46 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:33.705 00:51:46 -- nvmf/common.sh@32 -- # '[' -n '' ']' 
00:17:33.705 00:51:46 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:17:33.705 00:51:46 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:17:33.705 00:51:46 -- fips/fips.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:17:33.705 00:51:46 -- fips/fips.sh@89 -- # check_openssl_version 00:17:33.705 00:51:46 -- fips/fips.sh@83 -- # local target=3.0.0 00:17:33.705 00:51:46 -- fips/fips.sh@85 -- # openssl version 00:17:33.705 00:51:46 -- fips/fips.sh@85 -- # awk '{print $2}' 00:17:33.705 00:51:46 -- fips/fips.sh@85 -- # ge 3.1.1 3.0.0 00:17:33.705 00:51:46 -- scripts/common.sh@375 -- # cmp_versions 3.1.1 '>=' 3.0.0 00:17:33.705 00:51:46 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:17:33.705 00:51:46 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:17:33.705 00:51:46 -- scripts/common.sh@335 -- # IFS=.-: 00:17:33.705 00:51:46 -- scripts/common.sh@335 -- # read -ra ver1 00:17:33.705 00:51:46 -- scripts/common.sh@336 -- # IFS=.-: 00:17:33.705 00:51:46 -- scripts/common.sh@336 -- # read -ra ver2 00:17:33.705 00:51:46 -- scripts/common.sh@337 -- # local 'op=>=' 00:17:33.705 00:51:46 -- scripts/common.sh@339 -- # ver1_l=3 00:17:33.705 00:51:46 -- scripts/common.sh@340 -- # ver2_l=3 00:17:33.705 00:51:46 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:17:33.705 00:51:46 -- scripts/common.sh@343 -- # case "$op" in 00:17:33.705 00:51:46 -- scripts/common.sh@347 -- # : 1 00:17:33.705 00:51:46 -- scripts/common.sh@363 -- # (( v = 0 )) 00:17:33.705 00:51:46 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:17:33.705 00:51:46 -- scripts/common.sh@364 -- # decimal 3 00:17:33.705 00:51:46 -- scripts/common.sh@352 -- # local d=3 00:17:33.705 00:51:46 -- scripts/common.sh@353 -- # [[ 3 =~ ^[0-9]+$ ]] 00:17:33.705 00:51:46 -- scripts/common.sh@354 -- # echo 3 00:17:33.705 00:51:46 -- scripts/common.sh@364 -- # ver1[v]=3 00:17:33.705 00:51:46 -- scripts/common.sh@365 -- # decimal 3 00:17:33.705 00:51:46 -- scripts/common.sh@352 -- # local d=3 00:17:33.705 00:51:46 -- scripts/common.sh@353 -- # [[ 3 =~ ^[0-9]+$ ]] 00:17:33.705 00:51:46 -- scripts/common.sh@354 -- # echo 3 00:17:33.705 00:51:46 -- scripts/common.sh@365 -- # ver2[v]=3 00:17:33.705 00:51:46 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:17:33.705 00:51:46 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:17:33.705 00:51:46 -- scripts/common.sh@363 -- # (( v++ )) 00:17:33.705 00:51:46 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:17:33.705 00:51:46 -- scripts/common.sh@364 -- # decimal 1 00:17:33.705 00:51:46 -- scripts/common.sh@352 -- # local d=1 00:17:33.705 00:51:46 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:33.705 00:51:46 -- scripts/common.sh@354 -- # echo 1 00:17:33.705 00:51:46 -- scripts/common.sh@364 -- # ver1[v]=1 00:17:33.705 00:51:46 -- scripts/common.sh@365 -- # decimal 0 00:17:33.705 00:51:46 -- scripts/common.sh@352 -- # local d=0 00:17:33.705 00:51:46 -- scripts/common.sh@353 -- # [[ 0 =~ ^[0-9]+$ ]] 00:17:33.705 00:51:46 -- scripts/common.sh@354 -- # echo 0 00:17:33.705 00:51:46 -- scripts/common.sh@365 -- # ver2[v]=0 00:17:33.705 00:51:46 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:17:33.705 00:51:46 -- scripts/common.sh@366 -- # return 0 00:17:33.705 00:51:46 -- fips/fips.sh@95 -- # openssl info -modulesdir 00:17:33.705 00:51:46 -- fips/fips.sh@95 -- # [[ ! 
-f /usr/lib64/ossl-modules/fips.so ]] 00:17:33.705 00:51:46 -- fips/fips.sh@100 -- # openssl fipsinstall -help 00:17:33.964 00:51:46 -- fips/fips.sh@100 -- # warn='This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode' 00:17:33.964 00:51:46 -- fips/fips.sh@101 -- # [[ This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode == \T\h\i\s\ \c\o\m\m\a\n\d\ \i\s\ \n\o\t\ \e\n\a\b\l\e\d* ]] 00:17:33.964 00:51:46 -- fips/fips.sh@104 -- # export callback=build_openssl_config 00:17:33.964 00:51:46 -- fips/fips.sh@104 -- # callback=build_openssl_config 00:17:33.964 00:51:46 -- fips/fips.sh@113 -- # build_openssl_config 00:17:33.964 00:51:46 -- fips/fips.sh@37 -- # cat 00:17:33.964 00:51:46 -- fips/fips.sh@57 -- # [[ ! -t 0 ]] 00:17:33.964 00:51:46 -- fips/fips.sh@58 -- # cat - 00:17:33.964 00:51:46 -- fips/fips.sh@114 -- # export OPENSSL_CONF=spdk_fips.conf 00:17:33.964 00:51:46 -- fips/fips.sh@114 -- # OPENSSL_CONF=spdk_fips.conf 00:17:33.964 00:51:46 -- fips/fips.sh@116 -- # mapfile -t providers 00:17:33.964 00:51:46 -- fips/fips.sh@116 -- # openssl list -providers 00:17:33.964 00:51:46 -- fips/fips.sh@116 -- # grep name 00:17:33.964 00:51:46 -- fips/fips.sh@120 -- # (( 2 != 2 )) 00:17:33.964 00:51:46 -- fips/fips.sh@120 -- # [[ name: openssl base provider != *base* ]] 00:17:33.964 00:51:46 -- fips/fips.sh@120 -- # [[ name: red hat enterprise linux 9 - openssl fips provider != *fips* ]] 00:17:33.964 00:51:46 -- fips/fips.sh@127 -- # NOT openssl md5 /dev/fd/62 00:17:33.964 00:51:46 -- common/autotest_common.sh@650 -- # local es=0 00:17:33.964 00:51:46 -- common/autotest_common.sh@652 -- # valid_exec_arg openssl md5 /dev/fd/62 00:17:33.964 00:51:46 -- common/autotest_common.sh@638 -- # local arg=openssl 00:17:33.964 00:51:46 -- fips/fips.sh@127 -- # : 00:17:33.964 00:51:46 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:33.964 00:51:46 -- common/autotest_common.sh@642 -- # type -t openssl 00:17:33.964 00:51:46 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:33.964 00:51:46 -- common/autotest_common.sh@644 -- # type -P openssl 00:17:33.964 00:51:46 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:33.964 00:51:46 -- common/autotest_common.sh@644 -- # arg=/usr/bin/openssl 00:17:33.964 00:51:46 -- common/autotest_common.sh@644 -- # [[ -x /usr/bin/openssl ]] 00:17:33.965 00:51:46 -- common/autotest_common.sh@653 -- # openssl md5 /dev/fd/62 00:17:33.965 Error setting digest 00:17:33.965 40D2CE31387F0000:error:0308010C:digital envelope routines:inner_evp_generic_fetch:unsupported:crypto/evp/evp_fetch.c:341:Global default library context, Algorithm (MD5 : 95), Properties () 00:17:33.965 40D2CE31387F0000:error:03000086:digital envelope routines:evp_md_init_internal:initialization error:crypto/evp/digest.c:272: 00:17:33.965 00:51:46 -- common/autotest_common.sh@653 -- # es=1 00:17:33.965 00:51:46 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:17:33.965 00:51:46 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:17:33.965 00:51:46 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:17:33.965 00:51:46 -- fips/fips.sh@130 -- # nvmftestinit 00:17:33.965 00:51:46 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:17:33.965 00:51:46 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:33.965 00:51:46 -- nvmf/common.sh@436 -- # prepare_net_devs 
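The fips.sh preamble traced above reduces to a handful of openssl probes: the version must be >= 3.0.0, a fips.so module must exist under the reported modulesdir, `openssl list -providers` must show both a base and a fips provider, and -- as the "Error setting digest" lines confirm -- a FIPS-disallowed digest such as MD5 must be refused once OPENSSL_CONF points at the generated spdk_fips.conf. A condensed, stand-alone sketch of that check (same commands as the trace, minus the xtrace plumbing) might look like:

    # Condensed sketch of the fips.sh@83-127 preamble: confirm the host really
    # enforces FIPS before trusting any TLS results.
    set -e
    ver=$(openssl version | awk '{print $2}')             # expect >= 3.0.0
    test -f "$(openssl info -modulesdir)/fips.so"         # FIPS module installed
    openssl list -providers | grep -i 'name:'             # want "base" and "fips"
    if OPENSSL_CONF=spdk_fips.conf openssl md5 /dev/null 2>/dev/null; then
        echo "MD5 still usable - FIPS mode is NOT active" >&2; exit 1
    fi
    echo "openssl $ver: FIPS provider active (MD5 rejected as expected)"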
00:17:33.965 00:51:46 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:17:33.965 00:51:46 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:17:33.965 00:51:46 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:33.965 00:51:46 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:33.965 00:51:46 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:33.965 00:51:46 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:17:33.965 00:51:46 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:17:33.965 00:51:46 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:17:33.965 00:51:46 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:17:33.965 00:51:46 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:17:33.965 00:51:46 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:17:33.965 00:51:46 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:33.965 00:51:46 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:33.965 00:51:46 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:17:33.965 00:51:46 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:17:33.965 00:51:46 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:17:33.965 00:51:46 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:17:33.965 00:51:46 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:17:33.965 00:51:46 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:33.965 00:51:46 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:17:33.965 00:51:46 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:17:33.965 00:51:46 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:17:33.965 00:51:46 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:17:33.965 00:51:46 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:17:33.965 00:51:46 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:17:33.965 Cannot find device "nvmf_tgt_br" 00:17:33.965 00:51:46 -- nvmf/common.sh@154 -- # true 00:17:33.965 00:51:46 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:17:33.965 Cannot find device "nvmf_tgt_br2" 00:17:33.965 00:51:46 -- nvmf/common.sh@155 -- # true 00:17:33.965 00:51:46 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:17:33.965 00:51:46 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:17:33.965 Cannot find device "nvmf_tgt_br" 00:17:33.965 00:51:46 -- nvmf/common.sh@157 -- # true 00:17:33.965 00:51:46 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:17:33.965 Cannot find device "nvmf_tgt_br2" 00:17:33.965 00:51:46 -- nvmf/common.sh@158 -- # true 00:17:33.965 00:51:46 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:17:33.965 00:51:46 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:17:33.965 00:51:46 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:17:33.965 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:33.965 00:51:46 -- nvmf/common.sh@161 -- # true 00:17:33.965 00:51:46 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:17:33.965 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:33.965 00:51:46 -- nvmf/common.sh@162 -- # true 00:17:33.965 00:51:46 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:17:33.965 00:51:46 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:17:33.965 00:51:46 
-- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:17:33.965 00:51:46 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:17:34.224 00:51:46 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:17:34.224 00:51:46 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:17:34.224 00:51:46 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:17:34.224 00:51:46 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:17:34.224 00:51:46 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:17:34.224 00:51:46 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:17:34.224 00:51:46 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:17:34.224 00:51:46 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:17:34.224 00:51:46 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:17:34.224 00:51:46 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:17:34.224 00:51:46 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:17:34.224 00:51:46 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:17:34.224 00:51:46 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:17:34.224 00:51:46 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:17:34.224 00:51:46 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:17:34.224 00:51:46 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:17:34.224 00:51:46 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:17:34.224 00:51:46 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:17:34.224 00:51:46 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:17:34.224 00:51:46 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:17:34.224 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:34.224 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.088 ms 00:17:34.224 00:17:34.224 --- 10.0.0.2 ping statistics --- 00:17:34.224 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:34.224 rtt min/avg/max/mdev = 0.088/0.088/0.088/0.000 ms 00:17:34.224 00:51:46 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:17:34.224 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:17:34.224 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.042 ms 00:17:34.224 00:17:34.224 --- 10.0.0.3 ping statistics --- 00:17:34.224 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:34.224 rtt min/avg/max/mdev = 0.042/0.042/0.042/0.000 ms 00:17:34.224 00:51:46 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:17:34.224 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:17:34.224 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.028 ms 00:17:34.224 00:17:34.224 --- 10.0.0.1 ping statistics --- 00:17:34.224 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:34.224 rtt min/avg/max/mdev = 0.028/0.028/0.028/0.000 ms 00:17:34.224 00:51:46 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:34.224 00:51:46 -- nvmf/common.sh@421 -- # return 0 00:17:34.224 00:51:46 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:17:34.224 00:51:46 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:34.224 00:51:46 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:17:34.224 00:51:46 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:17:34.224 00:51:46 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:34.224 00:51:46 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:17:34.224 00:51:46 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:17:34.224 00:51:46 -- fips/fips.sh@131 -- # nvmfappstart -m 0x2 00:17:34.224 00:51:46 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:17:34.224 00:51:46 -- common/autotest_common.sh@722 -- # xtrace_disable 00:17:34.224 00:51:46 -- common/autotest_common.sh@10 -- # set +x 00:17:34.224 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:34.224 00:51:46 -- nvmf/common.sh@469 -- # nvmfpid=90123 00:17:34.224 00:51:46 -- nvmf/common.sh@470 -- # waitforlisten 90123 00:17:34.224 00:51:46 -- common/autotest_common.sh@829 -- # '[' -z 90123 ']' 00:17:34.224 00:51:46 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:17:34.224 00:51:46 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:34.224 00:51:46 -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:34.224 00:51:46 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:34.224 00:51:46 -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:34.224 00:51:46 -- common/autotest_common.sh@10 -- # set +x 00:17:34.483 [2024-12-03 00:51:46.770067] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:17:34.483 [2024-12-03 00:51:46.770167] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:34.483 [2024-12-03 00:51:46.908448] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:34.483 [2024-12-03 00:51:46.975673] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:17:34.483 [2024-12-03 00:51:46.975817] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:34.483 [2024-12-03 00:51:46.975829] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:34.483 [2024-12-03 00:51:46.975838] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:17:34.483 [2024-12-03 00:51:46.975870] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:17:35.420 00:51:47 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:35.420 00:51:47 -- common/autotest_common.sh@862 -- # return 0 00:17:35.420 00:51:47 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:17:35.420 00:51:47 -- common/autotest_common.sh@728 -- # xtrace_disable 00:17:35.420 00:51:47 -- common/autotest_common.sh@10 -- # set +x 00:17:35.420 00:51:47 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:35.420 00:51:47 -- fips/fips.sh@133 -- # trap cleanup EXIT 00:17:35.420 00:51:47 -- fips/fips.sh@136 -- # key=NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:17:35.420 00:51:47 -- fips/fips.sh@137 -- # key_path=/home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt 00:17:35.420 00:51:47 -- fips/fips.sh@138 -- # echo -n NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:17:35.420 00:51:47 -- fips/fips.sh@139 -- # chmod 0600 /home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt 00:17:35.420 00:51:47 -- fips/fips.sh@141 -- # setup_nvmf_tgt_conf /home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt 00:17:35.420 00:51:47 -- fips/fips.sh@22 -- # local key=/home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt 00:17:35.420 00:51:47 -- fips/fips.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:17:35.680 [2024-12-03 00:51:48.037152] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:35.680 [2024-12-03 00:51:48.053122] tcp.c: 914:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:17:35.680 [2024-12-03 00:51:48.053339] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:35.680 malloc0 00:17:35.680 00:51:48 -- fips/fips.sh@144 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:17:35.680 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:17:35.680 00:51:48 -- fips/fips.sh@147 -- # bdevperf_pid=90175 00:17:35.680 00:51:48 -- fips/fips.sh@145 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:17:35.680 00:51:48 -- fips/fips.sh@148 -- # waitforlisten 90175 /var/tmp/bdevperf.sock 00:17:35.680 00:51:48 -- common/autotest_common.sh@829 -- # '[' -z 90175 ']' 00:17:35.680 00:51:48 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:35.680 00:51:48 -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:35.680 00:51:48 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:35.680 00:51:48 -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:35.680 00:51:48 -- common/autotest_common.sh@10 -- # set +x 00:17:35.680 [2024-12-03 00:51:48.186964] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:17:35.680 [2024-12-03 00:51:48.187057] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid90175 ] 00:17:35.939 [2024-12-03 00:51:48.329715] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:35.940 [2024-12-03 00:51:48.395735] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:17:36.877 00:51:49 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:36.877 00:51:49 -- common/autotest_common.sh@862 -- # return 0 00:17:36.877 00:51:49 -- fips/fips.sh@150 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt 00:17:36.877 [2024-12-03 00:51:49.320729] bdev_nvme_rpc.c: 477:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:17:36.877 TLSTESTn1 00:17:37.137 00:51:49 -- fips/fips.sh@154 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:17:37.137 Running I/O for 10 seconds... 00:17:47.112 00:17:47.112 Latency(us) 00:17:47.112 [2024-12-03T00:51:59.627Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:47.112 [2024-12-03T00:51:59.627Z] Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:17:47.112 Verification LBA range: start 0x0 length 0x2000 00:17:47.112 TLSTESTn1 : 10.01 6348.37 24.80 0.00 0.00 20132.32 4825.83 24784.52 00:17:47.112 [2024-12-03T00:51:59.627Z] =================================================================================================================== 00:17:47.112 [2024-12-03T00:51:59.627Z] Total : 6348.37 24.80 0.00 0.00 20132.32 4825.83 24784.52 00:17:47.112 0 00:17:47.112 00:51:59 -- fips/fips.sh@1 -- # cleanup 00:17:47.112 00:51:59 -- fips/fips.sh@15 -- # process_shm --id 0 00:17:47.112 00:51:59 -- common/autotest_common.sh@806 -- # type=--id 00:17:47.112 00:51:59 -- common/autotest_common.sh@807 -- # id=0 00:17:47.112 00:51:59 -- common/autotest_common.sh@808 -- # '[' --id = --pid ']' 00:17:47.112 00:51:59 -- common/autotest_common.sh@812 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:17:47.112 00:51:59 -- common/autotest_common.sh@812 -- # shm_files=nvmf_trace.0 00:17:47.112 00:51:59 -- common/autotest_common.sh@814 -- # [[ -z nvmf_trace.0 ]] 00:17:47.112 00:51:59 -- common/autotest_common.sh@818 -- # for n in $shm_files 00:17:47.112 00:51:59 -- common/autotest_common.sh@819 -- # tar -C /dev/shm/ -cvzf /home/vagrant/spdk_repo/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:17:47.112 nvmf_trace.0 00:17:47.112 00:51:59 -- common/autotest_common.sh@821 -- # return 0 00:17:47.112 00:51:59 -- fips/fips.sh@16 -- # killprocess 90175 00:17:47.112 00:51:59 -- common/autotest_common.sh@936 -- # '[' -z 90175 ']' 00:17:47.112 00:51:59 -- common/autotest_common.sh@940 -- # kill -0 90175 00:17:47.112 00:51:59 -- common/autotest_common.sh@941 -- # uname 00:17:47.112 00:51:59 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:17:47.112 00:51:59 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 90175 00:17:47.112 killing process with pid 90175 00:17:47.112 Received shutdown signal, test time was about 10.000000 seconds 00:17:47.112 00:17:47.112 Latency(us) 00:17:47.112 
[2024-12-03T00:51:59.627Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:47.112 [2024-12-03T00:51:59.627Z] =================================================================================================================== 00:17:47.112 [2024-12-03T00:51:59.627Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:17:47.112 00:51:59 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:17:47.112 00:51:59 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:17:47.112 00:51:59 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 90175' 00:17:47.112 00:51:59 -- common/autotest_common.sh@955 -- # kill 90175 00:17:47.112 00:51:59 -- common/autotest_common.sh@960 -- # wait 90175 00:17:47.371 00:51:59 -- fips/fips.sh@17 -- # nvmftestfini 00:17:47.371 00:51:59 -- nvmf/common.sh@476 -- # nvmfcleanup 00:17:47.371 00:51:59 -- nvmf/common.sh@116 -- # sync 00:17:47.371 00:51:59 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:17:47.371 00:51:59 -- nvmf/common.sh@119 -- # set +e 00:17:47.371 00:51:59 -- nvmf/common.sh@120 -- # for i in {1..20} 00:17:47.371 00:51:59 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:17:47.371 rmmod nvme_tcp 00:17:47.371 rmmod nvme_fabrics 00:17:47.630 rmmod nvme_keyring 00:17:47.630 00:51:59 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:17:47.630 00:51:59 -- nvmf/common.sh@123 -- # set -e 00:17:47.630 00:51:59 -- nvmf/common.sh@124 -- # return 0 00:17:47.630 00:51:59 -- nvmf/common.sh@477 -- # '[' -n 90123 ']' 00:17:47.630 00:51:59 -- nvmf/common.sh@478 -- # killprocess 90123 00:17:47.630 00:51:59 -- common/autotest_common.sh@936 -- # '[' -z 90123 ']' 00:17:47.630 00:51:59 -- common/autotest_common.sh@940 -- # kill -0 90123 00:17:47.630 00:51:59 -- common/autotest_common.sh@941 -- # uname 00:17:47.630 00:51:59 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:17:47.630 00:51:59 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 90123 00:17:47.630 killing process with pid 90123 00:17:47.630 00:51:59 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:17:47.630 00:51:59 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:17:47.630 00:51:59 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 90123' 00:17:47.630 00:51:59 -- common/autotest_common.sh@955 -- # kill 90123 00:17:47.630 00:51:59 -- common/autotest_common.sh@960 -- # wait 90123 00:17:47.889 00:52:00 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:17:47.889 00:52:00 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:17:47.889 00:52:00 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:17:47.889 00:52:00 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:17:47.889 00:52:00 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:17:47.889 00:52:00 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:47.889 00:52:00 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:47.889 00:52:00 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:47.889 00:52:00 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:17:47.889 00:52:00 -- fips/fips.sh@18 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt 00:17:47.889 00:17:47.889 real 0m14.295s 00:17:47.889 user 0m18.083s 00:17:47.889 sys 0m6.528s 00:17:47.889 00:52:00 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:17:47.889 ************************************ 00:17:47.889 END TEST nvmf_fips 00:17:47.889 ************************************ 
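Both the tls and fips sections above finish the same way: cleanup calls process_shm --id 0, which finds the nvmf_trace.0 shared-memory file that the target (started with -i 0) left in /dev/shm and archives it next to the build output, so the trace can later be inspected with spdk_trace as the startup notices suggest. A stand-alone sketch of that collection step, reusing the exact find/tar invocations from the trace, would be:

    # Sketch of the process_shm step (autotest_common.sh@806-821): archive any
    # SPDK shared-memory trace files for offline analysis.
    out=/home/vagrant/spdk_repo/spdk/../output
    for f in $(find /dev/shm -name '*.0' -printf '%f\n'); do
        tar -C /dev/shm/ -cvzf "$out/${f}_shm.tar.gz" "$f"
    done
    # afterwards, inspect live with: spdk_trace -s nvmf -i 0  (as the startup notice suggests)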
00:17:47.889 00:52:00 -- common/autotest_common.sh@10 -- # set +x 00:17:47.889 00:52:00 -- nvmf/nvmf.sh@63 -- # '[' 1 -eq 1 ']' 00:17:47.889 00:52:00 -- nvmf/nvmf.sh@64 -- # run_test nvmf_fuzz /home/vagrant/spdk_repo/spdk/test/nvmf/target/fabrics_fuzz.sh --transport=tcp 00:17:47.889 00:52:00 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:17:47.889 00:52:00 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:17:47.889 00:52:00 -- common/autotest_common.sh@10 -- # set +x 00:17:47.889 ************************************ 00:17:47.889 START TEST nvmf_fuzz 00:17:47.889 ************************************ 00:17:47.889 00:52:00 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/fabrics_fuzz.sh --transport=tcp 00:17:47.889 * Looking for test storage... 00:17:47.889 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:17:47.889 00:52:00 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:17:47.889 00:52:00 -- common/autotest_common.sh@1690 -- # lcov --version 00:17:47.889 00:52:00 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:17:48.148 00:52:00 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:17:48.148 00:52:00 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:17:48.148 00:52:00 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:17:48.148 00:52:00 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:17:48.148 00:52:00 -- scripts/common.sh@335 -- # IFS=.-: 00:17:48.148 00:52:00 -- scripts/common.sh@335 -- # read -ra ver1 00:17:48.148 00:52:00 -- scripts/common.sh@336 -- # IFS=.-: 00:17:48.148 00:52:00 -- scripts/common.sh@336 -- # read -ra ver2 00:17:48.148 00:52:00 -- scripts/common.sh@337 -- # local 'op=<' 00:17:48.148 00:52:00 -- scripts/common.sh@339 -- # ver1_l=2 00:17:48.148 00:52:00 -- scripts/common.sh@340 -- # ver2_l=1 00:17:48.148 00:52:00 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:17:48.148 00:52:00 -- scripts/common.sh@343 -- # case "$op" in 00:17:48.148 00:52:00 -- scripts/common.sh@344 -- # : 1 00:17:48.148 00:52:00 -- scripts/common.sh@363 -- # (( v = 0 )) 00:17:48.148 00:52:00 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:17:48.148 00:52:00 -- scripts/common.sh@364 -- # decimal 1 00:17:48.148 00:52:00 -- scripts/common.sh@352 -- # local d=1 00:17:48.148 00:52:00 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:48.148 00:52:00 -- scripts/common.sh@354 -- # echo 1 00:17:48.148 00:52:00 -- scripts/common.sh@364 -- # ver1[v]=1 00:17:48.148 00:52:00 -- scripts/common.sh@365 -- # decimal 2 00:17:48.148 00:52:00 -- scripts/common.sh@352 -- # local d=2 00:17:48.148 00:52:00 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:48.148 00:52:00 -- scripts/common.sh@354 -- # echo 2 00:17:48.148 00:52:00 -- scripts/common.sh@365 -- # ver2[v]=2 00:17:48.148 00:52:00 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:17:48.148 00:52:00 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:17:48.148 00:52:00 -- scripts/common.sh@367 -- # return 0 00:17:48.148 00:52:00 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:48.148 00:52:00 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:17:48.149 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:48.149 --rc genhtml_branch_coverage=1 00:17:48.149 --rc genhtml_function_coverage=1 00:17:48.149 --rc genhtml_legend=1 00:17:48.149 --rc geninfo_all_blocks=1 00:17:48.149 --rc geninfo_unexecuted_blocks=1 00:17:48.149 00:17:48.149 ' 00:17:48.149 00:52:00 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:17:48.149 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:48.149 --rc genhtml_branch_coverage=1 00:17:48.149 --rc genhtml_function_coverage=1 00:17:48.149 --rc genhtml_legend=1 00:17:48.149 --rc geninfo_all_blocks=1 00:17:48.149 --rc geninfo_unexecuted_blocks=1 00:17:48.149 00:17:48.149 ' 00:17:48.149 00:52:00 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:17:48.149 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:48.149 --rc genhtml_branch_coverage=1 00:17:48.149 --rc genhtml_function_coverage=1 00:17:48.149 --rc genhtml_legend=1 00:17:48.149 --rc geninfo_all_blocks=1 00:17:48.149 --rc geninfo_unexecuted_blocks=1 00:17:48.149 00:17:48.149 ' 00:17:48.149 00:52:00 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:17:48.149 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:48.149 --rc genhtml_branch_coverage=1 00:17:48.149 --rc genhtml_function_coverage=1 00:17:48.149 --rc genhtml_legend=1 00:17:48.149 --rc geninfo_all_blocks=1 00:17:48.149 --rc geninfo_unexecuted_blocks=1 00:17:48.149 00:17:48.149 ' 00:17:48.149 00:52:00 -- target/fabrics_fuzz.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:17:48.149 00:52:00 -- nvmf/common.sh@7 -- # uname -s 00:17:48.149 00:52:00 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:48.149 00:52:00 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:48.149 00:52:00 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:48.149 00:52:00 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:48.149 00:52:00 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:48.149 00:52:00 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:48.149 00:52:00 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:48.149 00:52:00 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:48.149 00:52:00 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:48.149 00:52:00 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:48.149 00:52:00 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:15939434-fa82-47b6-ae7c-b62ba203cee8 
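The lt/cmp_versions walk traced just above is a component-wise numeric comparison of dotted versions ('1.15' < '2' here, which is what turns on the lcov branch/function coverage flags). A simplified, hedged reconstruction of those helpers (not the verbatim scripts/common.sh code):

  decimal() { local d=$1; [[ $d =~ ^[0-9]+$ ]] && echo "$d" || echo 0; }
  cmp_versions() {
      local -a ver1 ver2; local op=$2 v c1 c2
      IFS=.-: read -ra ver1 <<< "$1"               # split components on '.', '-' and ':'
      IFS=.-: read -ra ver2 <<< "$3"
      for ((v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++)); do
          c1=$(decimal "${ver1[v]:-0}"); c2=$(decimal "${ver2[v]:-0}")
          ((c1 > c2)) && { [[ $op == '>' || $op == '>=' ]]; return; }
          ((c1 < c2)) && { [[ $op == '<' || $op == '<=' ]]; return; }
      done
      [[ $op == '==' || $op == '>=' || $op == '<=' ]]
  }
  lt() { cmp_versions "$1" '<' "$2"; }             # "lt 1.15 2" succeeds, so the coverage flags get exported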
00:17:48.149 00:52:00 -- nvmf/common.sh@18 -- # NVME_HOSTID=15939434-fa82-47b6-ae7c-b62ba203cee8 00:17:48.149 00:52:00 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:48.149 00:52:00 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:48.149 00:52:00 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:17:48.149 00:52:00 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:17:48.149 00:52:00 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:48.149 00:52:00 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:48.149 00:52:00 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:48.149 00:52:00 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:48.149 00:52:00 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:48.149 00:52:00 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:48.149 00:52:00 -- paths/export.sh@5 -- # export PATH 00:17:48.149 00:52:00 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:48.149 00:52:00 -- nvmf/common.sh@46 -- # : 0 00:17:48.149 00:52:00 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:17:48.149 00:52:00 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:17:48.149 00:52:00 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:17:48.149 00:52:00 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:48.149 00:52:00 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:48.149 00:52:00 -- nvmf/common.sh@32 -- # 
'[' -n '' ']' 00:17:48.149 00:52:00 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:17:48.149 00:52:00 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:17:48.149 00:52:00 -- target/fabrics_fuzz.sh@11 -- # nvmftestinit 00:17:48.149 00:52:00 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:17:48.149 00:52:00 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:48.149 00:52:00 -- nvmf/common.sh@436 -- # prepare_net_devs 00:17:48.149 00:52:00 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:17:48.149 00:52:00 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:17:48.149 00:52:00 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:48.149 00:52:00 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:48.149 00:52:00 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:48.149 00:52:00 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:17:48.149 00:52:00 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:17:48.149 00:52:00 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:17:48.149 00:52:00 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:17:48.149 00:52:00 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:17:48.149 00:52:00 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:17:48.149 00:52:00 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:48.149 00:52:00 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:48.149 00:52:00 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:17:48.149 00:52:00 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:17:48.149 00:52:00 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:17:48.149 00:52:00 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:17:48.149 00:52:00 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:17:48.149 00:52:00 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:48.149 00:52:00 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:17:48.149 00:52:00 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:17:48.149 00:52:00 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:17:48.149 00:52:00 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:17:48.149 00:52:00 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:17:48.149 00:52:00 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:17:48.149 Cannot find device "nvmf_tgt_br" 00:17:48.149 00:52:00 -- nvmf/common.sh@154 -- # true 00:17:48.149 00:52:00 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:17:48.149 Cannot find device "nvmf_tgt_br2" 00:17:48.149 00:52:00 -- nvmf/common.sh@155 -- # true 00:17:48.149 00:52:00 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:17:48.149 00:52:00 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:17:48.149 Cannot find device "nvmf_tgt_br" 00:17:48.149 00:52:00 -- nvmf/common.sh@157 -- # true 00:17:48.149 00:52:00 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:17:48.149 Cannot find device "nvmf_tgt_br2" 00:17:48.149 00:52:00 -- nvmf/common.sh@158 -- # true 00:17:48.149 00:52:00 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:17:48.149 00:52:00 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:17:48.149 00:52:00 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:17:48.149 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:48.149 00:52:00 -- nvmf/common.sh@161 -- # true 00:17:48.149 00:52:00 -- 
nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:17:48.149 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:48.149 00:52:00 -- nvmf/common.sh@162 -- # true 00:17:48.149 00:52:00 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:17:48.149 00:52:00 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:17:48.149 00:52:00 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:17:48.407 00:52:00 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:17:48.407 00:52:00 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:17:48.407 00:52:00 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:17:48.407 00:52:00 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:17:48.407 00:52:00 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:17:48.407 00:52:00 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:17:48.407 00:52:00 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:17:48.407 00:52:00 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:17:48.407 00:52:00 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:17:48.407 00:52:00 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:17:48.407 00:52:00 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:17:48.407 00:52:00 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:17:48.407 00:52:00 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:17:48.407 00:52:00 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:17:48.407 00:52:00 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:17:48.407 00:52:00 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:17:48.407 00:52:00 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:17:48.407 00:52:00 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:17:48.407 00:52:00 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:17:48.407 00:52:00 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:17:48.407 00:52:00 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:17:48.407 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:48.407 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.070 ms 00:17:48.407 00:17:48.407 --- 10.0.0.2 ping statistics --- 00:17:48.407 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:48.407 rtt min/avg/max/mdev = 0.070/0.070/0.070/0.000 ms 00:17:48.407 00:52:00 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:17:48.407 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:17:48.407 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.042 ms 00:17:48.407 00:17:48.407 --- 10.0.0.3 ping statistics --- 00:17:48.407 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:48.407 rtt min/avg/max/mdev = 0.042/0.042/0.042/0.000 ms 00:17:48.407 00:52:00 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:17:48.407 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:17:48.407 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.025 ms 00:17:48.407 00:17:48.407 --- 10.0.0.1 ping statistics --- 00:17:48.407 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:48.407 rtt min/avg/max/mdev = 0.025/0.025/0.025/0.000 ms 00:17:48.407 00:52:00 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:48.407 00:52:00 -- nvmf/common.sh@421 -- # return 0 00:17:48.407 00:52:00 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:17:48.407 00:52:00 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:48.407 00:52:00 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:17:48.407 00:52:00 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:17:48.407 00:52:00 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:48.407 00:52:00 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:17:48.407 00:52:00 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:17:48.407 00:52:00 -- target/fabrics_fuzz.sh@14 -- # nvmfpid=90526 00:17:48.407 00:52:00 -- target/fabrics_fuzz.sh@16 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $nvmfpid; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:17:48.407 00:52:00 -- target/fabrics_fuzz.sh@18 -- # waitforlisten 90526 00:17:48.407 00:52:00 -- common/autotest_common.sh@829 -- # '[' -z 90526 ']' 00:17:48.407 00:52:00 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:48.407 00:52:00 -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:48.407 00:52:00 -- target/fabrics_fuzz.sh@13 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:17:48.407 00:52:00 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:48.407 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
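Before launching the target, nvmf_veth_init has just built the topology the trace records: a veth pair whose host end (10.0.0.1) acts as the initiator, target interfaces (10.0.0.2 and 10.0.0.3) moved into a private network namespace, and a bridge plus iptables rules tying them together so TCP/4420 traffic flows. A condensed sketch of that setup, pulled from the ip/iptables commands above (the second target interface is wired up the same way and is omitted here):

  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br    # initiator end stays on the host
  ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br     # target end moves into the namespace
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
  ip link set nvmf_init_if up; ip link set nvmf_init_br up; ip link set nvmf_tgt_br up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip netns exec nvmf_tgt_ns_spdk ip link set lo up
  ip link add nvmf_br type bridge && ip link set nvmf_br up
  ip link set nvmf_init_br master nvmf_br && ip link set nvmf_tgt_br master nvmf_br
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
  iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
  modprobe nvme-tcp
  # the target then runs inside the namespace, as the trace shows:
  ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 &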
00:17:48.407 00:52:00 -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:48.407 00:52:00 -- common/autotest_common.sh@10 -- # set +x 00:17:49.782 00:52:01 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:49.782 00:52:01 -- common/autotest_common.sh@862 -- # return 0 00:17:49.783 00:52:01 -- target/fabrics_fuzz.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:17:49.783 00:52:01 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:49.783 00:52:01 -- common/autotest_common.sh@10 -- # set +x 00:17:49.783 00:52:01 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:49.783 00:52:01 -- target/fabrics_fuzz.sh@21 -- # rpc_cmd bdev_malloc_create -b Malloc0 64 512 00:17:49.783 00:52:01 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:49.783 00:52:01 -- common/autotest_common.sh@10 -- # set +x 00:17:49.783 Malloc0 00:17:49.783 00:52:01 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:49.783 00:52:01 -- target/fabrics_fuzz.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:17:49.783 00:52:01 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:49.783 00:52:01 -- common/autotest_common.sh@10 -- # set +x 00:17:49.783 00:52:01 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:49.783 00:52:01 -- target/fabrics_fuzz.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:17:49.783 00:52:01 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:49.783 00:52:01 -- common/autotest_common.sh@10 -- # set +x 00:17:49.783 00:52:01 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:49.783 00:52:01 -- target/fabrics_fuzz.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:49.783 00:52:01 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:49.783 00:52:01 -- common/autotest_common.sh@10 -- # set +x 00:17:49.783 00:52:01 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:49.783 00:52:01 -- target/fabrics_fuzz.sh@27 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420' 00:17:49.783 00:52:01 -- target/fabrics_fuzz.sh@30 -- # /home/vagrant/spdk_repo/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -r /var/tmp/nvme_fuzz -t 30 -S 123456 -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420' -N -a 00:17:49.783 Shutting down the fuzz application 00:17:49.783 00:52:02 -- target/fabrics_fuzz.sh@32 -- # /home/vagrant/spdk_repo/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -r /var/tmp/nvme_fuzz -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420' -j /home/vagrant/spdk_repo/spdk/test/app/fuzz/nvme_fuzz/example.json -a 00:17:50.351 Shutting down the fuzz application 00:17:50.351 00:52:02 -- target/fabrics_fuzz.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:50.351 00:52:02 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:50.351 00:52:02 -- common/autotest_common.sh@10 -- # set +x 00:17:50.352 00:52:02 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:50.352 00:52:02 -- target/fabrics_fuzz.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:17:50.352 00:52:02 -- target/fabrics_fuzz.sh@38 -- # nvmftestfini 00:17:50.352 00:52:02 -- nvmf/common.sh@476 -- # nvmfcleanup 00:17:50.352 00:52:02 -- nvmf/common.sh@116 -- # sync 00:17:50.352 00:52:02 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:17:50.352 00:52:02 -- nvmf/common.sh@119 -- # set +e 00:17:50.352 00:52:02 -- 
nvmf/common.sh@120 -- # for i in {1..20} 00:17:50.352 00:52:02 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:17:50.352 rmmod nvme_tcp 00:17:50.352 rmmod nvme_fabrics 00:17:50.352 rmmod nvme_keyring 00:17:50.352 00:52:02 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:17:50.352 00:52:02 -- nvmf/common.sh@123 -- # set -e 00:17:50.352 00:52:02 -- nvmf/common.sh@124 -- # return 0 00:17:50.352 00:52:02 -- nvmf/common.sh@477 -- # '[' -n 90526 ']' 00:17:50.352 00:52:02 -- nvmf/common.sh@478 -- # killprocess 90526 00:17:50.352 00:52:02 -- common/autotest_common.sh@936 -- # '[' -z 90526 ']' 00:17:50.352 00:52:02 -- common/autotest_common.sh@940 -- # kill -0 90526 00:17:50.352 00:52:02 -- common/autotest_common.sh@941 -- # uname 00:17:50.352 00:52:02 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:17:50.352 00:52:02 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 90526 00:17:50.352 00:52:02 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:17:50.352 00:52:02 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:17:50.352 killing process with pid 90526 00:17:50.352 00:52:02 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 90526' 00:17:50.352 00:52:02 -- common/autotest_common.sh@955 -- # kill 90526 00:17:50.352 00:52:02 -- common/autotest_common.sh@960 -- # wait 90526 00:17:50.611 00:52:02 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:17:50.611 00:52:02 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:17:50.611 00:52:02 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:17:50.611 00:52:02 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:17:50.611 00:52:02 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:17:50.611 00:52:02 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:50.611 00:52:02 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:50.611 00:52:02 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:50.611 00:52:02 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:17:50.611 00:52:02 -- target/fabrics_fuzz.sh@39 -- # rm /home/vagrant/spdk_repo/spdk/../output/nvmf_fuzz_logs1.txt /home/vagrant/spdk_repo/spdk/../output/nvmf_fuzz_logs2.txt 00:17:50.611 00:17:50.611 real 0m2.699s 00:17:50.611 user 0m2.779s 00:17:50.611 sys 0m0.708s 00:17:50.611 00:52:02 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:17:50.611 ************************************ 00:17:50.611 END TEST nvmf_fuzz 00:17:50.611 00:52:02 -- common/autotest_common.sh@10 -- # set +x 00:17:50.611 ************************************ 00:17:50.611 00:52:03 -- nvmf/nvmf.sh@65 -- # run_test nvmf_multiconnection /home/vagrant/spdk_repo/spdk/test/nvmf/target/multiconnection.sh --transport=tcp 00:17:50.611 00:52:03 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:17:50.611 00:52:03 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:17:50.611 00:52:03 -- common/autotest_common.sh@10 -- # set +x 00:17:50.611 ************************************ 00:17:50.611 START TEST nvmf_multiconnection 00:17:50.611 ************************************ 00:17:50.611 00:52:03 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multiconnection.sh --transport=tcp 00:17:50.611 * Looking for test storage... 
00:17:50.611 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:17:50.611 00:52:03 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:17:50.611 00:52:03 -- common/autotest_common.sh@1690 -- # lcov --version 00:17:50.611 00:52:03 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:17:50.871 00:52:03 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:17:50.871 00:52:03 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:17:50.871 00:52:03 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:17:50.871 00:52:03 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:17:50.871 00:52:03 -- scripts/common.sh@335 -- # IFS=.-: 00:17:50.871 00:52:03 -- scripts/common.sh@335 -- # read -ra ver1 00:17:50.871 00:52:03 -- scripts/common.sh@336 -- # IFS=.-: 00:17:50.871 00:52:03 -- scripts/common.sh@336 -- # read -ra ver2 00:17:50.871 00:52:03 -- scripts/common.sh@337 -- # local 'op=<' 00:17:50.871 00:52:03 -- scripts/common.sh@339 -- # ver1_l=2 00:17:50.871 00:52:03 -- scripts/common.sh@340 -- # ver2_l=1 00:17:50.871 00:52:03 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:17:50.871 00:52:03 -- scripts/common.sh@343 -- # case "$op" in 00:17:50.871 00:52:03 -- scripts/common.sh@344 -- # : 1 00:17:50.871 00:52:03 -- scripts/common.sh@363 -- # (( v = 0 )) 00:17:50.871 00:52:03 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:17:50.871 00:52:03 -- scripts/common.sh@364 -- # decimal 1 00:17:50.871 00:52:03 -- scripts/common.sh@352 -- # local d=1 00:17:50.871 00:52:03 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:50.871 00:52:03 -- scripts/common.sh@354 -- # echo 1 00:17:50.871 00:52:03 -- scripts/common.sh@364 -- # ver1[v]=1 00:17:50.871 00:52:03 -- scripts/common.sh@365 -- # decimal 2 00:17:50.871 00:52:03 -- scripts/common.sh@352 -- # local d=2 00:17:50.871 00:52:03 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:50.871 00:52:03 -- scripts/common.sh@354 -- # echo 2 00:17:50.871 00:52:03 -- scripts/common.sh@365 -- # ver2[v]=2 00:17:50.871 00:52:03 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:17:50.871 00:52:03 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:17:50.871 00:52:03 -- scripts/common.sh@367 -- # return 0 00:17:50.871 00:52:03 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:50.871 00:52:03 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:17:50.871 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:50.871 --rc genhtml_branch_coverage=1 00:17:50.871 --rc genhtml_function_coverage=1 00:17:50.871 --rc genhtml_legend=1 00:17:50.871 --rc geninfo_all_blocks=1 00:17:50.871 --rc geninfo_unexecuted_blocks=1 00:17:50.871 00:17:50.871 ' 00:17:50.871 00:52:03 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:17:50.871 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:50.871 --rc genhtml_branch_coverage=1 00:17:50.871 --rc genhtml_function_coverage=1 00:17:50.871 --rc genhtml_legend=1 00:17:50.871 --rc geninfo_all_blocks=1 00:17:50.871 --rc geninfo_unexecuted_blocks=1 00:17:50.871 00:17:50.871 ' 00:17:50.871 00:52:03 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:17:50.871 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:50.871 --rc genhtml_branch_coverage=1 00:17:50.871 --rc genhtml_function_coverage=1 00:17:50.871 --rc genhtml_legend=1 00:17:50.871 --rc geninfo_all_blocks=1 00:17:50.871 --rc geninfo_unexecuted_blocks=1 00:17:50.871 00:17:50.871 ' 00:17:50.871 
00:52:03 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:17:50.871 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:50.871 --rc genhtml_branch_coverage=1 00:17:50.871 --rc genhtml_function_coverage=1 00:17:50.871 --rc genhtml_legend=1 00:17:50.871 --rc geninfo_all_blocks=1 00:17:50.871 --rc geninfo_unexecuted_blocks=1 00:17:50.871 00:17:50.871 ' 00:17:50.871 00:52:03 -- target/multiconnection.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:17:50.871 00:52:03 -- nvmf/common.sh@7 -- # uname -s 00:17:50.871 00:52:03 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:50.871 00:52:03 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:50.872 00:52:03 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:50.872 00:52:03 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:50.872 00:52:03 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:50.872 00:52:03 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:50.872 00:52:03 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:50.872 00:52:03 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:50.872 00:52:03 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:50.872 00:52:03 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:50.872 00:52:03 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:15939434-fa82-47b6-ae7c-b62ba203cee8 00:17:50.872 00:52:03 -- nvmf/common.sh@18 -- # NVME_HOSTID=15939434-fa82-47b6-ae7c-b62ba203cee8 00:17:50.872 00:52:03 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:50.872 00:52:03 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:50.872 00:52:03 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:17:50.872 00:52:03 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:17:50.872 00:52:03 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:50.872 00:52:03 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:50.872 00:52:03 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:50.872 00:52:03 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:50.872 00:52:03 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:50.872 00:52:03 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:50.872 00:52:03 -- paths/export.sh@5 -- # export PATH 00:17:50.872 00:52:03 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:50.872 00:52:03 -- nvmf/common.sh@46 -- # : 0 00:17:50.872 00:52:03 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:17:50.872 00:52:03 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:17:50.872 00:52:03 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:17:50.872 00:52:03 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:50.872 00:52:03 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:50.872 00:52:03 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:17:50.872 00:52:03 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:17:50.872 00:52:03 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:17:50.872 00:52:03 -- target/multiconnection.sh@11 -- # MALLOC_BDEV_SIZE=64 00:17:50.872 00:52:03 -- target/multiconnection.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:17:50.872 00:52:03 -- target/multiconnection.sh@14 -- # NVMF_SUBSYS=11 00:17:50.872 00:52:03 -- target/multiconnection.sh@16 -- # nvmftestinit 00:17:50.872 00:52:03 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:17:50.872 00:52:03 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:50.872 00:52:03 -- nvmf/common.sh@436 -- # prepare_net_devs 00:17:50.872 00:52:03 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:17:50.872 00:52:03 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:17:50.872 00:52:03 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:50.872 00:52:03 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:50.872 00:52:03 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:50.872 00:52:03 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:17:50.872 00:52:03 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:17:50.872 00:52:03 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:17:50.872 00:52:03 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:17:50.872 00:52:03 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:17:50.872 00:52:03 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:17:50.872 00:52:03 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:50.872 00:52:03 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:50.872 00:52:03 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:17:50.872 00:52:03 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:17:50.872 00:52:03 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:17:50.872 00:52:03 -- 
nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:17:50.872 00:52:03 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:17:50.872 00:52:03 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:50.872 00:52:03 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:17:50.872 00:52:03 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:17:50.872 00:52:03 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:17:50.872 00:52:03 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:17:50.872 00:52:03 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:17:50.872 00:52:03 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:17:50.872 Cannot find device "nvmf_tgt_br" 00:17:50.872 00:52:03 -- nvmf/common.sh@154 -- # true 00:17:50.872 00:52:03 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:17:50.872 Cannot find device "nvmf_tgt_br2" 00:17:50.872 00:52:03 -- nvmf/common.sh@155 -- # true 00:17:50.872 00:52:03 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:17:50.872 00:52:03 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:17:50.872 Cannot find device "nvmf_tgt_br" 00:17:50.872 00:52:03 -- nvmf/common.sh@157 -- # true 00:17:50.872 00:52:03 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:17:50.872 Cannot find device "nvmf_tgt_br2" 00:17:50.872 00:52:03 -- nvmf/common.sh@158 -- # true 00:17:50.872 00:52:03 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:17:50.872 00:52:03 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:17:51.131 00:52:03 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:17:51.131 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:51.131 00:52:03 -- nvmf/common.sh@161 -- # true 00:17:51.131 00:52:03 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:17:51.131 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:51.131 00:52:03 -- nvmf/common.sh@162 -- # true 00:17:51.131 00:52:03 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:17:51.131 00:52:03 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:17:51.131 00:52:03 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:17:51.131 00:52:03 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:17:51.131 00:52:03 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:17:51.131 00:52:03 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:17:51.131 00:52:03 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:17:51.131 00:52:03 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:17:51.131 00:52:03 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:17:51.131 00:52:03 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:17:51.131 00:52:03 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:17:51.131 00:52:03 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:17:51.131 00:52:03 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:17:51.131 00:52:03 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:17:51.131 00:52:03 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip 
link set nvmf_tgt_if2 up 00:17:51.131 00:52:03 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:17:51.131 00:52:03 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:17:51.131 00:52:03 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:17:51.131 00:52:03 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:17:51.131 00:52:03 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:17:51.131 00:52:03 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:17:51.131 00:52:03 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:17:51.131 00:52:03 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:17:51.131 00:52:03 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:17:51.131 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:51.131 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.085 ms 00:17:51.131 00:17:51.131 --- 10.0.0.2 ping statistics --- 00:17:51.131 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:51.131 rtt min/avg/max/mdev = 0.085/0.085/0.085/0.000 ms 00:17:51.131 00:52:03 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:17:51.131 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:17:51.131 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.038 ms 00:17:51.131 00:17:51.131 --- 10.0.0.3 ping statistics --- 00:17:51.131 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:51.131 rtt min/avg/max/mdev = 0.038/0.038/0.038/0.000 ms 00:17:51.131 00:52:03 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:17:51.390 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:17:51.390 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.027 ms 00:17:51.390 00:17:51.390 --- 10.0.0.1 ping statistics --- 00:17:51.390 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:51.390 rtt min/avg/max/mdev = 0.027/0.027/0.027/0.000 ms 00:17:51.390 00:52:03 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:51.390 00:52:03 -- nvmf/common.sh@421 -- # return 0 00:17:51.390 00:52:03 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:17:51.390 00:52:03 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:51.390 00:52:03 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:17:51.390 00:52:03 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:17:51.390 00:52:03 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:51.390 00:52:03 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:17:51.390 00:52:03 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:17:51.390 00:52:03 -- target/multiconnection.sh@17 -- # nvmfappstart -m 0xF 00:17:51.390 00:52:03 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:17:51.390 00:52:03 -- common/autotest_common.sh@722 -- # xtrace_disable 00:17:51.390 00:52:03 -- common/autotest_common.sh@10 -- # set +x 00:17:51.390 00:52:03 -- nvmf/common.sh@469 -- # nvmfpid=90746 00:17:51.390 00:52:03 -- nvmf/common.sh@470 -- # waitforlisten 90746 00:17:51.390 00:52:03 -- common/autotest_common.sh@829 -- # '[' -z 90746 ']' 00:17:51.390 00:52:03 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:17:51.391 00:52:03 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:51.391 00:52:03 -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:51.391 00:52:03 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start 
up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:51.391 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:51.391 00:52:03 -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:51.391 00:52:03 -- common/autotest_common.sh@10 -- # set +x 00:17:51.391 [2024-12-03 00:52:03.737037] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:17:51.391 [2024-12-03 00:52:03.737127] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:51.391 [2024-12-03 00:52:03.877713] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:17:51.650 [2024-12-03 00:52:03.940278] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:17:51.650 [2024-12-03 00:52:03.940438] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:51.650 [2024-12-03 00:52:03.940452] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:51.650 [2024-12-03 00:52:03.940460] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:51.650 [2024-12-03 00:52:03.940952] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:17:51.650 [2024-12-03 00:52:03.941057] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:17:51.650 [2024-12-03 00:52:03.941674] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:17:51.650 [2024-12-03 00:52:03.941692] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:52.586 00:52:04 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:52.586 00:52:04 -- common/autotest_common.sh@862 -- # return 0 00:17:52.586 00:52:04 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:17:52.586 00:52:04 -- common/autotest_common.sh@728 -- # xtrace_disable 00:17:52.586 00:52:04 -- common/autotest_common.sh@10 -- # set +x 00:17:52.586 00:52:04 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:52.586 00:52:04 -- target/multiconnection.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:17:52.586 00:52:04 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:52.586 00:52:04 -- common/autotest_common.sh@10 -- # set +x 00:17:52.586 [2024-12-03 00:52:04.814751] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:52.586 00:52:04 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:52.586 00:52:04 -- target/multiconnection.sh@21 -- # seq 1 11 00:17:52.586 00:52:04 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:17:52.586 00:52:04 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:17:52.586 00:52:04 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:52.586 00:52:04 -- common/autotest_common.sh@10 -- # set +x 00:17:52.586 Malloc1 00:17:52.586 00:52:04 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:52.586 00:52:04 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK1 00:17:52.586 00:52:04 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:52.586 00:52:04 -- common/autotest_common.sh@10 -- # set +x 00:17:52.586 00:52:04 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:52.586 00:52:04 -- 
target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:17:52.586 00:52:04 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:52.586 00:52:04 -- common/autotest_common.sh@10 -- # set +x 00:17:52.587 00:52:04 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:52.587 00:52:04 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:52.587 00:52:04 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:52.587 00:52:04 -- common/autotest_common.sh@10 -- # set +x 00:17:52.587 [2024-12-03 00:52:04.886620] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:52.587 00:52:04 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:52.587 00:52:04 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:17:52.587 00:52:04 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc2 00:17:52.587 00:52:04 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:52.587 00:52:04 -- common/autotest_common.sh@10 -- # set +x 00:17:52.587 Malloc2 00:17:52.587 00:52:04 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:52.587 00:52:04 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:17:52.587 00:52:04 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:52.587 00:52:04 -- common/autotest_common.sh@10 -- # set +x 00:17:52.587 00:52:04 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:52.587 00:52:04 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc2 00:17:52.587 00:52:04 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:52.587 00:52:04 -- common/autotest_common.sh@10 -- # set +x 00:17:52.587 00:52:04 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:52.587 00:52:04 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:17:52.587 00:52:04 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:52.587 00:52:04 -- common/autotest_common.sh@10 -- # set +x 00:17:52.587 00:52:04 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:52.587 00:52:04 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:17:52.587 00:52:04 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc3 00:17:52.587 00:52:04 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:52.587 00:52:04 -- common/autotest_common.sh@10 -- # set +x 00:17:52.587 Malloc3 00:17:52.587 00:52:04 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:52.587 00:52:04 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK3 00:17:52.587 00:52:04 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:52.587 00:52:04 -- common/autotest_common.sh@10 -- # set +x 00:17:52.587 00:52:04 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:52.587 00:52:04 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Malloc3 00:17:52.587 00:52:04 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:52.587 00:52:04 -- common/autotest_common.sh@10 -- # set +x 00:17:52.587 00:52:04 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:52.587 00:52:04 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:17:52.587 
00:52:04 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:52.587 00:52:04 -- common/autotest_common.sh@10 -- # set +x 00:17:52.587 00:52:04 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:52.587 00:52:04 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:17:52.587 00:52:04 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc4 00:17:52.587 00:52:04 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:52.587 00:52:04 -- common/autotest_common.sh@10 -- # set +x 00:17:52.587 Malloc4 00:17:52.587 00:52:05 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:52.587 00:52:05 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK4 00:17:52.587 00:52:05 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:52.587 00:52:05 -- common/autotest_common.sh@10 -- # set +x 00:17:52.587 00:52:05 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:52.587 00:52:05 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Malloc4 00:17:52.587 00:52:05 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:52.587 00:52:05 -- common/autotest_common.sh@10 -- # set +x 00:17:52.587 00:52:05 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:52.587 00:52:05 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t tcp -a 10.0.0.2 -s 4420 00:17:52.587 00:52:05 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:52.587 00:52:05 -- common/autotest_common.sh@10 -- # set +x 00:17:52.587 00:52:05 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:52.587 00:52:05 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:17:52.587 00:52:05 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc5 00:17:52.587 00:52:05 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:52.587 00:52:05 -- common/autotest_common.sh@10 -- # set +x 00:17:52.587 Malloc5 00:17:52.587 00:52:05 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:52.587 00:52:05 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode5 -a -s SPDK5 00:17:52.587 00:52:05 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:52.587 00:52:05 -- common/autotest_common.sh@10 -- # set +x 00:17:52.587 00:52:05 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:52.587 00:52:05 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode5 Malloc5 00:17:52.587 00:52:05 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:52.587 00:52:05 -- common/autotest_common.sh@10 -- # set +x 00:17:52.587 00:52:05 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:52.587 00:52:05 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode5 -t tcp -a 10.0.0.2 -s 4420 00:17:52.587 00:52:05 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:52.587 00:52:05 -- common/autotest_common.sh@10 -- # set +x 00:17:52.587 00:52:05 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:52.587 00:52:05 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:17:52.587 00:52:05 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc6 00:17:52.587 00:52:05 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:52.587 00:52:05 -- common/autotest_common.sh@10 -- # set +x 00:17:52.846 Malloc6 00:17:52.846 00:52:05 -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:52.846 00:52:05 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode6 -a -s SPDK6 00:17:52.846 00:52:05 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:52.846 00:52:05 -- common/autotest_common.sh@10 -- # set +x 00:17:52.846 00:52:05 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:52.846 00:52:05 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode6 Malloc6 00:17:52.846 00:52:05 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:52.846 00:52:05 -- common/autotest_common.sh@10 -- # set +x 00:17:52.846 00:52:05 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:52.846 00:52:05 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode6 -t tcp -a 10.0.0.2 -s 4420 00:17:52.846 00:52:05 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:52.846 00:52:05 -- common/autotest_common.sh@10 -- # set +x 00:17:52.846 00:52:05 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:52.846 00:52:05 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:17:52.846 00:52:05 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc7 00:17:52.846 00:52:05 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:52.846 00:52:05 -- common/autotest_common.sh@10 -- # set +x 00:17:52.846 Malloc7 00:17:52.846 00:52:05 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:52.846 00:52:05 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode7 -a -s SPDK7 00:17:52.846 00:52:05 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:52.846 00:52:05 -- common/autotest_common.sh@10 -- # set +x 00:17:52.846 00:52:05 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:52.846 00:52:05 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode7 Malloc7 00:17:52.846 00:52:05 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:52.846 00:52:05 -- common/autotest_common.sh@10 -- # set +x 00:17:52.846 00:52:05 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:52.846 00:52:05 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode7 -t tcp -a 10.0.0.2 -s 4420 00:17:52.846 00:52:05 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:52.846 00:52:05 -- common/autotest_common.sh@10 -- # set +x 00:17:52.846 00:52:05 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:52.846 00:52:05 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:17:52.846 00:52:05 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc8 00:17:52.846 00:52:05 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:52.846 00:52:05 -- common/autotest_common.sh@10 -- # set +x 00:17:52.846 Malloc8 00:17:52.846 00:52:05 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:52.846 00:52:05 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode8 -a -s SPDK8 00:17:52.846 00:52:05 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:52.846 00:52:05 -- common/autotest_common.sh@10 -- # set +x 00:17:52.846 00:52:05 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:52.846 00:52:05 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode8 Malloc8 00:17:52.846 00:52:05 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:52.846 00:52:05 
-- common/autotest_common.sh@10 -- # set +x 00:17:52.846 00:52:05 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:52.846 00:52:05 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode8 -t tcp -a 10.0.0.2 -s 4420 00:17:52.846 00:52:05 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:52.846 00:52:05 -- common/autotest_common.sh@10 -- # set +x 00:17:52.846 00:52:05 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:52.846 00:52:05 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:17:52.846 00:52:05 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc9 00:17:52.846 00:52:05 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:52.846 00:52:05 -- common/autotest_common.sh@10 -- # set +x 00:17:52.846 Malloc9 00:17:52.846 00:52:05 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:52.846 00:52:05 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode9 -a -s SPDK9 00:17:52.846 00:52:05 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:52.846 00:52:05 -- common/autotest_common.sh@10 -- # set +x 00:17:52.846 00:52:05 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:52.846 00:52:05 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode9 Malloc9 00:17:52.846 00:52:05 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:52.846 00:52:05 -- common/autotest_common.sh@10 -- # set +x 00:17:52.846 00:52:05 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:52.846 00:52:05 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode9 -t tcp -a 10.0.0.2 -s 4420 00:17:52.846 00:52:05 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:52.846 00:52:05 -- common/autotest_common.sh@10 -- # set +x 00:17:52.846 00:52:05 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:52.846 00:52:05 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:17:52.846 00:52:05 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc10 00:17:52.846 00:52:05 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:52.846 00:52:05 -- common/autotest_common.sh@10 -- # set +x 00:17:52.846 Malloc10 00:17:52.846 00:52:05 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:52.846 00:52:05 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode10 -a -s SPDK10 00:17:52.846 00:52:05 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:52.846 00:52:05 -- common/autotest_common.sh@10 -- # set +x 00:17:52.846 00:52:05 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:52.846 00:52:05 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode10 Malloc10 00:17:52.846 00:52:05 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:52.846 00:52:05 -- common/autotest_common.sh@10 -- # set +x 00:17:52.846 00:52:05 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:52.846 00:52:05 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode10 -t tcp -a 10.0.0.2 -s 4420 00:17:52.846 00:52:05 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:52.846 00:52:05 -- common/autotest_common.sh@10 -- # set +x 00:17:52.846 00:52:05 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:52.846 00:52:05 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:17:52.846 00:52:05 -- 
target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc11 00:17:52.846 00:52:05 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:52.846 00:52:05 -- common/autotest_common.sh@10 -- # set +x 00:17:52.846 Malloc11 00:17:52.846 00:52:05 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:52.846 00:52:05 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode11 -a -s SPDK11 00:17:52.846 00:52:05 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:52.846 00:52:05 -- common/autotest_common.sh@10 -- # set +x 00:17:53.104 00:52:05 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:53.104 00:52:05 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode11 Malloc11 00:17:53.104 00:52:05 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:53.104 00:52:05 -- common/autotest_common.sh@10 -- # set +x 00:17:53.104 00:52:05 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:53.104 00:52:05 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode11 -t tcp -a 10.0.0.2 -s 4420 00:17:53.104 00:52:05 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:53.104 00:52:05 -- common/autotest_common.sh@10 -- # set +x 00:17:53.104 00:52:05 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:53.104 00:52:05 -- target/multiconnection.sh@28 -- # seq 1 11 00:17:53.104 00:52:05 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:17:53.104 00:52:05 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:15939434-fa82-47b6-ae7c-b62ba203cee8 --hostid=15939434-fa82-47b6-ae7c-b62ba203cee8 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:17:53.104 00:52:05 -- target/multiconnection.sh@30 -- # waitforserial SPDK1 00:17:53.104 00:52:05 -- common/autotest_common.sh@1187 -- # local i=0 00:17:53.104 00:52:05 -- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 00:17:53.104 00:52:05 -- common/autotest_common.sh@1189 -- # [[ -n '' ]] 00:17:53.104 00:52:05 -- common/autotest_common.sh@1194 -- # sleep 2 00:17:55.713 00:52:07 -- common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 00:17:55.713 00:52:07 -- common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL 00:17:55.713 00:52:07 -- common/autotest_common.sh@1196 -- # grep -c SPDK1 00:17:55.713 00:52:07 -- common/autotest_common.sh@1196 -- # nvme_devices=1 00:17:55.713 00:52:07 -- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:17:55.713 00:52:07 -- common/autotest_common.sh@1197 -- # return 0 00:17:55.713 00:52:07 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:17:55.713 00:52:07 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:15939434-fa82-47b6-ae7c-b62ba203cee8 --hostid=15939434-fa82-47b6-ae7c-b62ba203cee8 -t tcp -n nqn.2016-06.io.spdk:cnode2 -a 10.0.0.2 -s 4420 00:17:55.713 00:52:07 -- target/multiconnection.sh@30 -- # waitforserial SPDK2 00:17:55.713 00:52:07 -- common/autotest_common.sh@1187 -- # local i=0 00:17:55.713 00:52:07 -- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 00:17:55.713 00:52:07 -- common/autotest_common.sh@1189 -- # [[ -n '' ]] 00:17:55.713 00:52:07 -- common/autotest_common.sh@1194 -- # sleep 2 00:17:57.663 00:52:09 -- common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 00:17:57.663 00:52:09 -- common/autotest_common.sh@1196 -- # lsblk -l -o 
NAME,SERIAL 00:17:57.663 00:52:09 -- common/autotest_common.sh@1196 -- # grep -c SPDK2 00:17:57.663 00:52:09 -- common/autotest_common.sh@1196 -- # nvme_devices=1 00:17:57.663 00:52:09 -- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:17:57.663 00:52:09 -- common/autotest_common.sh@1197 -- # return 0 00:17:57.663 00:52:09 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:17:57.664 00:52:09 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:15939434-fa82-47b6-ae7c-b62ba203cee8 --hostid=15939434-fa82-47b6-ae7c-b62ba203cee8 -t tcp -n nqn.2016-06.io.spdk:cnode3 -a 10.0.0.2 -s 4420 00:17:57.664 00:52:09 -- target/multiconnection.sh@30 -- # waitforserial SPDK3 00:17:57.664 00:52:09 -- common/autotest_common.sh@1187 -- # local i=0 00:17:57.664 00:52:09 -- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 00:17:57.664 00:52:09 -- common/autotest_common.sh@1189 -- # [[ -n '' ]] 00:17:57.664 00:52:09 -- common/autotest_common.sh@1194 -- # sleep 2 00:17:59.565 00:52:11 -- common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 00:17:59.565 00:52:11 -- common/autotest_common.sh@1196 -- # grep -c SPDK3 00:17:59.565 00:52:11 -- common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL 00:17:59.565 00:52:11 -- common/autotest_common.sh@1196 -- # nvme_devices=1 00:17:59.565 00:52:11 -- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:17:59.565 00:52:11 -- common/autotest_common.sh@1197 -- # return 0 00:17:59.565 00:52:11 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:17:59.565 00:52:11 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:15939434-fa82-47b6-ae7c-b62ba203cee8 --hostid=15939434-fa82-47b6-ae7c-b62ba203cee8 -t tcp -n nqn.2016-06.io.spdk:cnode4 -a 10.0.0.2 -s 4420 00:17:59.824 00:52:12 -- target/multiconnection.sh@30 -- # waitforserial SPDK4 00:17:59.824 00:52:12 -- common/autotest_common.sh@1187 -- # local i=0 00:17:59.824 00:52:12 -- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 00:17:59.824 00:52:12 -- common/autotest_common.sh@1189 -- # [[ -n '' ]] 00:17:59.824 00:52:12 -- common/autotest_common.sh@1194 -- # sleep 2 00:18:01.726 00:52:14 -- common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 00:18:01.726 00:52:14 -- common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL 00:18:01.726 00:52:14 -- common/autotest_common.sh@1196 -- # grep -c SPDK4 00:18:01.726 00:52:14 -- common/autotest_common.sh@1196 -- # nvme_devices=1 00:18:01.726 00:52:14 -- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:18:01.726 00:52:14 -- common/autotest_common.sh@1197 -- # return 0 00:18:01.726 00:52:14 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:18:01.726 00:52:14 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:15939434-fa82-47b6-ae7c-b62ba203cee8 --hostid=15939434-fa82-47b6-ae7c-b62ba203cee8 -t tcp -n nqn.2016-06.io.spdk:cnode5 -a 10.0.0.2 -s 4420 00:18:01.985 00:52:14 -- target/multiconnection.sh@30 -- # waitforserial SPDK5 00:18:01.985 00:52:14 -- common/autotest_common.sh@1187 -- # local i=0 00:18:01.985 00:52:14 -- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 00:18:01.985 00:52:14 -- common/autotest_common.sh@1189 -- # [[ -n '' ]] 00:18:01.985 00:52:14 -- common/autotest_common.sh@1194 -- # sleep 2 00:18:03.886 00:52:16 -- 
common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 00:18:03.886 00:52:16 -- common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL 00:18:03.886 00:52:16 -- common/autotest_common.sh@1196 -- # grep -c SPDK5 00:18:03.886 00:52:16 -- common/autotest_common.sh@1196 -- # nvme_devices=1 00:18:03.886 00:52:16 -- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:18:03.886 00:52:16 -- common/autotest_common.sh@1197 -- # return 0 00:18:03.886 00:52:16 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:18:03.886 00:52:16 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:15939434-fa82-47b6-ae7c-b62ba203cee8 --hostid=15939434-fa82-47b6-ae7c-b62ba203cee8 -t tcp -n nqn.2016-06.io.spdk:cnode6 -a 10.0.0.2 -s 4420 00:18:04.144 00:52:16 -- target/multiconnection.sh@30 -- # waitforserial SPDK6 00:18:04.144 00:52:16 -- common/autotest_common.sh@1187 -- # local i=0 00:18:04.144 00:52:16 -- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 00:18:04.144 00:52:16 -- common/autotest_common.sh@1189 -- # [[ -n '' ]] 00:18:04.144 00:52:16 -- common/autotest_common.sh@1194 -- # sleep 2 00:18:06.076 00:52:18 -- common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 00:18:06.076 00:52:18 -- common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL 00:18:06.076 00:52:18 -- common/autotest_common.sh@1196 -- # grep -c SPDK6 00:18:06.076 00:52:18 -- common/autotest_common.sh@1196 -- # nvme_devices=1 00:18:06.076 00:52:18 -- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:18:06.076 00:52:18 -- common/autotest_common.sh@1197 -- # return 0 00:18:06.076 00:52:18 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:18:06.076 00:52:18 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:15939434-fa82-47b6-ae7c-b62ba203cee8 --hostid=15939434-fa82-47b6-ae7c-b62ba203cee8 -t tcp -n nqn.2016-06.io.spdk:cnode7 -a 10.0.0.2 -s 4420 00:18:06.335 00:52:18 -- target/multiconnection.sh@30 -- # waitforserial SPDK7 00:18:06.335 00:52:18 -- common/autotest_common.sh@1187 -- # local i=0 00:18:06.335 00:52:18 -- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 00:18:06.335 00:52:18 -- common/autotest_common.sh@1189 -- # [[ -n '' ]] 00:18:06.335 00:52:18 -- common/autotest_common.sh@1194 -- # sleep 2 00:18:08.867 00:52:20 -- common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 00:18:08.867 00:52:20 -- common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL 00:18:08.867 00:52:20 -- common/autotest_common.sh@1196 -- # grep -c SPDK7 00:18:08.867 00:52:20 -- common/autotest_common.sh@1196 -- # nvme_devices=1 00:18:08.867 00:52:20 -- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:18:08.867 00:52:20 -- common/autotest_common.sh@1197 -- # return 0 00:18:08.867 00:52:20 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:18:08.867 00:52:20 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:15939434-fa82-47b6-ae7c-b62ba203cee8 --hostid=15939434-fa82-47b6-ae7c-b62ba203cee8 -t tcp -n nqn.2016-06.io.spdk:cnode8 -a 10.0.0.2 -s 4420 00:18:08.867 00:52:20 -- target/multiconnection.sh@30 -- # waitforserial SPDK8 00:18:08.867 00:52:20 -- common/autotest_common.sh@1187 -- # local i=0 00:18:08.867 00:52:20 -- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 00:18:08.867 00:52:20 -- 
common/autotest_common.sh@1189 -- # [[ -n '' ]] 00:18:08.867 00:52:20 -- common/autotest_common.sh@1194 -- # sleep 2 00:18:10.771 00:52:22 -- common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 00:18:10.771 00:52:22 -- common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL 00:18:10.771 00:52:22 -- common/autotest_common.sh@1196 -- # grep -c SPDK8 00:18:10.771 00:52:22 -- common/autotest_common.sh@1196 -- # nvme_devices=1 00:18:10.771 00:52:22 -- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:18:10.771 00:52:22 -- common/autotest_common.sh@1197 -- # return 0 00:18:10.771 00:52:22 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:18:10.772 00:52:22 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:15939434-fa82-47b6-ae7c-b62ba203cee8 --hostid=15939434-fa82-47b6-ae7c-b62ba203cee8 -t tcp -n nqn.2016-06.io.spdk:cnode9 -a 10.0.0.2 -s 4420 00:18:10.772 00:52:23 -- target/multiconnection.sh@30 -- # waitforserial SPDK9 00:18:10.772 00:52:23 -- common/autotest_common.sh@1187 -- # local i=0 00:18:10.772 00:52:23 -- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 00:18:10.772 00:52:23 -- common/autotest_common.sh@1189 -- # [[ -n '' ]] 00:18:10.772 00:52:23 -- common/autotest_common.sh@1194 -- # sleep 2 00:18:12.673 00:52:25 -- common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 00:18:12.673 00:52:25 -- common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL 00:18:12.673 00:52:25 -- common/autotest_common.sh@1196 -- # grep -c SPDK9 00:18:12.932 00:52:25 -- common/autotest_common.sh@1196 -- # nvme_devices=1 00:18:12.932 00:52:25 -- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:18:12.932 00:52:25 -- common/autotest_common.sh@1197 -- # return 0 00:18:12.932 00:52:25 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:18:12.932 00:52:25 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:15939434-fa82-47b6-ae7c-b62ba203cee8 --hostid=15939434-fa82-47b6-ae7c-b62ba203cee8 -t tcp -n nqn.2016-06.io.spdk:cnode10 -a 10.0.0.2 -s 4420 00:18:12.932 00:52:25 -- target/multiconnection.sh@30 -- # waitforserial SPDK10 00:18:12.932 00:52:25 -- common/autotest_common.sh@1187 -- # local i=0 00:18:12.932 00:52:25 -- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 00:18:12.932 00:52:25 -- common/autotest_common.sh@1189 -- # [[ -n '' ]] 00:18:12.932 00:52:25 -- common/autotest_common.sh@1194 -- # sleep 2 00:18:15.462 00:52:27 -- common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 00:18:15.462 00:52:27 -- common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL 00:18:15.462 00:52:27 -- common/autotest_common.sh@1196 -- # grep -c SPDK10 00:18:15.462 00:52:27 -- common/autotest_common.sh@1196 -- # nvme_devices=1 00:18:15.462 00:52:27 -- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:18:15.462 00:52:27 -- common/autotest_common.sh@1197 -- # return 0 00:18:15.462 00:52:27 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:18:15.462 00:52:27 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:15939434-fa82-47b6-ae7c-b62ba203cee8 --hostid=15939434-fa82-47b6-ae7c-b62ba203cee8 -t tcp -n nqn.2016-06.io.spdk:cnode11 -a 10.0.0.2 -s 4420 00:18:15.462 00:52:27 -- target/multiconnection.sh@30 -- # waitforserial SPDK11 00:18:15.462 00:52:27 -- common/autotest_common.sh@1187 -- # local i=0 
00:18:15.462 00:52:27 -- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 00:18:15.462 00:52:27 -- common/autotest_common.sh@1189 -- # [[ -n '' ]] 00:18:15.462 00:52:27 -- common/autotest_common.sh@1194 -- # sleep 2 00:18:17.365 00:52:29 -- common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 00:18:17.365 00:52:29 -- common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL 00:18:17.365 00:52:29 -- common/autotest_common.sh@1196 -- # grep -c SPDK11 00:18:17.365 00:52:29 -- common/autotest_common.sh@1196 -- # nvme_devices=1 00:18:17.365 00:52:29 -- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:18:17.365 00:52:29 -- common/autotest_common.sh@1197 -- # return 0 00:18:17.365 00:52:29 -- target/multiconnection.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 262144 -d 64 -t read -r 10 00:18:17.365 [global] 00:18:17.365 thread=1 00:18:17.365 invalidate=1 00:18:17.365 rw=read 00:18:17.365 time_based=1 00:18:17.365 runtime=10 00:18:17.365 ioengine=libaio 00:18:17.365 direct=1 00:18:17.365 bs=262144 00:18:17.365 iodepth=64 00:18:17.365 norandommap=1 00:18:17.365 numjobs=1 00:18:17.365 00:18:17.365 [job0] 00:18:17.365 filename=/dev/nvme0n1 00:18:17.365 [job1] 00:18:17.365 filename=/dev/nvme10n1 00:18:17.365 [job2] 00:18:17.365 filename=/dev/nvme1n1 00:18:17.365 [job3] 00:18:17.365 filename=/dev/nvme2n1 00:18:17.365 [job4] 00:18:17.365 filename=/dev/nvme3n1 00:18:17.365 [job5] 00:18:17.365 filename=/dev/nvme4n1 00:18:17.365 [job6] 00:18:17.365 filename=/dev/nvme5n1 00:18:17.365 [job7] 00:18:17.365 filename=/dev/nvme6n1 00:18:17.365 [job8] 00:18:17.365 filename=/dev/nvme7n1 00:18:17.365 [job9] 00:18:17.365 filename=/dev/nvme8n1 00:18:17.365 [job10] 00:18:17.365 filename=/dev/nvme9n1 00:18:17.365 Could not set queue depth (nvme0n1) 00:18:17.365 Could not set queue depth (nvme10n1) 00:18:17.365 Could not set queue depth (nvme1n1) 00:18:17.365 Could not set queue depth (nvme2n1) 00:18:17.365 Could not set queue depth (nvme3n1) 00:18:17.365 Could not set queue depth (nvme4n1) 00:18:17.365 Could not set queue depth (nvme5n1) 00:18:17.365 Could not set queue depth (nvme6n1) 00:18:17.365 Could not set queue depth (nvme7n1) 00:18:17.365 Could not set queue depth (nvme8n1) 00:18:17.365 Could not set queue depth (nvme9n1) 00:18:17.624 job0: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:18:17.624 job1: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:18:17.624 job2: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:18:17.624 job3: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:18:17.624 job4: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:18:17.624 job5: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:18:17.624 job6: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:18:17.624 job7: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:18:17.624 job8: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:18:17.624 job9: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, 
iodepth=64 00:18:17.624 job10: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:18:17.624 fio-3.35 00:18:17.624 Starting 11 threads 00:18:29.831 00:18:29.831 job0: (groupid=0, jobs=1): err= 0: pid=91219: Tue Dec 3 00:52:40 2024 00:18:29.831 read: IOPS=366, BW=91.5MiB/s (95.9MB/s)(923MiB/10090msec) 00:18:29.831 slat (usec): min=21, max=112496, avg=2669.71, stdev=11364.55 00:18:29.831 clat (msec): min=48, max=327, avg=171.92, stdev=27.57 00:18:29.831 lat (msec): min=48, max=327, avg=174.59, stdev=29.79 00:18:29.831 clat percentiles (msec): 00:18:29.831 | 1.00th=[ 112], 5.00th=[ 130], 10.00th=[ 140], 20.00th=[ 150], 00:18:29.831 | 30.00th=[ 157], 40.00th=[ 165], 50.00th=[ 174], 60.00th=[ 182], 00:18:29.831 | 70.00th=[ 186], 80.00th=[ 192], 90.00th=[ 203], 95.00th=[ 209], 00:18:29.831 | 99.00th=[ 249], 99.50th=[ 284], 99.90th=[ 305], 99.95th=[ 305], 00:18:29.831 | 99.99th=[ 330] 00:18:29.831 bw ( KiB/s): min=73728, max=114176, per=5.99%, avg=92883.35, stdev=11191.68, samples=20 00:18:29.832 iops : min= 288, max= 446, avg=362.75, stdev=43.75, samples=20 00:18:29.832 lat (msec) : 50=0.03%, 100=0.43%, 250=98.67%, 500=0.87% 00:18:29.832 cpu : usr=0.13%, sys=1.22%, ctx=790, majf=0, minf=4097 00:18:29.832 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.9%, >=64=98.3% 00:18:29.832 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:29.832 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:18:29.832 issued rwts: total=3693,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:29.832 latency : target=0, window=0, percentile=100.00%, depth=64 00:18:29.832 job1: (groupid=0, jobs=1): err= 0: pid=91220: Tue Dec 3 00:52:40 2024 00:18:29.832 read: IOPS=859, BW=215MiB/s (225MB/s)(2156MiB/10028msec) 00:18:29.832 slat (usec): min=15, max=79917, avg=1098.02, stdev=4393.86 00:18:29.832 clat (msec): min=2, max=231, avg=73.19, stdev=31.08 00:18:29.832 lat (msec): min=2, max=250, avg=74.29, stdev=31.63 00:18:29.832 clat percentiles (msec): 00:18:29.832 | 1.00th=[ 18], 5.00th=[ 44], 10.00th=[ 50], 20.00th=[ 56], 00:18:29.832 | 30.00th=[ 60], 40.00th=[ 63], 50.00th=[ 66], 60.00th=[ 69], 00:18:29.832 | 70.00th=[ 73], 80.00th=[ 80], 90.00th=[ 127], 95.00th=[ 146], 00:18:29.832 | 99.00th=[ 186], 99.50th=[ 194], 99.90th=[ 205], 99.95th=[ 207], 00:18:29.832 | 99.99th=[ 232] 00:18:29.832 bw ( KiB/s): min=94720, max=318976, per=14.13%, avg=219030.35, stdev=63778.39, samples=20 00:18:29.832 iops : min= 370, max= 1246, avg=855.50, stdev=249.24, samples=20 00:18:29.832 lat (msec) : 4=0.30%, 10=0.12%, 20=0.86%, 50=10.00%, 100=76.79% 00:18:29.832 lat (msec) : 250=11.93% 00:18:29.832 cpu : usr=0.27%, sys=2.63%, ctx=1572, majf=0, minf=4097 00:18:29.832 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:18:29.832 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:29.832 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:18:29.832 issued rwts: total=8623,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:29.832 latency : target=0, window=0, percentile=100.00%, depth=64 00:18:29.832 job2: (groupid=0, jobs=1): err= 0: pid=91221: Tue Dec 3 00:52:40 2024 00:18:29.832 read: IOPS=701, BW=175MiB/s (184MB/s)(1771MiB/10099msec) 00:18:29.832 slat (usec): min=14, max=105821, avg=1387.76, stdev=5343.03 00:18:29.832 clat (usec): min=1388, max=222386, avg=89699.01, stdev=35579.00 00:18:29.832 lat (usec): min=1458, max=228825, avg=91086.77, stdev=36316.30 00:18:29.832 clat 
percentiles (msec): 00:18:29.832 | 1.00th=[ 17], 5.00th=[ 29], 10.00th=[ 36], 20.00th=[ 47], 00:18:29.832 | 30.00th=[ 80], 40.00th=[ 90], 50.00th=[ 94], 60.00th=[ 102], 00:18:29.832 | 70.00th=[ 111], 80.00th=[ 120], 90.00th=[ 130], 95.00th=[ 138], 00:18:29.832 | 99.00th=[ 184], 99.50th=[ 192], 99.90th=[ 213], 99.95th=[ 222], 00:18:29.832 | 99.99th=[ 224] 00:18:29.832 bw ( KiB/s): min=120832, max=422578, per=11.59%, avg=179647.40, stdev=76349.27, samples=20 00:18:29.832 iops : min= 472, max= 1650, avg=701.60, stdev=298.03, samples=20 00:18:29.832 lat (msec) : 2=0.01%, 4=0.45%, 10=0.35%, 20=0.37%, 50=20.23% 00:18:29.832 lat (msec) : 100=36.99%, 250=41.59% 00:18:29.832 cpu : usr=0.24%, sys=2.46%, ctx=1261, majf=0, minf=4097 00:18:29.832 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.1% 00:18:29.832 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:29.832 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:18:29.832 issued rwts: total=7083,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:29.832 latency : target=0, window=0, percentile=100.00%, depth=64 00:18:29.832 job3: (groupid=0, jobs=1): err= 0: pid=91222: Tue Dec 3 00:52:40 2024 00:18:29.832 read: IOPS=365, BW=91.3MiB/s (95.7MB/s)(922MiB/10101msec) 00:18:29.832 slat (usec): min=20, max=131624, avg=2706.71, stdev=10420.93 00:18:29.832 clat (msec): min=38, max=324, avg=172.10, stdev=30.08 00:18:29.832 lat (msec): min=40, max=330, avg=174.81, stdev=31.93 00:18:29.832 clat percentiles (msec): 00:18:29.832 | 1.00th=[ 59], 5.00th=[ 131], 10.00th=[ 140], 20.00th=[ 150], 00:18:29.832 | 30.00th=[ 161], 40.00th=[ 169], 50.00th=[ 178], 60.00th=[ 184], 00:18:29.832 | 70.00th=[ 188], 80.00th=[ 194], 90.00th=[ 201], 95.00th=[ 209], 00:18:29.832 | 99.00th=[ 232], 99.50th=[ 253], 99.90th=[ 296], 99.95th=[ 321], 00:18:29.832 | 99.99th=[ 326] 00:18:29.832 bw ( KiB/s): min=72849, max=119568, per=5.99%, avg=92786.25, stdev=12288.69, samples=20 00:18:29.832 iops : min= 284, max= 467, avg=362.35, stdev=48.08, samples=20 00:18:29.832 lat (msec) : 50=0.76%, 100=1.49%, 250=97.21%, 500=0.54% 00:18:29.832 cpu : usr=0.13%, sys=1.42%, ctx=781, majf=0, minf=4097 00:18:29.832 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.9%, >=64=98.3% 00:18:29.832 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:29.832 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:18:29.832 issued rwts: total=3689,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:29.832 latency : target=0, window=0, percentile=100.00%, depth=64 00:18:29.832 job4: (groupid=0, jobs=1): err= 0: pid=91223: Tue Dec 3 00:52:40 2024 00:18:29.832 read: IOPS=720, BW=180MiB/s (189MB/s)(1818MiB/10100msec) 00:18:29.832 slat (usec): min=20, max=97300, avg=1305.69, stdev=5602.02 00:18:29.832 clat (msec): min=5, max=264, avg=87.44, stdev=38.70 00:18:29.832 lat (msec): min=5, max=281, avg=88.75, stdev=39.47 00:18:29.832 clat percentiles (msec): 00:18:29.832 | 1.00th=[ 18], 5.00th=[ 42], 10.00th=[ 53], 20.00th=[ 60], 00:18:29.832 | 30.00th=[ 65], 40.00th=[ 68], 50.00th=[ 73], 60.00th=[ 80], 00:18:29.832 | 70.00th=[ 113], 80.00th=[ 125], 90.00th=[ 138], 95.00th=[ 163], 00:18:29.832 | 99.00th=[ 203], 99.50th=[ 218], 99.90th=[ 228], 99.95th=[ 239], 00:18:29.832 | 99.99th=[ 266] 00:18:29.832 bw ( KiB/s): min=89421, max=258560, per=11.90%, avg=184441.40, stdev=59578.65, samples=20 00:18:29.832 iops : min= 349, max= 1010, avg=720.35, stdev=232.78, samples=20 00:18:29.832 lat (msec) : 10=0.26%, 20=1.44%, 
50=6.16%, 100=58.84%, 250=33.26% 00:18:29.832 lat (msec) : 500=0.03% 00:18:29.832 cpu : usr=0.33%, sys=2.44%, ctx=1306, majf=0, minf=4097 00:18:29.832 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.1% 00:18:29.832 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:29.832 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:18:29.832 issued rwts: total=7272,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:29.832 latency : target=0, window=0, percentile=100.00%, depth=64 00:18:29.832 job5: (groupid=0, jobs=1): err= 0: pid=91224: Tue Dec 3 00:52:40 2024 00:18:29.832 read: IOPS=536, BW=134MiB/s (141MB/s)(1350MiB/10061msec) 00:18:29.832 slat (usec): min=19, max=91678, avg=1762.77, stdev=7174.28 00:18:29.832 clat (msec): min=21, max=290, avg=117.27, stdev=61.79 00:18:29.832 lat (msec): min=21, max=296, avg=119.03, stdev=63.05 00:18:29.832 clat percentiles (msec): 00:18:29.832 | 1.00th=[ 39], 5.00th=[ 50], 10.00th=[ 55], 20.00th=[ 61], 00:18:29.832 | 30.00th=[ 64], 40.00th=[ 69], 50.00th=[ 79], 60.00th=[ 146], 00:18:29.832 | 70.00th=[ 178], 80.00th=[ 188], 90.00th=[ 197], 95.00th=[ 207], 00:18:29.832 | 99.00th=[ 232], 99.50th=[ 241], 99.90th=[ 251], 99.95th=[ 255], 00:18:29.832 | 99.99th=[ 292] 00:18:29.832 bw ( KiB/s): min=75776, max=265216, per=8.82%, avg=136606.40, stdev=74857.79, samples=20 00:18:29.832 iops : min= 296, max= 1036, avg=533.50, stdev=292.50, samples=20 00:18:29.832 lat (msec) : 50=5.54%, 100=47.66%, 250=46.58%, 500=0.22% 00:18:29.832 cpu : usr=0.19%, sys=1.78%, ctx=1013, majf=0, minf=4097 00:18:29.832 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.8% 00:18:29.832 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:29.832 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:18:29.832 issued rwts: total=5401,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:29.832 latency : target=0, window=0, percentile=100.00%, depth=64 00:18:29.832 job6: (groupid=0, jobs=1): err= 0: pid=91225: Tue Dec 3 00:52:40 2024 00:18:29.832 read: IOPS=563, BW=141MiB/s (148MB/s)(1423MiB/10100msec) 00:18:29.832 slat (usec): min=14, max=71859, avg=1691.24, stdev=6115.16 00:18:29.832 clat (msec): min=13, max=212, avg=111.60, stdev=27.93 00:18:29.833 lat (msec): min=13, max=240, avg=113.29, stdev=28.73 00:18:29.833 clat percentiles (msec): 00:18:29.833 | 1.00th=[ 38], 5.00th=[ 79], 10.00th=[ 84], 20.00th=[ 90], 00:18:29.833 | 30.00th=[ 96], 40.00th=[ 101], 50.00th=[ 108], 60.00th=[ 115], 00:18:29.833 | 70.00th=[ 124], 80.00th=[ 132], 90.00th=[ 144], 95.00th=[ 169], 00:18:29.833 | 99.00th=[ 194], 99.50th=[ 203], 99.90th=[ 211], 99.95th=[ 213], 00:18:29.833 | 99.99th=[ 213] 00:18:29.833 bw ( KiB/s): min=92487, max=197748, per=9.29%, avg=143972.65, stdev=27281.46, samples=20 00:18:29.833 iops : min= 361, max= 772, avg=562.30, stdev=106.51, samples=20 00:18:29.833 lat (msec) : 20=0.21%, 50=0.93%, 100=37.35%, 250=61.51% 00:18:29.833 cpu : usr=0.20%, sys=2.12%, ctx=1054, majf=0, minf=4097 00:18:29.833 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.9% 00:18:29.833 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:29.833 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:18:29.833 issued rwts: total=5690,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:29.833 latency : target=0, window=0, percentile=100.00%, depth=64 00:18:29.833 job7: (groupid=0, jobs=1): err= 0: pid=91226: Tue Dec 3 00:52:40 2024 00:18:29.833 read: 
IOPS=465, BW=116MiB/s (122MB/s)(1174MiB/10091msec) 00:18:29.833 slat (usec): min=15, max=142472, avg=2072.89, stdev=9234.66 00:18:29.833 clat (msec): min=22, max=307, avg=135.11, stdev=66.66 00:18:29.833 lat (msec): min=22, max=339, avg=137.18, stdev=68.22 00:18:29.833 clat percentiles (msec): 00:18:29.833 | 1.00th=[ 24], 5.00th=[ 29], 10.00th=[ 34], 20.00th=[ 42], 00:18:29.833 | 30.00th=[ 78], 40.00th=[ 150], 50.00th=[ 163], 60.00th=[ 176], 00:18:29.833 | 70.00th=[ 184], 80.00th=[ 190], 90.00th=[ 201], 95.00th=[ 211], 00:18:29.833 | 99.00th=[ 228], 99.50th=[ 232], 99.90th=[ 266], 99.95th=[ 309], 00:18:29.833 | 99.99th=[ 309] 00:18:29.833 bw ( KiB/s): min=69120, max=445952, per=7.65%, avg=118584.80, stdev=86712.80, samples=20 00:18:29.833 iops : min= 270, max= 1742, avg=463.20, stdev=338.73, samples=20 00:18:29.833 lat (msec) : 50=25.29%, 100=6.75%, 250=67.77%, 500=0.19% 00:18:29.833 cpu : usr=0.18%, sys=1.55%, ctx=947, majf=0, minf=4097 00:18:29.833 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.7%, >=64=98.7% 00:18:29.833 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:29.833 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:18:29.833 issued rwts: total=4697,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:29.833 latency : target=0, window=0, percentile=100.00%, depth=64 00:18:29.833 job8: (groupid=0, jobs=1): err= 0: pid=91232: Tue Dec 3 00:52:40 2024 00:18:29.833 read: IOPS=767, BW=192MiB/s (201MB/s)(1929MiB/10054msec) 00:18:29.833 slat (usec): min=20, max=65136, avg=1254.92, stdev=5186.09 00:18:29.833 clat (msec): min=5, max=232, avg=82.00, stdev=47.21 00:18:29.833 lat (msec): min=5, max=244, avg=83.26, stdev=48.11 00:18:29.833 clat percentiles (msec): 00:18:29.833 | 1.00th=[ 11], 5.00th=[ 22], 10.00th=[ 26], 20.00th=[ 29], 00:18:29.833 | 30.00th=[ 37], 40.00th=[ 73], 50.00th=[ 92], 60.00th=[ 101], 00:18:29.833 | 70.00th=[ 111], 80.00th=[ 123], 90.00th=[ 138], 95.00th=[ 157], 00:18:29.833 | 99.00th=[ 197], 99.50th=[ 205], 99.90th=[ 224], 99.95th=[ 224], 00:18:29.833 | 99.99th=[ 232] 00:18:29.833 bw ( KiB/s): min=92344, max=521216, per=12.64%, avg=195891.85, stdev=128188.56, samples=20 00:18:29.833 iops : min= 360, max= 2036, avg=765.05, stdev=500.81, samples=20 00:18:29.833 lat (msec) : 10=0.65%, 20=3.42%, 50=33.64%, 100=22.67%, 250=39.62% 00:18:29.833 cpu : usr=0.24%, sys=2.46%, ctx=1214, majf=0, minf=4097 00:18:29.833 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:18:29.833 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:29.833 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:18:29.833 issued rwts: total=7715,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:29.833 latency : target=0, window=0, percentile=100.00%, depth=64 00:18:29.833 job9: (groupid=0, jobs=1): err= 0: pid=91238: Tue Dec 3 00:52:40 2024 00:18:29.833 read: IOPS=367, BW=92.0MiB/s (96.5MB/s)(929MiB/10101msec) 00:18:29.833 slat (usec): min=16, max=142437, avg=2597.48, stdev=9777.38 00:18:29.833 clat (msec): min=29, max=312, avg=170.91, stdev=38.96 00:18:29.833 lat (msec): min=29, max=313, avg=173.51, stdev=40.48 00:18:29.833 clat percentiles (msec): 00:18:29.833 | 1.00th=[ 44], 5.00th=[ 75], 10.00th=[ 134], 20.00th=[ 148], 00:18:29.833 | 30.00th=[ 161], 40.00th=[ 169], 50.00th=[ 178], 60.00th=[ 186], 00:18:29.833 | 70.00th=[ 190], 80.00th=[ 199], 90.00th=[ 209], 95.00th=[ 220], 00:18:29.833 | 99.00th=[ 243], 99.50th=[ 271], 99.90th=[ 292], 99.95th=[ 292], 00:18:29.833 | 99.99th=[ 
313] 00:18:29.833 bw ( KiB/s): min=70144, max=140800, per=6.03%, avg=93503.85, stdev=17551.76, samples=20 00:18:29.833 iops : min= 274, max= 550, avg=365.10, stdev=68.50, samples=20 00:18:29.833 lat (msec) : 50=1.51%, 100=4.95%, 250=92.63%, 500=0.91% 00:18:29.833 cpu : usr=0.14%, sys=1.22%, ctx=728, majf=0, minf=4097 00:18:29.833 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.9%, >=64=98.3% 00:18:29.833 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:29.833 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:18:29.833 issued rwts: total=3717,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:29.833 latency : target=0, window=0, percentile=100.00%, depth=64 00:18:29.833 job10: (groupid=0, jobs=1): err= 0: pid=91239: Tue Dec 3 00:52:40 2024 00:18:29.833 read: IOPS=353, BW=88.3MiB/s (92.6MB/s)(891MiB/10088msec) 00:18:29.833 slat (usec): min=14, max=116090, avg=2722.03, stdev=9520.73 00:18:29.833 clat (msec): min=28, max=307, avg=178.14, stdev=29.90 00:18:29.833 lat (msec): min=28, max=325, avg=180.87, stdev=31.60 00:18:29.833 clat percentiles (msec): 00:18:29.833 | 1.00th=[ 57], 5.00th=[ 133], 10.00th=[ 146], 20.00th=[ 159], 00:18:29.833 | 30.00th=[ 167], 40.00th=[ 178], 50.00th=[ 184], 60.00th=[ 188], 00:18:29.833 | 70.00th=[ 194], 80.00th=[ 201], 90.00th=[ 207], 95.00th=[ 215], 00:18:29.833 | 99.00th=[ 239], 99.50th=[ 249], 99.90th=[ 279], 99.95th=[ 279], 00:18:29.833 | 99.99th=[ 309] 00:18:29.833 bw ( KiB/s): min=67960, max=120590, per=5.78%, avg=89529.95, stdev=12194.12, samples=20 00:18:29.833 iops : min= 265, max= 471, avg=349.65, stdev=47.69, samples=20 00:18:29.833 lat (msec) : 50=0.79%, 100=1.52%, 250=97.42%, 500=0.28% 00:18:29.833 cpu : usr=0.10%, sys=1.51%, ctx=630, majf=0, minf=4097 00:18:29.833 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.9%, >=64=98.2% 00:18:29.833 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:29.833 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:18:29.833 issued rwts: total=3562,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:29.833 latency : target=0, window=0, percentile=100.00%, depth=64 00:18:29.833 00:18:29.833 Run status group 0 (all jobs): 00:18:29.833 READ: bw=1513MiB/s (1587MB/s), 88.3MiB/s-215MiB/s (92.6MB/s-225MB/s), io=14.9GiB (16.0GB), run=10028-10101msec 00:18:29.833 00:18:29.833 Disk stats (read/write): 00:18:29.833 nvme0n1: ios=7246/0, merge=0/0, ticks=1228791/0, in_queue=1228791, util=97.29% 00:18:29.833 nvme10n1: ios=17085/0, merge=0/0, ticks=1234231/0, in_queue=1234231, util=97.22% 00:18:29.833 nvme1n1: ios=14039/0, merge=0/0, ticks=1237371/0, in_queue=1237371, util=97.98% 00:18:29.833 nvme2n1: ios=7250/0, merge=0/0, ticks=1229762/0, in_queue=1229762, util=97.81% 00:18:29.833 nvme3n1: ios=14416/0, merge=0/0, ticks=1233311/0, in_queue=1233311, util=97.88% 00:18:29.833 nvme4n1: ios=10675/0, merge=0/0, ticks=1240246/0, in_queue=1240246, util=98.03% 00:18:29.833 nvme5n1: ios=11252/0, merge=0/0, ticks=1236169/0, in_queue=1236169, util=98.50% 00:18:29.833 nvme6n1: ios=9266/0, merge=0/0, ticks=1238503/0, in_queue=1238503, util=98.12% 00:18:29.833 nvme7n1: ios=15307/0, merge=0/0, ticks=1236425/0, in_queue=1236425, util=98.61% 00:18:29.834 nvme8n1: ios=7307/0, merge=0/0, ticks=1233497/0, in_queue=1233497, util=98.72% 00:18:29.834 nvme9n1: ios=6987/0, merge=0/0, ticks=1235211/0, in_queue=1235211, util=98.85% 00:18:29.834 00:52:40 -- target/multiconnection.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 
262144 -d 64 -t randwrite -r 10 00:18:29.834 [global] 00:18:29.834 thread=1 00:18:29.834 invalidate=1 00:18:29.834 rw=randwrite 00:18:29.834 time_based=1 00:18:29.834 runtime=10 00:18:29.834 ioengine=libaio 00:18:29.834 direct=1 00:18:29.834 bs=262144 00:18:29.834 iodepth=64 00:18:29.834 norandommap=1 00:18:29.834 numjobs=1 00:18:29.834 00:18:29.834 [job0] 00:18:29.834 filename=/dev/nvme0n1 00:18:29.834 [job1] 00:18:29.834 filename=/dev/nvme10n1 00:18:29.834 [job2] 00:18:29.834 filename=/dev/nvme1n1 00:18:29.834 [job3] 00:18:29.834 filename=/dev/nvme2n1 00:18:29.834 [job4] 00:18:29.834 filename=/dev/nvme3n1 00:18:29.834 [job5] 00:18:29.834 filename=/dev/nvme4n1 00:18:29.834 [job6] 00:18:29.834 filename=/dev/nvme5n1 00:18:29.834 [job7] 00:18:29.834 filename=/dev/nvme6n1 00:18:29.834 [job8] 00:18:29.834 filename=/dev/nvme7n1 00:18:29.834 [job9] 00:18:29.834 filename=/dev/nvme8n1 00:18:29.834 [job10] 00:18:29.834 filename=/dev/nvme9n1 00:18:29.834 Could not set queue depth (nvme0n1) 00:18:29.834 Could not set queue depth (nvme10n1) 00:18:29.834 Could not set queue depth (nvme1n1) 00:18:29.834 Could not set queue depth (nvme2n1) 00:18:29.834 Could not set queue depth (nvme3n1) 00:18:29.834 Could not set queue depth (nvme4n1) 00:18:29.834 Could not set queue depth (nvme5n1) 00:18:29.834 Could not set queue depth (nvme6n1) 00:18:29.834 Could not set queue depth (nvme7n1) 00:18:29.834 Could not set queue depth (nvme8n1) 00:18:29.834 Could not set queue depth (nvme9n1) 00:18:29.834 job0: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:18:29.834 job1: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:18:29.834 job2: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:18:29.834 job3: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:18:29.834 job4: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:18:29.834 job5: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:18:29.834 job6: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:18:29.834 job7: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:18:29.834 job8: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:18:29.834 job9: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:18:29.834 job10: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:18:29.834 fio-3.35 00:18:29.834 Starting 11 threads 00:18:39.817 00:18:39.817 job0: (groupid=0, jobs=1): err= 0: pid=91431: Tue Dec 3 00:52:51 2024 00:18:39.817 write: IOPS=645, BW=161MiB/s (169MB/s)(1627MiB/10086msec); 0 zone resets 00:18:39.818 slat (usec): min=26, max=14849, avg=1532.76, stdev=2597.21 00:18:39.818 clat (msec): min=3, max=175, avg=97.64, stdev= 9.80 00:18:39.818 lat (msec): min=3, max=181, avg=99.18, stdev= 9.63 00:18:39.818 clat percentiles (msec): 00:18:39.818 | 1.00th=[ 89], 5.00th=[ 90], 10.00th=[ 91], 20.00th=[ 94], 00:18:39.818 | 30.00th=[ 95], 40.00th=[ 96], 50.00th=[ 97], 60.00th=[ 99], 00:18:39.818 | 70.00th=[ 100], 
80.00th=[ 101], 90.00th=[ 102], 95.00th=[ 104], 00:18:39.818 | 99.00th=[ 153], 99.50th=[ 161], 99.90th=[ 169], 99.95th=[ 176], 00:18:39.818 | 99.99th=[ 176] 00:18:39.818 bw ( KiB/s): min=125952, max=172544, per=12.00%, avg=164924.40, stdev=9853.90, samples=20 00:18:39.818 iops : min= 492, max= 674, avg=644.20, stdev=38.50, samples=20 00:18:39.818 lat (msec) : 4=0.02%, 20=0.06%, 50=0.25%, 100=76.44%, 250=23.24% 00:18:39.818 cpu : usr=1.63%, sys=1.52%, ctx=9027, majf=0, minf=1 00:18:39.818 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.0% 00:18:39.818 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:39.818 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:18:39.818 issued rwts: total=0,6506,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:39.818 latency : target=0, window=0, percentile=100.00%, depth=64 00:18:39.818 job1: (groupid=0, jobs=1): err= 0: pid=91435: Tue Dec 3 00:52:51 2024 00:18:39.818 write: IOPS=631, BW=158MiB/s (166MB/s)(1593MiB/10091msec); 0 zone resets 00:18:39.818 slat (usec): min=18, max=18173, avg=1565.32, stdev=2673.05 00:18:39.818 clat (msec): min=3, max=186, avg=99.75, stdev= 9.63 00:18:39.818 lat (msec): min=3, max=186, avg=101.32, stdev= 9.40 00:18:39.818 clat percentiles (msec): 00:18:39.818 | 1.00th=[ 89], 5.00th=[ 92], 10.00th=[ 94], 20.00th=[ 95], 00:18:39.818 | 30.00th=[ 96], 40.00th=[ 99], 50.00th=[ 100], 60.00th=[ 102], 00:18:39.818 | 70.00th=[ 102], 80.00th=[ 103], 90.00th=[ 104], 95.00th=[ 106], 00:18:39.818 | 99.00th=[ 140], 99.50th=[ 150], 99.90th=[ 176], 99.95th=[ 182], 00:18:39.818 | 99.99th=[ 186] 00:18:39.818 bw ( KiB/s): min=120832, max=172032, per=11.75%, avg=161494.40, stdev=10362.14, samples=20 00:18:39.818 iops : min= 472, max= 672, avg=630.80, stdev=40.48, samples=20 00:18:39.818 lat (msec) : 4=0.05%, 20=0.03%, 50=0.22%, 100=53.01%, 250=46.69% 00:18:39.818 cpu : usr=0.92%, sys=1.45%, ctx=8505, majf=0, minf=1 00:18:39.818 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=99.0% 00:18:39.818 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:39.818 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:18:39.818 issued rwts: total=0,6372,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:39.818 latency : target=0, window=0, percentile=100.00%, depth=64 00:18:39.818 job2: (groupid=0, jobs=1): err= 0: pid=91447: Tue Dec 3 00:52:51 2024 00:18:39.818 write: IOPS=476, BW=119MiB/s (125MB/s)(1204MiB/10113msec); 0 zone resets 00:18:39.818 slat (usec): min=27, max=47715, avg=2070.66, stdev=3581.95 00:18:39.818 clat (msec): min=6, max=241, avg=132.24, stdev=15.73 00:18:39.818 lat (msec): min=6, max=241, avg=134.31, stdev=15.57 00:18:39.818 clat percentiles (msec): 00:18:39.818 | 1.00th=[ 48], 5.00th=[ 124], 10.00th=[ 125], 20.00th=[ 129], 00:18:39.818 | 30.00th=[ 131], 40.00th=[ 132], 50.00th=[ 132], 60.00th=[ 133], 00:18:39.818 | 70.00th=[ 136], 80.00th=[ 138], 90.00th=[ 140], 95.00th=[ 142], 00:18:39.818 | 99.00th=[ 182], 99.50th=[ 192], 99.90th=[ 234], 99.95th=[ 234], 00:18:39.818 | 99.99th=[ 243] 00:18:39.818 bw ( KiB/s): min=112640, max=126976, per=8.85%, avg=121651.20, stdev=3227.08, samples=20 00:18:39.818 iops : min= 440, max= 496, avg=475.20, stdev=12.61, samples=20 00:18:39.818 lat (msec) : 10=0.23%, 20=0.12%, 50=0.73%, 100=0.42%, 250=98.50% 00:18:39.818 cpu : usr=1.45%, sys=1.57%, ctx=5868, majf=0, minf=1 00:18:39.818 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.7%, >=64=98.7% 00:18:39.818 submit : 0=0.0%, 
4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:39.818 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:18:39.818 issued rwts: total=0,4815,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:39.818 latency : target=0, window=0, percentile=100.00%, depth=64 00:18:39.818 job3: (groupid=0, jobs=1): err= 0: pid=91448: Tue Dec 3 00:52:51 2024 00:18:39.818 write: IOPS=353, BW=88.3MiB/s (92.6MB/s)(897MiB/10162msec); 0 zone resets 00:18:39.818 slat (usec): min=17, max=29864, avg=2735.24, stdev=4823.37 00:18:39.818 clat (msec): min=32, max=334, avg=178.41, stdev=24.51 00:18:39.818 lat (msec): min=32, max=335, avg=181.14, stdev=24.54 00:18:39.818 clat percentiles (msec): 00:18:39.818 | 1.00th=[ 65], 5.00th=[ 138], 10.00th=[ 167], 20.00th=[ 174], 00:18:39.818 | 30.00th=[ 178], 40.00th=[ 180], 50.00th=[ 184], 60.00th=[ 186], 00:18:39.818 | 70.00th=[ 188], 80.00th=[ 190], 90.00th=[ 192], 95.00th=[ 194], 00:18:39.818 | 99.00th=[ 230], 99.50th=[ 288], 99.90th=[ 326], 99.95th=[ 334], 00:18:39.818 | 99.99th=[ 334] 00:18:39.818 bw ( KiB/s): min=86016, max=123392, per=6.57%, avg=90256.40, stdev=8174.86, samples=20 00:18:39.818 iops : min= 336, max= 482, avg=352.55, stdev=31.93, samples=20 00:18:39.818 lat (msec) : 50=0.39%, 100=2.06%, 250=96.71%, 500=0.84% 00:18:39.818 cpu : usr=0.68%, sys=1.31%, ctx=3793, majf=0, minf=1 00:18:39.818 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.9%, >=64=98.2% 00:18:39.818 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:39.818 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:18:39.818 issued rwts: total=0,3589,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:39.818 latency : target=0, window=0, percentile=100.00%, depth=64 00:18:39.818 job4: (groupid=0, jobs=1): err= 0: pid=91449: Tue Dec 3 00:52:51 2024 00:18:39.818 write: IOPS=353, BW=88.5MiB/s (92.8MB/s)(900MiB/10175msec); 0 zone resets 00:18:39.818 slat (usec): min=18, max=24321, avg=2775.07, stdev=4848.85 00:18:39.818 clat (msec): min=8, max=346, avg=177.98, stdev=29.59 00:18:39.818 lat (msec): min=8, max=346, avg=180.76, stdev=29.64 00:18:39.818 clat percentiles (msec): 00:18:39.818 | 1.00th=[ 59], 5.00th=[ 101], 10.00th=[ 169], 20.00th=[ 176], 00:18:39.818 | 30.00th=[ 180], 40.00th=[ 182], 50.00th=[ 186], 60.00th=[ 186], 00:18:39.818 | 70.00th=[ 188], 80.00th=[ 190], 90.00th=[ 192], 95.00th=[ 194], 00:18:39.818 | 99.00th=[ 243], 99.50th=[ 288], 99.90th=[ 338], 99.95th=[ 347], 00:18:39.818 | 99.99th=[ 347] 00:18:39.818 bw ( KiB/s): min=83968, max=144160, per=6.59%, avg=90578.40, stdev=12769.63, samples=20 00:18:39.818 iops : min= 328, max= 563, avg=353.80, stdev=49.86, samples=20 00:18:39.818 lat (msec) : 10=0.22%, 20=0.22%, 50=0.44%, 100=4.11%, 250=94.06% 00:18:39.818 lat (msec) : 500=0.94% 00:18:39.818 cpu : usr=0.73%, sys=1.06%, ctx=3596, majf=0, minf=1 00:18:39.818 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.9%, >=64=98.3% 00:18:39.818 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:39.818 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:18:39.818 issued rwts: total=0,3601,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:39.818 latency : target=0, window=0, percentile=100.00%, depth=64 00:18:39.818 job5: (groupid=0, jobs=1): err= 0: pid=91450: Tue Dec 3 00:52:51 2024 00:18:39.818 write: IOPS=355, BW=88.9MiB/s (93.2MB/s)(903MiB/10160msec); 0 zone resets 00:18:39.818 slat (usec): min=20, max=35958, avg=2763.68, stdev=4809.15 00:18:39.818 clat (msec): min=38, 
max=345, avg=177.23, stdev=25.13 00:18:39.818 lat (msec): min=38, max=345, avg=180.00, stdev=25.03 00:18:39.818 clat percentiles (msec): 00:18:39.818 | 1.00th=[ 90], 5.00th=[ 111], 10.00th=[ 165], 20.00th=[ 174], 00:18:39.818 | 30.00th=[ 176], 40.00th=[ 178], 50.00th=[ 184], 60.00th=[ 186], 00:18:39.818 | 70.00th=[ 188], 80.00th=[ 188], 90.00th=[ 190], 95.00th=[ 192], 00:18:39.818 | 99.00th=[ 243], 99.50th=[ 288], 99.90th=[ 334], 99.95th=[ 347], 00:18:39.818 | 99.99th=[ 347] 00:18:39.818 bw ( KiB/s): min=84480, max=128512, per=6.61%, avg=90792.50, stdev=9417.03, samples=20 00:18:39.818 iops : min= 330, max= 502, avg=354.60, stdev=36.79, samples=20 00:18:39.818 lat (msec) : 50=0.06%, 100=3.99%, 250=95.02%, 500=0.94% 00:18:39.818 cpu : usr=0.87%, sys=1.09%, ctx=3518, majf=0, minf=1 00:18:39.818 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.9%, >=64=98.3% 00:18:39.818 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:39.818 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:18:39.818 issued rwts: total=0,3611,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:39.818 latency : target=0, window=0, percentile=100.00%, depth=64 00:18:39.818 job6: (groupid=0, jobs=1): err= 0: pid=91451: Tue Dec 3 00:52:51 2024 00:18:39.818 write: IOPS=630, BW=158MiB/s (165MB/s)(1591MiB/10088msec); 0 zone resets 00:18:39.818 slat (usec): min=19, max=25989, avg=1566.54, stdev=2673.51 00:18:39.818 clat (msec): min=15, max=183, avg=99.87, stdev=10.02 00:18:39.818 lat (msec): min=15, max=183, avg=101.44, stdev= 9.83 00:18:39.818 clat percentiles (msec): 00:18:39.818 | 1.00th=[ 89], 5.00th=[ 92], 10.00th=[ 94], 20.00th=[ 95], 00:18:39.818 | 30.00th=[ 96], 40.00th=[ 99], 50.00th=[ 100], 60.00th=[ 101], 00:18:39.818 | 70.00th=[ 102], 80.00th=[ 103], 90.00th=[ 104], 95.00th=[ 106], 00:18:39.818 | 99.00th=[ 150], 99.50th=[ 155], 99.90th=[ 171], 99.95th=[ 178], 00:18:39.818 | 99.99th=[ 184] 00:18:39.818 bw ( KiB/s): min=114688, max=172544, per=11.74%, avg=161280.00, stdev=11614.97, samples=20 00:18:39.818 iops : min= 448, max= 674, avg=630.00, stdev=45.37, samples=20 00:18:39.818 lat (msec) : 20=0.06%, 50=0.19%, 100=53.01%, 250=46.74% 00:18:39.818 cpu : usr=1.32%, sys=1.97%, ctx=7473, majf=0, minf=1 00:18:39.818 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=99.0% 00:18:39.818 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:39.818 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:18:39.818 issued rwts: total=0,6363,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:39.818 latency : target=0, window=0, percentile=100.00%, depth=64 00:18:39.818 job7: (groupid=0, jobs=1): err= 0: pid=91452: Tue Dec 3 00:52:51 2024 00:18:39.818 write: IOPS=645, BW=161MiB/s (169MB/s)(1628MiB/10082msec); 0 zone resets 00:18:39.818 slat (usec): min=19, max=15573, avg=1530.83, stdev=2593.17 00:18:39.818 clat (msec): min=18, max=172, avg=97.54, stdev= 9.40 00:18:39.818 lat (msec): min=18, max=172, avg=99.07, stdev= 9.21 00:18:39.818 clat percentiles (msec): 00:18:39.818 | 1.00th=[ 89], 5.00th=[ 90], 10.00th=[ 91], 20.00th=[ 94], 00:18:39.819 | 30.00th=[ 95], 40.00th=[ 96], 50.00th=[ 97], 60.00th=[ 97], 00:18:39.819 | 70.00th=[ 100], 80.00th=[ 101], 90.00th=[ 102], 95.00th=[ 104], 00:18:39.819 | 99.00th=[ 150], 99.50th=[ 159], 99.90th=[ 167], 99.95th=[ 167], 00:18:39.819 | 99.99th=[ 174] 00:18:39.819 bw ( KiB/s): min=126464, max=172544, per=12.01%, avg=165002.40, stdev=9684.41, samples=20 00:18:39.819 iops : min= 494, max= 674, 
avg=644.40, stdev=37.82, samples=20 00:18:39.819 lat (msec) : 20=0.05%, 50=0.31%, 100=77.44%, 250=22.21% 00:18:39.819 cpu : usr=1.80%, sys=1.93%, ctx=5743, majf=0, minf=1 00:18:39.819 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.0% 00:18:39.819 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:39.819 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:18:39.819 issued rwts: total=0,6511,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:39.819 latency : target=0, window=0, percentile=100.00%, depth=64 00:18:39.819 job8: (groupid=0, jobs=1): err= 0: pid=91453: Tue Dec 3 00:52:51 2024 00:18:39.819 write: IOPS=480, BW=120MiB/s (126MB/s)(1213MiB/10110msec); 0 zone resets 00:18:39.819 slat (usec): min=25, max=12157, avg=2010.63, stdev=3489.20 00:18:39.819 clat (msec): min=7, max=237, avg=131.23, stdev=14.04 00:18:39.819 lat (msec): min=7, max=237, avg=133.24, stdev=13.88 00:18:39.819 clat percentiles (msec): 00:18:39.819 | 1.00th=[ 67], 5.00th=[ 123], 10.00th=[ 125], 20.00th=[ 128], 00:18:39.819 | 30.00th=[ 131], 40.00th=[ 132], 50.00th=[ 132], 60.00th=[ 133], 00:18:39.819 | 70.00th=[ 136], 80.00th=[ 138], 90.00th=[ 140], 95.00th=[ 140], 00:18:39.819 | 99.00th=[ 165], 99.50th=[ 188], 99.90th=[ 230], 99.95th=[ 230], 00:18:39.819 | 99.99th=[ 239] 00:18:39.819 bw ( KiB/s): min=117760, max=131584, per=8.92%, avg=122600.00, stdev=3233.22, samples=20 00:18:39.819 iops : min= 460, max= 514, avg=478.90, stdev=12.64, samples=20 00:18:39.819 lat (msec) : 10=0.02%, 20=0.16%, 50=0.43%, 100=1.67%, 250=97.71% 00:18:39.819 cpu : usr=1.24%, sys=1.16%, ctx=6372, majf=0, minf=1 00:18:39.819 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.7%, >=64=98.7% 00:18:39.819 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:39.819 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:18:39.819 issued rwts: total=0,4853,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:39.819 latency : target=0, window=0, percentile=100.00%, depth=64 00:18:39.819 job9: (groupid=0, jobs=1): err= 0: pid=91454: Tue Dec 3 00:52:51 2024 00:18:39.819 write: IOPS=473, BW=118MiB/s (124MB/s)(1196MiB/10101msec); 0 zone resets 00:18:39.819 slat (usec): min=18, max=68082, avg=2085.44, stdev=3652.38 00:18:39.819 clat (msec): min=70, max=227, avg=133.05, stdev= 9.73 00:18:39.819 lat (msec): min=70, max=227, avg=135.14, stdev= 9.20 00:18:39.819 clat percentiles (msec): 00:18:39.819 | 1.00th=[ 118], 5.00th=[ 124], 10.00th=[ 125], 20.00th=[ 129], 00:18:39.819 | 30.00th=[ 131], 40.00th=[ 132], 50.00th=[ 132], 60.00th=[ 134], 00:18:39.819 | 70.00th=[ 136], 80.00th=[ 138], 90.00th=[ 140], 95.00th=[ 140], 00:18:39.819 | 99.00th=[ 174], 99.50th=[ 180], 99.90th=[ 220], 99.95th=[ 220], 00:18:39.819 | 99.99th=[ 228] 00:18:39.819 bw ( KiB/s): min=94208, max=126976, per=8.79%, avg=120806.40, stdev=6705.51, samples=20 00:18:39.819 iops : min= 368, max= 496, avg=471.90, stdev=26.19, samples=20 00:18:39.819 lat (msec) : 100=0.52%, 250=99.48% 00:18:39.819 cpu : usr=1.38%, sys=1.56%, ctx=5161, majf=0, minf=1 00:18:39.819 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.7%, >=64=98.7% 00:18:39.819 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:39.819 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:18:39.819 issued rwts: total=0,4782,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:39.819 latency : target=0, window=0, percentile=100.00%, depth=64 00:18:39.819 job10: (groupid=0, jobs=1): err= 0: 
pid=91455: Tue Dec 3 00:52:51 2024 00:18:39.819 write: IOPS=356, BW=89.0MiB/s (93.4MB/s)(905MiB/10161msec); 0 zone resets 00:18:39.819 slat (usec): min=20, max=35806, avg=2685.60, stdev=4827.77 00:18:39.819 clat (msec): min=8, max=339, avg=176.92, stdev=28.10 00:18:39.819 lat (msec): min=8, max=339, avg=179.60, stdev=28.30 00:18:39.819 clat percentiles (msec): 00:18:39.819 | 1.00th=[ 57], 5.00th=[ 129], 10.00th=[ 163], 20.00th=[ 174], 00:18:39.819 | 30.00th=[ 176], 40.00th=[ 180], 50.00th=[ 184], 60.00th=[ 186], 00:18:39.819 | 70.00th=[ 188], 80.00th=[ 188], 90.00th=[ 190], 95.00th=[ 194], 00:18:39.819 | 99.00th=[ 236], 99.50th=[ 279], 99.90th=[ 330], 99.95th=[ 338], 00:18:39.819 | 99.99th=[ 338] 00:18:39.819 bw ( KiB/s): min=86016, max=138240, per=6.62%, avg=91024.40, stdev=11467.78, samples=20 00:18:39.819 iops : min= 336, max= 540, avg=355.55, stdev=44.79, samples=20 00:18:39.819 lat (msec) : 10=0.03%, 20=0.14%, 50=0.53%, 100=2.93%, 250=95.55% 00:18:39.819 lat (msec) : 500=0.83% 00:18:39.819 cpu : usr=0.49%, sys=0.99%, ctx=3507, majf=0, minf=1 00:18:39.819 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.9%, >=64=98.3% 00:18:39.819 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:39.819 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:18:39.819 issued rwts: total=0,3619,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:39.819 latency : target=0, window=0, percentile=100.00%, depth=64 00:18:39.819 00:18:39.819 Run status group 0 (all jobs): 00:18:39.819 WRITE: bw=1342MiB/s (1407MB/s), 88.3MiB/s-161MiB/s (92.6MB/s-169MB/s), io=13.3GiB (14.3GB), run=10082-10175msec 00:18:39.819 00:18:39.819 Disk stats (read/write): 00:18:39.819 nvme0n1: ios=49/12872, merge=0/0, ticks=39/1215787, in_queue=1215826, util=97.79% 00:18:39.819 nvme10n1: ios=49/12614, merge=0/0, ticks=44/1215972, in_queue=1216016, util=98.03% 00:18:39.819 nvme1n1: ios=28/9498, merge=0/0, ticks=34/1213410, in_queue=1213444, util=98.02% 00:18:39.819 nvme2n1: ios=15/7033, merge=0/0, ticks=19/1210534, in_queue=1210553, util=97.87% 00:18:39.819 nvme3n1: ios=0/7072, merge=0/0, ticks=0/1211904, in_queue=1211904, util=98.08% 00:18:39.819 nvme4n1: ios=0/7086, merge=0/0, ticks=0/1209307, in_queue=1209307, util=98.13% 00:18:39.819 nvme5n1: ios=0/12585, merge=0/0, ticks=0/1215091, in_queue=1215091, util=98.36% 00:18:39.819 nvme6n1: ios=0/12865, merge=0/0, ticks=0/1214048, in_queue=1214048, util=98.32% 00:18:39.819 nvme7n1: ios=0/9566, merge=0/0, ticks=0/1213710, in_queue=1213710, util=98.60% 00:18:39.819 nvme8n1: ios=0/9411, merge=0/0, ticks=0/1211219, in_queue=1211219, util=98.62% 00:18:39.819 nvme9n1: ios=0/7097, merge=0/0, ticks=0/1211057, in_queue=1211057, util=98.73% 00:18:39.819 00:52:51 -- target/multiconnection.sh@36 -- # sync 00:18:39.819 00:52:51 -- target/multiconnection.sh@37 -- # seq 1 11 00:18:39.819 00:52:51 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:18:39.819 00:52:51 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:18:39.819 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:18:39.819 00:52:51 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK1 00:18:39.819 00:52:51 -- common/autotest_common.sh@1208 -- # local i=0 00:18:39.819 00:52:51 -- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:18:39.819 00:52:51 -- common/autotest_common.sh@1209 -- # grep -q -w SPDK1 00:18:39.819 00:52:51 -- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:18:39.819 00:52:51 -- 
common/autotest_common.sh@1216 -- # grep -q -w SPDK1 00:18:39.819 00:52:51 -- common/autotest_common.sh@1220 -- # return 0 00:18:39.819 00:52:51 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:18:39.819 00:52:51 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:39.819 00:52:51 -- common/autotest_common.sh@10 -- # set +x 00:18:39.819 00:52:51 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:39.819 00:52:51 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:18:39.819 00:52:51 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode2 00:18:39.819 NQN:nqn.2016-06.io.spdk:cnode2 disconnected 1 controller(s) 00:18:39.819 00:52:51 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK2 00:18:39.819 00:52:51 -- common/autotest_common.sh@1208 -- # local i=0 00:18:39.819 00:52:51 -- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:18:39.819 00:52:51 -- common/autotest_common.sh@1209 -- # grep -q -w SPDK2 00:18:39.819 00:52:51 -- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:18:39.819 00:52:51 -- common/autotest_common.sh@1216 -- # grep -q -w SPDK2 00:18:39.819 00:52:51 -- common/autotest_common.sh@1220 -- # return 0 00:18:39.819 00:52:51 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:18:39.819 00:52:51 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:39.819 00:52:51 -- common/autotest_common.sh@10 -- # set +x 00:18:39.819 00:52:51 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:39.819 00:52:51 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:18:39.819 00:52:51 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode3 00:18:39.819 NQN:nqn.2016-06.io.spdk:cnode3 disconnected 1 controller(s) 00:18:39.819 00:52:51 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK3 00:18:39.819 00:52:51 -- common/autotest_common.sh@1208 -- # local i=0 00:18:39.819 00:52:51 -- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:18:39.819 00:52:51 -- common/autotest_common.sh@1209 -- # grep -q -w SPDK3 00:18:39.819 00:52:51 -- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:18:39.819 00:52:51 -- common/autotest_common.sh@1216 -- # grep -q -w SPDK3 00:18:39.819 00:52:51 -- common/autotest_common.sh@1220 -- # return 0 00:18:39.819 00:52:51 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:18:39.819 00:52:51 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:39.819 00:52:51 -- common/autotest_common.sh@10 -- # set +x 00:18:39.819 00:52:51 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:39.819 00:52:51 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:18:39.819 00:52:51 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode4 00:18:39.819 NQN:nqn.2016-06.io.spdk:cnode4 disconnected 1 controller(s) 00:18:39.819 00:52:51 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK4 00:18:39.819 00:52:51 -- common/autotest_common.sh@1208 -- # local i=0 00:18:39.819 00:52:51 -- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:18:39.819 00:52:51 -- common/autotest_common.sh@1209 -- # grep -q -w SPDK4 00:18:39.819 00:52:51 -- common/autotest_common.sh@1216 -- # grep -q -w SPDK4 00:18:39.819 00:52:51 -- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:18:39.819 00:52:51 -- 
common/autotest_common.sh@1220 -- # return 0 00:18:39.819 00:52:51 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:18:39.819 00:52:51 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:39.819 00:52:51 -- common/autotest_common.sh@10 -- # set +x 00:18:39.819 00:52:51 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:39.820 00:52:51 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:18:39.820 00:52:51 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode5 00:18:39.820 NQN:nqn.2016-06.io.spdk:cnode5 disconnected 1 controller(s) 00:18:39.820 00:52:51 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK5 00:18:39.820 00:52:51 -- common/autotest_common.sh@1208 -- # local i=0 00:18:39.820 00:52:51 -- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:18:39.820 00:52:51 -- common/autotest_common.sh@1209 -- # grep -q -w SPDK5 00:18:39.820 00:52:51 -- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:18:39.820 00:52:51 -- common/autotest_common.sh@1216 -- # grep -q -w SPDK5 00:18:39.820 00:52:51 -- common/autotest_common.sh@1220 -- # return 0 00:18:39.820 00:52:51 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode5 00:18:39.820 00:52:51 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:39.820 00:52:51 -- common/autotest_common.sh@10 -- # set +x 00:18:39.820 00:52:51 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:39.820 00:52:51 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:18:39.820 00:52:51 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode6 00:18:39.820 NQN:nqn.2016-06.io.spdk:cnode6 disconnected 1 controller(s) 00:18:39.820 00:52:51 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK6 00:18:39.820 00:52:51 -- common/autotest_common.sh@1208 -- # local i=0 00:18:39.820 00:52:51 -- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:18:39.820 00:52:51 -- common/autotest_common.sh@1209 -- # grep -q -w SPDK6 00:18:39.820 00:52:51 -- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:18:39.820 00:52:51 -- common/autotest_common.sh@1216 -- # grep -q -w SPDK6 00:18:39.820 00:52:51 -- common/autotest_common.sh@1220 -- # return 0 00:18:39.820 00:52:51 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode6 00:18:39.820 00:52:51 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:39.820 00:52:51 -- common/autotest_common.sh@10 -- # set +x 00:18:39.820 00:52:51 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:39.820 00:52:51 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:18:39.820 00:52:51 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode7 00:18:39.820 NQN:nqn.2016-06.io.spdk:cnode7 disconnected 1 controller(s) 00:18:39.820 00:52:51 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK7 00:18:39.820 00:52:51 -- common/autotest_common.sh@1208 -- # local i=0 00:18:39.820 00:52:51 -- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:18:39.820 00:52:51 -- common/autotest_common.sh@1209 -- # grep -q -w SPDK7 00:18:39.820 00:52:52 -- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:18:39.820 00:52:52 -- common/autotest_common.sh@1216 -- # grep -q -w SPDK7 00:18:39.820 00:52:52 -- common/autotest_common.sh@1220 -- # return 0 00:18:39.820 00:52:52 -- target/multiconnection.sh@40 
-- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode7 00:18:39.820 00:52:52 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:39.820 00:52:52 -- common/autotest_common.sh@10 -- # set +x 00:18:39.820 00:52:52 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:39.820 00:52:52 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:18:39.820 00:52:52 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode8 00:18:39.820 NQN:nqn.2016-06.io.spdk:cnode8 disconnected 1 controller(s) 00:18:39.820 00:52:52 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK8 00:18:39.820 00:52:52 -- common/autotest_common.sh@1208 -- # local i=0 00:18:39.820 00:52:52 -- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:18:39.820 00:52:52 -- common/autotest_common.sh@1209 -- # grep -q -w SPDK8 00:18:39.820 00:52:52 -- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:18:39.820 00:52:52 -- common/autotest_common.sh@1216 -- # grep -q -w SPDK8 00:18:39.820 00:52:52 -- common/autotest_common.sh@1220 -- # return 0 00:18:39.820 00:52:52 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode8 00:18:39.820 00:52:52 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:39.820 00:52:52 -- common/autotest_common.sh@10 -- # set +x 00:18:39.820 00:52:52 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:39.820 00:52:52 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:18:39.820 00:52:52 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode9 00:18:39.820 NQN:nqn.2016-06.io.spdk:cnode9 disconnected 1 controller(s) 00:18:39.820 00:52:52 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK9 00:18:39.820 00:52:52 -- common/autotest_common.sh@1208 -- # local i=0 00:18:39.820 00:52:52 -- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:18:39.820 00:52:52 -- common/autotest_common.sh@1209 -- # grep -q -w SPDK9 00:18:39.820 00:52:52 -- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:18:39.820 00:52:52 -- common/autotest_common.sh@1216 -- # grep -q -w SPDK9 00:18:39.820 00:52:52 -- common/autotest_common.sh@1220 -- # return 0 00:18:39.820 00:52:52 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode9 00:18:39.820 00:52:52 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:39.820 00:52:52 -- common/autotest_common.sh@10 -- # set +x 00:18:39.820 00:52:52 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:39.820 00:52:52 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:18:39.820 00:52:52 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode10 00:18:39.820 NQN:nqn.2016-06.io.spdk:cnode10 disconnected 1 controller(s) 00:18:39.820 00:52:52 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK10 00:18:39.820 00:52:52 -- common/autotest_common.sh@1208 -- # local i=0 00:18:39.820 00:52:52 -- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:18:39.820 00:52:52 -- common/autotest_common.sh@1209 -- # grep -q -w SPDK10 00:18:39.820 00:52:52 -- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:18:39.820 00:52:52 -- common/autotest_common.sh@1216 -- # grep -q -w SPDK10 00:18:39.820 00:52:52 -- common/autotest_common.sh@1220 -- # return 0 00:18:39.820 00:52:52 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode10 00:18:39.820 00:52:52 -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:18:39.820 00:52:52 -- common/autotest_common.sh@10 -- # set +x 00:18:39.820 00:52:52 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:39.820 00:52:52 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:18:39.820 00:52:52 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode11 00:18:40.078 NQN:nqn.2016-06.io.spdk:cnode11 disconnected 1 controller(s) 00:18:40.078 00:52:52 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK11 00:18:40.078 00:52:52 -- common/autotest_common.sh@1208 -- # local i=0 00:18:40.078 00:52:52 -- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:18:40.078 00:52:52 -- common/autotest_common.sh@1209 -- # grep -q -w SPDK11 00:18:40.078 00:52:52 -- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:18:40.078 00:52:52 -- common/autotest_common.sh@1216 -- # grep -q -w SPDK11 00:18:40.078 00:52:52 -- common/autotest_common.sh@1220 -- # return 0 00:18:40.078 00:52:52 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode11 00:18:40.078 00:52:52 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:40.078 00:52:52 -- common/autotest_common.sh@10 -- # set +x 00:18:40.078 00:52:52 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:40.078 00:52:52 -- target/multiconnection.sh@43 -- # rm -f ./local-job0-0-verify.state 00:18:40.078 00:52:52 -- target/multiconnection.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:18:40.078 00:52:52 -- target/multiconnection.sh@47 -- # nvmftestfini 00:18:40.078 00:52:52 -- nvmf/common.sh@476 -- # nvmfcleanup 00:18:40.078 00:52:52 -- nvmf/common.sh@116 -- # sync 00:18:40.078 00:52:52 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:18:40.078 00:52:52 -- nvmf/common.sh@119 -- # set +e 00:18:40.078 00:52:52 -- nvmf/common.sh@120 -- # for i in {1..20} 00:18:40.078 00:52:52 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:18:40.078 rmmod nvme_tcp 00:18:40.078 rmmod nvme_fabrics 00:18:40.078 rmmod nvme_keyring 00:18:40.078 00:52:52 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:18:40.078 00:52:52 -- nvmf/common.sh@123 -- # set -e 00:18:40.078 00:52:52 -- nvmf/common.sh@124 -- # return 0 00:18:40.078 00:52:52 -- nvmf/common.sh@477 -- # '[' -n 90746 ']' 00:18:40.078 00:52:52 -- nvmf/common.sh@478 -- # killprocess 90746 00:18:40.078 00:52:52 -- common/autotest_common.sh@936 -- # '[' -z 90746 ']' 00:18:40.078 00:52:52 -- common/autotest_common.sh@940 -- # kill -0 90746 00:18:40.078 00:52:52 -- common/autotest_common.sh@941 -- # uname 00:18:40.078 00:52:52 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:18:40.078 00:52:52 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 90746 00:18:40.078 killing process with pid 90746 00:18:40.078 00:52:52 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:18:40.078 00:52:52 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:18:40.078 00:52:52 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 90746' 00:18:40.078 00:52:52 -- common/autotest_common.sh@955 -- # kill 90746 00:18:40.078 00:52:52 -- common/autotest_common.sh@960 -- # wait 90746 00:18:40.644 00:52:53 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:18:40.644 00:52:53 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:18:40.644 00:52:53 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:18:40.644 00:52:53 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:18:40.644 00:52:53 
-- nvmf/common.sh@277 -- # remove_spdk_ns 00:18:40.644 00:52:53 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:40.644 00:52:53 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:40.644 00:52:53 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:40.644 00:52:53 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:18:40.644 00:18:40.644 real 0m50.105s 00:18:40.644 user 2m51.052s 00:18:40.644 sys 0m23.584s 00:18:40.644 00:52:53 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:18:40.644 00:52:53 -- common/autotest_common.sh@10 -- # set +x 00:18:40.644 ************************************ 00:18:40.644 END TEST nvmf_multiconnection 00:18:40.644 ************************************ 00:18:40.902 00:52:53 -- nvmf/nvmf.sh@66 -- # run_test nvmf_initiator_timeout /home/vagrant/spdk_repo/spdk/test/nvmf/target/initiator_timeout.sh --transport=tcp 00:18:40.902 00:52:53 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:18:40.902 00:52:53 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:18:40.902 00:52:53 -- common/autotest_common.sh@10 -- # set +x 00:18:40.902 ************************************ 00:18:40.902 START TEST nvmf_initiator_timeout 00:18:40.902 ************************************ 00:18:40.902 00:52:53 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/initiator_timeout.sh --transport=tcp 00:18:40.902 * Looking for test storage... 00:18:40.902 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:18:40.902 00:52:53 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:18:40.902 00:52:53 -- common/autotest_common.sh@1690 -- # lcov --version 00:18:40.902 00:52:53 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:18:40.902 00:52:53 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:18:40.902 00:52:53 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:18:40.902 00:52:53 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:18:40.902 00:52:53 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:18:40.902 00:52:53 -- scripts/common.sh@335 -- # IFS=.-: 00:18:40.902 00:52:53 -- scripts/common.sh@335 -- # read -ra ver1 00:18:40.902 00:52:53 -- scripts/common.sh@336 -- # IFS=.-: 00:18:40.902 00:52:53 -- scripts/common.sh@336 -- # read -ra ver2 00:18:40.902 00:52:53 -- scripts/common.sh@337 -- # local 'op=<' 00:18:40.902 00:52:53 -- scripts/common.sh@339 -- # ver1_l=2 00:18:40.902 00:52:53 -- scripts/common.sh@340 -- # ver2_l=1 00:18:40.902 00:52:53 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:18:40.902 00:52:53 -- scripts/common.sh@343 -- # case "$op" in 00:18:40.902 00:52:53 -- scripts/common.sh@344 -- # : 1 00:18:40.902 00:52:53 -- scripts/common.sh@363 -- # (( v = 0 )) 00:18:40.902 00:52:53 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:18:40.902 00:52:53 -- scripts/common.sh@364 -- # decimal 1 00:18:40.902 00:52:53 -- scripts/common.sh@352 -- # local d=1 00:18:40.902 00:52:53 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:40.902 00:52:53 -- scripts/common.sh@354 -- # echo 1 00:18:40.902 00:52:53 -- scripts/common.sh@364 -- # ver1[v]=1 00:18:40.902 00:52:53 -- scripts/common.sh@365 -- # decimal 2 00:18:40.902 00:52:53 -- scripts/common.sh@352 -- # local d=2 00:18:40.902 00:52:53 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:40.902 00:52:53 -- scripts/common.sh@354 -- # echo 2 00:18:40.902 00:52:53 -- scripts/common.sh@365 -- # ver2[v]=2 00:18:40.902 00:52:53 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:18:40.902 00:52:53 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:18:40.902 00:52:53 -- scripts/common.sh@367 -- # return 0 00:18:40.902 00:52:53 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:40.902 00:52:53 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:18:40.902 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:40.902 --rc genhtml_branch_coverage=1 00:18:40.902 --rc genhtml_function_coverage=1 00:18:40.902 --rc genhtml_legend=1 00:18:40.902 --rc geninfo_all_blocks=1 00:18:40.902 --rc geninfo_unexecuted_blocks=1 00:18:40.902 00:18:40.902 ' 00:18:40.902 00:52:53 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:18:40.902 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:40.902 --rc genhtml_branch_coverage=1 00:18:40.902 --rc genhtml_function_coverage=1 00:18:40.902 --rc genhtml_legend=1 00:18:40.902 --rc geninfo_all_blocks=1 00:18:40.902 --rc geninfo_unexecuted_blocks=1 00:18:40.902 00:18:40.902 ' 00:18:40.902 00:52:53 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:18:40.902 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:40.902 --rc genhtml_branch_coverage=1 00:18:40.902 --rc genhtml_function_coverage=1 00:18:40.902 --rc genhtml_legend=1 00:18:40.902 --rc geninfo_all_blocks=1 00:18:40.902 --rc geninfo_unexecuted_blocks=1 00:18:40.902 00:18:40.902 ' 00:18:40.902 00:52:53 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:18:40.902 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:40.902 --rc genhtml_branch_coverage=1 00:18:40.902 --rc genhtml_function_coverage=1 00:18:40.902 --rc genhtml_legend=1 00:18:40.902 --rc geninfo_all_blocks=1 00:18:40.902 --rc geninfo_unexecuted_blocks=1 00:18:40.902 00:18:40.902 ' 00:18:40.902 00:52:53 -- target/initiator_timeout.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:18:40.902 00:52:53 -- nvmf/common.sh@7 -- # uname -s 00:18:40.902 00:52:53 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:40.902 00:52:53 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:40.902 00:52:53 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:40.902 00:52:53 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:40.902 00:52:53 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:40.902 00:52:53 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:40.902 00:52:53 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:40.902 00:52:53 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:40.902 00:52:53 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:40.902 00:52:53 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:40.902 00:52:53 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:15939434-fa82-47b6-ae7c-b62ba203cee8 
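The trace above shows nvmf/common.sh deriving the host identity for this run: nvme gen-hostnqn produces NVME_HOSTNQN and the UUID portion becomes NVME_HOSTID, both of which get passed to the later nvme connect calls. As a minimal sketch of that connect/disconnect pattern with nvme-cli (the address, port and subsystem NQN below are the defaults this log uses later, not values that must be reused):

    # Derive a host NQN and host ID the way nvmf/common.sh does.
    NVME_HOSTNQN=$(nvme gen-hostnqn)          # nqn.2014-08.org.nvmexpress:uuid:<uuid>
    NVME_HOSTID=${NVME_HOSTNQN##*:}           # keep only the <uuid> part

    # Connect to the subsystem the target exposes on 10.0.0.2:4420 ...
    nvme connect -t tcp -a 10.0.0.2 -s 4420 \
        -n nqn.2016-06.io.spdk:cnode1 \
        --hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID"

    # ... and tear the session down again by subsystem NQN when done.
    nvme disconnect -n nqn.2016-06.io.spdk:cnode1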
00:18:40.902 00:52:53 -- nvmf/common.sh@18 -- # NVME_HOSTID=15939434-fa82-47b6-ae7c-b62ba203cee8 00:18:40.902 00:52:53 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:40.902 00:52:53 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:40.902 00:52:53 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:18:40.902 00:52:53 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:18:40.902 00:52:53 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:40.902 00:52:53 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:40.902 00:52:53 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:40.903 00:52:53 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:40.903 00:52:53 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:40.903 00:52:53 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:40.903 00:52:53 -- paths/export.sh@5 -- # export PATH 00:18:40.903 00:52:53 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:40.903 00:52:53 -- nvmf/common.sh@46 -- # : 0 00:18:40.903 00:52:53 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:18:40.903 00:52:53 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:18:40.903 00:52:53 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:18:40.903 00:52:53 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:40.903 00:52:53 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:40.903 00:52:53 -- nvmf/common.sh@32 -- # 
'[' -n '' ']' 00:18:40.903 00:52:53 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:18:40.903 00:52:53 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:18:40.903 00:52:53 -- target/initiator_timeout.sh@11 -- # MALLOC_BDEV_SIZE=64 00:18:40.903 00:52:53 -- target/initiator_timeout.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:18:40.903 00:52:53 -- target/initiator_timeout.sh@14 -- # nvmftestinit 00:18:40.903 00:52:53 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:18:40.903 00:52:53 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:40.903 00:52:53 -- nvmf/common.sh@436 -- # prepare_net_devs 00:18:40.903 00:52:53 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:18:40.903 00:52:53 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:18:40.903 00:52:53 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:40.903 00:52:53 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:40.903 00:52:53 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:40.903 00:52:53 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:18:40.903 00:52:53 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:18:40.903 00:52:53 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:18:40.903 00:52:53 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:18:40.903 00:52:53 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:18:40.903 00:52:53 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:18:40.903 00:52:53 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:40.903 00:52:53 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:40.903 00:52:53 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:18:40.903 00:52:53 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:18:40.903 00:52:53 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:18:40.903 00:52:53 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:18:40.903 00:52:53 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:18:40.903 00:52:53 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:40.903 00:52:53 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:18:40.903 00:52:53 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:18:40.903 00:52:53 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:18:40.903 00:52:53 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:18:40.903 00:52:53 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:18:40.903 00:52:53 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:18:40.903 Cannot find device "nvmf_tgt_br" 00:18:40.903 00:52:53 -- nvmf/common.sh@154 -- # true 00:18:40.903 00:52:53 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:18:41.160 Cannot find device "nvmf_tgt_br2" 00:18:41.160 00:52:53 -- nvmf/common.sh@155 -- # true 00:18:41.160 00:52:53 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:18:41.160 00:52:53 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:18:41.160 Cannot find device "nvmf_tgt_br" 00:18:41.160 00:52:53 -- nvmf/common.sh@157 -- # true 00:18:41.160 00:52:53 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:18:41.160 Cannot find device "nvmf_tgt_br2" 00:18:41.160 00:52:53 -- nvmf/common.sh@158 -- # true 00:18:41.160 00:52:53 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:18:41.160 00:52:53 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:18:41.160 00:52:53 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 
00:18:41.160 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:18:41.160 00:52:53 -- nvmf/common.sh@161 -- # true 00:18:41.160 00:52:53 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:18:41.160 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:18:41.160 00:52:53 -- nvmf/common.sh@162 -- # true 00:18:41.160 00:52:53 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:18:41.160 00:52:53 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:18:41.160 00:52:53 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:18:41.160 00:52:53 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:18:41.160 00:52:53 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:18:41.160 00:52:53 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:18:41.160 00:52:53 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:18:41.160 00:52:53 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:18:41.160 00:52:53 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:18:41.160 00:52:53 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:18:41.160 00:52:53 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:18:41.160 00:52:53 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:18:41.160 00:52:53 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:18:41.160 00:52:53 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:18:41.160 00:52:53 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:18:41.160 00:52:53 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:18:41.160 00:52:53 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:18:41.160 00:52:53 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:18:41.160 00:52:53 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:18:41.160 00:52:53 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:18:41.160 00:52:53 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:18:41.418 00:52:53 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:18:41.418 00:52:53 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:18:41.418 00:52:53 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:18:41.418 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:41.418 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.070 ms 00:18:41.418 00:18:41.418 --- 10.0.0.2 ping statistics --- 00:18:41.418 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:41.418 rtt min/avg/max/mdev = 0.070/0.070/0.070/0.000 ms 00:18:41.418 00:52:53 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:18:41.418 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:18:41.418 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.090 ms 00:18:41.418 00:18:41.418 --- 10.0.0.3 ping statistics --- 00:18:41.418 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:41.418 rtt min/avg/max/mdev = 0.090/0.090/0.090/0.000 ms 00:18:41.418 00:52:53 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:18:41.418 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:18:41.418 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.031 ms 00:18:41.418 00:18:41.418 --- 10.0.0.1 ping statistics --- 00:18:41.418 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:41.418 rtt min/avg/max/mdev = 0.031/0.031/0.031/0.000 ms 00:18:41.418 00:52:53 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:41.418 00:52:53 -- nvmf/common.sh@421 -- # return 0 00:18:41.418 00:52:53 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:18:41.418 00:52:53 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:41.418 00:52:53 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:18:41.418 00:52:53 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:18:41.418 00:52:53 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:41.418 00:52:53 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:18:41.418 00:52:53 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:18:41.418 00:52:53 -- target/initiator_timeout.sh@15 -- # nvmfappstart -m 0xF 00:18:41.418 00:52:53 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:18:41.418 00:52:53 -- common/autotest_common.sh@722 -- # xtrace_disable 00:18:41.418 00:52:53 -- common/autotest_common.sh@10 -- # set +x 00:18:41.418 00:52:53 -- nvmf/common.sh@469 -- # nvmfpid=91832 00:18:41.418 00:52:53 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:18:41.418 00:52:53 -- nvmf/common.sh@470 -- # waitforlisten 91832 00:18:41.418 00:52:53 -- common/autotest_common.sh@829 -- # '[' -z 91832 ']' 00:18:41.418 00:52:53 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:41.418 00:52:53 -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:41.418 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:41.418 00:52:53 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:41.418 00:52:53 -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:41.418 00:52:53 -- common/autotest_common.sh@10 -- # set +x 00:18:41.418 [2024-12-03 00:52:53.780375] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:18:41.418 [2024-12-03 00:52:53.780492] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:41.418 [2024-12-03 00:52:53.920218] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:18:41.675 [2024-12-03 00:52:53.993724] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:18:41.675 [2024-12-03 00:52:53.993896] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:41.675 [2024-12-03 00:52:53.993911] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:41.675 [2024-12-03 00:52:53.993921] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
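Everything from nvmfappstart -m 0xF down to the reactor messages that follow is the target coming up inside the nvmf_tgt_ns_spdk namespace. Roughly, that launch-and-wait sequence reduces to the sketch below; the repository path matches this run, but the polling loop is only a simplified stand-in for the waitforlisten helper, which additionally checks the PID and enforces a timeout.

    SPDK=/home/vagrant/spdk_repo/spdk

    # Start nvmf_tgt in the target namespace: shm id 0, all trace groups, cores 0-3.
    ip netns exec nvmf_tgt_ns_spdk \
        "$SPDK/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0xF &
    nvmfpid=$!

    # Block until the app answers on its RPC socket before issuing any RPCs.
    until "$SPDK/scripts/rpc.py" -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
        sleep 0.5
    done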
00:18:41.675 [2024-12-03 00:52:53.994407] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:18:41.675 [2024-12-03 00:52:53.994547] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:18:41.675 [2024-12-03 00:52:53.994639] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:18:41.675 [2024-12-03 00:52:53.994649] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:42.609 00:52:54 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:42.609 00:52:54 -- common/autotest_common.sh@862 -- # return 0 00:18:42.609 00:52:54 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:18:42.609 00:52:54 -- common/autotest_common.sh@728 -- # xtrace_disable 00:18:42.609 00:52:54 -- common/autotest_common.sh@10 -- # set +x 00:18:42.609 00:52:54 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:42.609 00:52:54 -- target/initiator_timeout.sh@17 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $nvmfpid; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:18:42.609 00:52:54 -- target/initiator_timeout.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:18:42.609 00:52:54 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:42.609 00:52:54 -- common/autotest_common.sh@10 -- # set +x 00:18:42.609 Malloc0 00:18:42.609 00:52:54 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:42.609 00:52:54 -- target/initiator_timeout.sh@22 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 30 -t 30 -w 30 -n 30 00:18:42.609 00:52:54 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:42.609 00:52:54 -- common/autotest_common.sh@10 -- # set +x 00:18:42.609 Delay0 00:18:42.609 00:52:54 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:42.609 00:52:54 -- target/initiator_timeout.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:18:42.609 00:52:54 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:42.609 00:52:54 -- common/autotest_common.sh@10 -- # set +x 00:18:42.609 [2024-12-03 00:52:54.893250] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:42.609 00:52:54 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:42.609 00:52:54 -- target/initiator_timeout.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:18:42.609 00:52:54 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:42.609 00:52:54 -- common/autotest_common.sh@10 -- # set +x 00:18:42.609 00:52:54 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:42.609 00:52:54 -- target/initiator_timeout.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:18:42.609 00:52:54 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:42.609 00:52:54 -- common/autotest_common.sh@10 -- # set +x 00:18:42.609 00:52:54 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:42.609 00:52:54 -- target/initiator_timeout.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:18:42.609 00:52:54 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:42.609 00:52:54 -- common/autotest_common.sh@10 -- # set +x 00:18:42.609 [2024-12-03 00:52:54.921537] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:42.609 00:52:54 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:42.609 00:52:54 -- target/initiator_timeout.sh@29 -- # nvme connect 
--hostnqn=nqn.2014-08.org.nvmexpress:uuid:15939434-fa82-47b6-ae7c-b62ba203cee8 --hostid=15939434-fa82-47b6-ae7c-b62ba203cee8 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:18:42.609 00:52:55 -- target/initiator_timeout.sh@31 -- # waitforserial SPDKISFASTANDAWESOME 00:18:42.609 00:52:55 -- common/autotest_common.sh@1187 -- # local i=0 00:18:42.609 00:52:55 -- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 00:18:42.609 00:52:55 -- common/autotest_common.sh@1189 -- # [[ -n '' ]] 00:18:42.609 00:52:55 -- common/autotest_common.sh@1194 -- # sleep 2 00:18:45.193 00:52:57 -- common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 00:18:45.193 00:52:57 -- common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL 00:18:45.193 00:52:57 -- common/autotest_common.sh@1196 -- # grep -c SPDKISFASTANDAWESOME 00:18:45.193 00:52:57 -- common/autotest_common.sh@1196 -- # nvme_devices=1 00:18:45.193 00:52:57 -- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:18:45.193 00:52:57 -- common/autotest_common.sh@1197 -- # return 0 00:18:45.193 00:52:57 -- target/initiator_timeout.sh@35 -- # fio_pid=91910 00:18:45.193 00:52:57 -- target/initiator_timeout.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 60 -v 00:18:45.193 00:52:57 -- target/initiator_timeout.sh@37 -- # sleep 3 00:18:45.193 [global] 00:18:45.193 thread=1 00:18:45.193 invalidate=1 00:18:45.193 rw=write 00:18:45.193 time_based=1 00:18:45.193 runtime=60 00:18:45.193 ioengine=libaio 00:18:45.193 direct=1 00:18:45.193 bs=4096 00:18:45.193 iodepth=1 00:18:45.193 norandommap=0 00:18:45.193 numjobs=1 00:18:45.193 00:18:45.193 verify_dump=1 00:18:45.193 verify_backlog=512 00:18:45.193 verify_state_save=0 00:18:45.193 do_verify=1 00:18:45.193 verify=crc32c-intel 00:18:45.193 [job0] 00:18:45.193 filename=/dev/nvme0n1 00:18:45.193 Could not set queue depth (nvme0n1) 00:18:45.193 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:18:45.193 fio-3.35 00:18:45.193 Starting 1 thread 00:18:47.727 00:53:00 -- target/initiator_timeout.sh@40 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_read 31000000 00:18:47.727 00:53:00 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:47.727 00:53:00 -- common/autotest_common.sh@10 -- # set +x 00:18:47.727 true 00:18:47.727 00:53:00 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:47.727 00:53:00 -- target/initiator_timeout.sh@41 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_write 31000000 00:18:47.727 00:53:00 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:47.727 00:53:00 -- common/autotest_common.sh@10 -- # set +x 00:18:47.727 true 00:18:47.727 00:53:00 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:47.727 00:53:00 -- target/initiator_timeout.sh@42 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_read 31000000 00:18:47.727 00:53:00 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:47.727 00:53:00 -- common/autotest_common.sh@10 -- # set +x 00:18:47.727 true 00:18:47.728 00:53:00 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:47.728 00:53:00 -- target/initiator_timeout.sh@43 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_write 310000000 00:18:47.728 00:53:00 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:47.728 00:53:00 -- common/autotest_common.sh@10 -- # set +x 00:18:47.728 true 00:18:47.728 00:53:00 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:47.728 00:53:00 -- 
target/initiator_timeout.sh@45 -- # sleep 3 00:18:51.017 00:53:03 -- target/initiator_timeout.sh@48 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_read 30 00:18:51.017 00:53:03 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:51.017 00:53:03 -- common/autotest_common.sh@10 -- # set +x 00:18:51.017 true 00:18:51.017 00:53:03 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:51.017 00:53:03 -- target/initiator_timeout.sh@49 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_write 30 00:18:51.017 00:53:03 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:51.017 00:53:03 -- common/autotest_common.sh@10 -- # set +x 00:18:51.017 true 00:18:51.017 00:53:03 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:51.017 00:53:03 -- target/initiator_timeout.sh@50 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_read 30 00:18:51.018 00:53:03 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:51.018 00:53:03 -- common/autotest_common.sh@10 -- # set +x 00:18:51.018 true 00:18:51.018 00:53:03 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:51.018 00:53:03 -- target/initiator_timeout.sh@51 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_write 30 00:18:51.018 00:53:03 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:51.018 00:53:03 -- common/autotest_common.sh@10 -- # set +x 00:18:51.018 true 00:18:51.018 00:53:03 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:51.018 00:53:03 -- target/initiator_timeout.sh@53 -- # fio_status=0 00:18:51.018 00:53:03 -- target/initiator_timeout.sh@54 -- # wait 91910 00:19:47.240 00:19:47.240 job0: (groupid=0, jobs=1): err= 0: pid=91935: Tue Dec 3 00:53:57 2024 00:19:47.240 read: IOPS=817, BW=3268KiB/s (3347kB/s)(191MiB/60000msec) 00:19:47.240 slat (usec): min=11, max=226, avg=13.73, stdev= 4.12 00:19:47.240 clat (usec): min=39, max=2039, avg=202.01, stdev=23.39 00:19:47.240 lat (usec): min=160, max=2061, avg=215.73, stdev=24.11 00:19:47.240 clat percentiles (usec): 00:19:47.240 | 1.00th=[ 169], 5.00th=[ 176], 10.00th=[ 180], 20.00th=[ 186], 00:19:47.240 | 30.00th=[ 190], 40.00th=[ 194], 50.00th=[ 198], 60.00th=[ 204], 00:19:47.240 | 70.00th=[ 210], 80.00th=[ 219], 90.00th=[ 231], 95.00th=[ 241], 00:19:47.240 | 99.00th=[ 262], 99.50th=[ 277], 99.90th=[ 330], 99.95th=[ 371], 00:19:47.240 | 99.99th=[ 832] 00:19:47.240 write: IOPS=819, BW=3277KiB/s (3355kB/s)(192MiB/60000msec); 0 zone resets 00:19:47.240 slat (usec): min=15, max=10496, avg=19.61, stdev=59.16 00:19:47.240 clat (usec): min=71, max=40480k, avg=983.23, stdev=182584.16 00:19:47.240 lat (usec): min=137, max=40480k, avg=1002.84, stdev=182584.16 00:19:47.240 clat percentiles (usec): 00:19:47.240 | 1.00th=[ 133], 5.00th=[ 137], 10.00th=[ 141], 20.00th=[ 145], 00:19:47.240 | 30.00th=[ 149], 40.00th=[ 153], 50.00th=[ 155], 60.00th=[ 159], 00:19:47.240 | 70.00th=[ 165], 80.00th=[ 174], 90.00th=[ 186], 95.00th=[ 196], 00:19:47.240 | 99.00th=[ 217], 99.50th=[ 229], 99.90th=[ 269], 99.95th=[ 310], 00:19:47.240 | 99.99th=[ 922] 00:19:47.240 bw ( KiB/s): min= 4304, max=12288, per=100.00%, avg=9872.41, stdev=1819.72, samples=39 00:19:47.240 iops : min= 1076, max= 3072, avg=2468.10, stdev=454.93, samples=39 00:19:47.240 lat (usec) : 50=0.01%, 100=0.01%, 250=98.77%, 500=1.20%, 750=0.01% 00:19:47.240 lat (usec) : 1000=0.01% 00:19:47.240 lat (msec) : 2=0.01%, 4=0.01%, >=2000=0.01% 00:19:47.240 cpu : usr=0.50%, sys=1.99%, ctx=98192, majf=0, minf=5 00:19:47.240 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:47.240 submit : 0=0.0%, 
4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:47.240 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:47.240 issued rwts: total=49023,49152,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:47.240 latency : target=0, window=0, percentile=100.00%, depth=1 00:19:47.240 00:19:47.240 Run status group 0 (all jobs): 00:19:47.240 READ: bw=3268KiB/s (3347kB/s), 3268KiB/s-3268KiB/s (3347kB/s-3347kB/s), io=191MiB (201MB), run=60000-60000msec 00:19:47.240 WRITE: bw=3277KiB/s (3355kB/s), 3277KiB/s-3277KiB/s (3355kB/s-3355kB/s), io=192MiB (201MB), run=60000-60000msec 00:19:47.240 00:19:47.240 Disk stats (read/write): 00:19:47.240 nvme0n1: ios=48908/49042, merge=0/0, ticks=10173/8256, in_queue=18429, util=99.76% 00:19:47.241 00:53:57 -- target/initiator_timeout.sh@56 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:19:47.241 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:19:47.241 00:53:57 -- target/initiator_timeout.sh@57 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:19:47.241 00:53:57 -- common/autotest_common.sh@1208 -- # local i=0 00:19:47.241 00:53:57 -- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:19:47.241 00:53:57 -- common/autotest_common.sh@1209 -- # grep -q -w SPDKISFASTANDAWESOME 00:19:47.241 00:53:57 -- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:19:47.241 00:53:57 -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:19:47.241 nvmf hotplug test: fio successful as expected 00:19:47.241 00:53:57 -- common/autotest_common.sh@1220 -- # return 0 00:19:47.241 00:53:57 -- target/initiator_timeout.sh@59 -- # '[' 0 -eq 0 ']' 00:19:47.241 00:53:57 -- target/initiator_timeout.sh@60 -- # echo 'nvmf hotplug test: fio successful as expected' 00:19:47.241 00:53:57 -- target/initiator_timeout.sh@67 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:19:47.241 00:53:57 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:47.241 00:53:57 -- common/autotest_common.sh@10 -- # set +x 00:19:47.241 00:53:57 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:47.241 00:53:57 -- target/initiator_timeout.sh@69 -- # rm -f ./local-job0-0-verify.state 00:19:47.241 00:53:57 -- target/initiator_timeout.sh@71 -- # trap - SIGINT SIGTERM EXIT 00:19:47.241 00:53:57 -- target/initiator_timeout.sh@73 -- # nvmftestfini 00:19:47.241 00:53:57 -- nvmf/common.sh@476 -- # nvmfcleanup 00:19:47.241 00:53:57 -- nvmf/common.sh@116 -- # sync 00:19:47.241 00:53:57 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:19:47.241 00:53:57 -- nvmf/common.sh@119 -- # set +e 00:19:47.241 00:53:57 -- nvmf/common.sh@120 -- # for i in {1..20} 00:19:47.241 00:53:57 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:19:47.241 rmmod nvme_tcp 00:19:47.241 rmmod nvme_fabrics 00:19:47.241 rmmod nvme_keyring 00:19:47.241 00:53:57 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:19:47.241 00:53:57 -- nvmf/common.sh@123 -- # set -e 00:19:47.241 00:53:57 -- nvmf/common.sh@124 -- # return 0 00:19:47.241 00:53:57 -- nvmf/common.sh@477 -- # '[' -n 91832 ']' 00:19:47.241 00:53:57 -- nvmf/common.sh@478 -- # killprocess 91832 00:19:47.241 00:53:57 -- common/autotest_common.sh@936 -- # '[' -z 91832 ']' 00:19:47.241 00:53:57 -- common/autotest_common.sh@940 -- # kill -0 91832 00:19:47.241 00:53:57 -- common/autotest_common.sh@941 -- # uname 00:19:47.241 00:53:57 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:19:47.241 00:53:57 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 91832 
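The "nvmf hotplug test: fio successful as expected" line is the whole point of initiator_timeout.sh: fio keeps writing through cnode1 while the Delay0 latencies are pushed to roughly 31 seconds and later restored, and the job still has to finish with status 0. Condensed into the RPC calls seen in the trace (the actual run sets p99_write to 310000000 rather than 31000000; the loop below flattens that detail):

    RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

    # Delay0 wraps Malloc0; all latencies start at 30 microseconds.
    $RPC bdev_delay_create -b Malloc0 -d Delay0 -r 30 -t 30 -w 30 -n 30

    # While fio is running, push the latencies far past the initiator's I/O timeout...
    for metric in avg_read avg_write p99_read p99_write; do
        $RPC bdev_delay_update_latency Delay0 "$metric" 31000000   # microseconds, ~31 s
    done
    sleep 3

    # ...then bring them back down so queued I/O can drain before fio's 60 s runtime ends.
    for metric in avg_read avg_write p99_read p99_write; do
        $RPC bdev_delay_update_latency Delay0 "$metric" 30
    done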
00:19:47.241 killing process with pid 91832 00:19:47.241 00:53:57 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:19:47.241 00:53:57 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:19:47.241 00:53:57 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 91832' 00:19:47.241 00:53:57 -- common/autotest_common.sh@955 -- # kill 91832 00:19:47.241 00:53:57 -- common/autotest_common.sh@960 -- # wait 91832 00:19:47.241 00:53:57 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:19:47.241 00:53:57 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:19:47.241 00:53:57 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:19:47.241 00:53:57 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:19:47.241 00:53:57 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:19:47.241 00:53:57 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:47.241 00:53:57 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:47.241 00:53:57 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:47.241 00:53:57 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:19:47.241 00:19:47.241 real 1m4.736s 00:19:47.241 user 4m7.983s 00:19:47.241 sys 0m7.374s 00:19:47.241 00:53:57 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:19:47.241 00:53:57 -- common/autotest_common.sh@10 -- # set +x 00:19:47.241 ************************************ 00:19:47.241 END TEST nvmf_initiator_timeout 00:19:47.241 ************************************ 00:19:47.241 00:53:57 -- nvmf/nvmf.sh@69 -- # [[ virt == phy ]] 00:19:47.241 00:53:57 -- nvmf/nvmf.sh@86 -- # timing_exit target 00:19:47.241 00:53:57 -- common/autotest_common.sh@728 -- # xtrace_disable 00:19:47.241 00:53:57 -- common/autotest_common.sh@10 -- # set +x 00:19:47.241 00:53:58 -- nvmf/nvmf.sh@88 -- # timing_enter host 00:19:47.241 00:53:58 -- common/autotest_common.sh@722 -- # xtrace_disable 00:19:47.241 00:53:58 -- common/autotest_common.sh@10 -- # set +x 00:19:47.241 00:53:58 -- nvmf/nvmf.sh@90 -- # [[ 0 -eq 0 ]] 00:19:47.241 00:53:58 -- nvmf/nvmf.sh@91 -- # run_test nvmf_multicontroller /home/vagrant/spdk_repo/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:19:47.241 00:53:58 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:19:47.241 00:53:58 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:19:47.241 00:53:58 -- common/autotest_common.sh@10 -- # set +x 00:19:47.241 ************************************ 00:19:47.241 START TEST nvmf_multicontroller 00:19:47.241 ************************************ 00:19:47.241 00:53:58 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:19:47.241 * Looking for test storage... 
00:19:47.241 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:19:47.241 00:53:58 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:19:47.241 00:53:58 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:19:47.241 00:53:58 -- common/autotest_common.sh@1690 -- # lcov --version 00:19:47.241 00:53:58 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:19:47.241 00:53:58 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:19:47.241 00:53:58 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:19:47.241 00:53:58 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:19:47.241 00:53:58 -- scripts/common.sh@335 -- # IFS=.-: 00:19:47.241 00:53:58 -- scripts/common.sh@335 -- # read -ra ver1 00:19:47.241 00:53:58 -- scripts/common.sh@336 -- # IFS=.-: 00:19:47.241 00:53:58 -- scripts/common.sh@336 -- # read -ra ver2 00:19:47.241 00:53:58 -- scripts/common.sh@337 -- # local 'op=<' 00:19:47.241 00:53:58 -- scripts/common.sh@339 -- # ver1_l=2 00:19:47.241 00:53:58 -- scripts/common.sh@340 -- # ver2_l=1 00:19:47.241 00:53:58 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:19:47.241 00:53:58 -- scripts/common.sh@343 -- # case "$op" in 00:19:47.241 00:53:58 -- scripts/common.sh@344 -- # : 1 00:19:47.241 00:53:58 -- scripts/common.sh@363 -- # (( v = 0 )) 00:19:47.241 00:53:58 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:19:47.241 00:53:58 -- scripts/common.sh@364 -- # decimal 1 00:19:47.241 00:53:58 -- scripts/common.sh@352 -- # local d=1 00:19:47.241 00:53:58 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:47.241 00:53:58 -- scripts/common.sh@354 -- # echo 1 00:19:47.241 00:53:58 -- scripts/common.sh@364 -- # ver1[v]=1 00:19:47.241 00:53:58 -- scripts/common.sh@365 -- # decimal 2 00:19:47.241 00:53:58 -- scripts/common.sh@352 -- # local d=2 00:19:47.241 00:53:58 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:19:47.241 00:53:58 -- scripts/common.sh@354 -- # echo 2 00:19:47.241 00:53:58 -- scripts/common.sh@365 -- # ver2[v]=2 00:19:47.241 00:53:58 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:19:47.241 00:53:58 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:19:47.241 00:53:58 -- scripts/common.sh@367 -- # return 0 00:19:47.241 00:53:58 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:19:47.241 00:53:58 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:19:47.241 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:47.241 --rc genhtml_branch_coverage=1 00:19:47.241 --rc genhtml_function_coverage=1 00:19:47.241 --rc genhtml_legend=1 00:19:47.241 --rc geninfo_all_blocks=1 00:19:47.241 --rc geninfo_unexecuted_blocks=1 00:19:47.241 00:19:47.241 ' 00:19:47.241 00:53:58 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:19:47.241 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:47.241 --rc genhtml_branch_coverage=1 00:19:47.241 --rc genhtml_function_coverage=1 00:19:47.241 --rc genhtml_legend=1 00:19:47.241 --rc geninfo_all_blocks=1 00:19:47.241 --rc geninfo_unexecuted_blocks=1 00:19:47.241 00:19:47.241 ' 00:19:47.241 00:53:58 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:19:47.241 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:47.241 --rc genhtml_branch_coverage=1 00:19:47.241 --rc genhtml_function_coverage=1 00:19:47.241 --rc genhtml_legend=1 00:19:47.241 --rc geninfo_all_blocks=1 00:19:47.241 --rc geninfo_unexecuted_blocks=1 00:19:47.241 00:19:47.241 ' 00:19:47.241 
00:53:58 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:19:47.241 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:47.241 --rc genhtml_branch_coverage=1 00:19:47.241 --rc genhtml_function_coverage=1 00:19:47.241 --rc genhtml_legend=1 00:19:47.241 --rc geninfo_all_blocks=1 00:19:47.241 --rc geninfo_unexecuted_blocks=1 00:19:47.241 00:19:47.241 ' 00:19:47.241 00:53:58 -- host/multicontroller.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:19:47.241 00:53:58 -- nvmf/common.sh@7 -- # uname -s 00:19:47.241 00:53:58 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:47.241 00:53:58 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:47.241 00:53:58 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:47.241 00:53:58 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:47.241 00:53:58 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:47.241 00:53:58 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:47.241 00:53:58 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:47.241 00:53:58 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:47.241 00:53:58 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:47.241 00:53:58 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:47.241 00:53:58 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:15939434-fa82-47b6-ae7c-b62ba203cee8 00:19:47.241 00:53:58 -- nvmf/common.sh@18 -- # NVME_HOSTID=15939434-fa82-47b6-ae7c-b62ba203cee8 00:19:47.241 00:53:58 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:47.241 00:53:58 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:47.241 00:53:58 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:19:47.241 00:53:58 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:19:47.241 00:53:58 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:47.241 00:53:58 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:47.241 00:53:58 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:47.242 00:53:58 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:47.242 00:53:58 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:47.242 00:53:58 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:47.242 00:53:58 -- paths/export.sh@5 -- # export PATH 00:19:47.242 00:53:58 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:47.242 00:53:58 -- nvmf/common.sh@46 -- # : 0 00:19:47.242 00:53:58 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:19:47.242 00:53:58 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:19:47.242 00:53:58 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:19:47.242 00:53:58 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:47.242 00:53:58 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:47.242 00:53:58 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:19:47.242 00:53:58 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:19:47.242 00:53:58 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:19:47.242 00:53:58 -- host/multicontroller.sh@11 -- # MALLOC_BDEV_SIZE=64 00:19:47.242 00:53:58 -- host/multicontroller.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:19:47.242 00:53:58 -- host/multicontroller.sh@13 -- # NVMF_HOST_FIRST_PORT=60000 00:19:47.242 00:53:58 -- host/multicontroller.sh@14 -- # NVMF_HOST_SECOND_PORT=60001 00:19:47.242 00:53:58 -- host/multicontroller.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:19:47.242 00:53:58 -- host/multicontroller.sh@18 -- # '[' tcp == rdma ']' 00:19:47.242 00:53:58 -- host/multicontroller.sh@23 -- # nvmftestinit 00:19:47.242 00:53:58 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:19:47.242 00:53:58 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:47.242 00:53:58 -- nvmf/common.sh@436 -- # prepare_net_devs 00:19:47.242 00:53:58 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:19:47.242 00:53:58 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:19:47.242 00:53:58 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:47.242 00:53:58 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:47.242 00:53:58 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:47.242 00:53:58 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:19:47.242 00:53:58 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:19:47.242 00:53:58 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:19:47.242 00:53:58 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:19:47.242 00:53:58 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:19:47.242 00:53:58 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:19:47.242 00:53:58 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:47.242 00:53:58 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 
00:19:47.242 00:53:58 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:19:47.242 00:53:58 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:19:47.242 00:53:58 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:19:47.242 00:53:58 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:19:47.242 00:53:58 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:19:47.242 00:53:58 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:47.242 00:53:58 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:19:47.242 00:53:58 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:19:47.242 00:53:58 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:19:47.242 00:53:58 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:19:47.242 00:53:58 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:19:47.242 00:53:58 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:19:47.242 Cannot find device "nvmf_tgt_br" 00:19:47.242 00:53:58 -- nvmf/common.sh@154 -- # true 00:19:47.242 00:53:58 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:19:47.242 Cannot find device "nvmf_tgt_br2" 00:19:47.242 00:53:58 -- nvmf/common.sh@155 -- # true 00:19:47.242 00:53:58 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:19:47.242 00:53:58 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:19:47.242 Cannot find device "nvmf_tgt_br" 00:19:47.242 00:53:58 -- nvmf/common.sh@157 -- # true 00:19:47.242 00:53:58 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:19:47.242 Cannot find device "nvmf_tgt_br2" 00:19:47.242 00:53:58 -- nvmf/common.sh@158 -- # true 00:19:47.242 00:53:58 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:19:47.242 00:53:58 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:19:47.242 00:53:58 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:19:47.242 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:19:47.242 00:53:58 -- nvmf/common.sh@161 -- # true 00:19:47.242 00:53:58 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:19:47.242 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:19:47.242 00:53:58 -- nvmf/common.sh@162 -- # true 00:19:47.242 00:53:58 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:19:47.242 00:53:58 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:19:47.242 00:53:58 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:19:47.242 00:53:58 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:19:47.242 00:53:58 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:19:47.242 00:53:58 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:19:47.242 00:53:58 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:19:47.242 00:53:58 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:19:47.242 00:53:58 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:19:47.242 00:53:58 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:19:47.242 00:53:58 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:19:47.242 00:53:58 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 
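The "Cannot find device" and "Cannot open network namespace" messages above come from the best-effort teardown of a previous run (each failing cleanup command is followed by a true fallback in the trace), so they are expected rather than errors. The plumbing that nvmf_veth_init then creates can be reproduced by hand with the same commands; a minimal sketch, using the namespace name, interface names, and addresses seen in this run, and assuming root privileges:

    # Namespace plus veth pairs, exactly as traced above.
    ip netns add nvmf_tgt_ns_spdk
    # one veth pair for the initiator side, two for the target side
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
    ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
    # move the target-facing ends into the namespace nvmf_tgt will run in
    ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
    ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
    # initiator keeps 10.0.0.1; the namespaced target answers on 10.0.0.2 and 10.0.0.3
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
    ip link set nvmf_init_if up
    ip link set nvmf_init_br up
    ip link set nvmf_tgt_br up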
00:19:47.242 00:53:58 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:19:47.242 00:53:58 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:19:47.242 00:53:58 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:19:47.242 00:53:58 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:19:47.242 00:53:58 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:19:47.242 00:53:58 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:19:47.242 00:53:58 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:19:47.242 00:53:58 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:19:47.242 00:53:58 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:19:47.242 00:53:58 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:19:47.242 00:53:58 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:19:47.242 00:53:58 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:19:47.242 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:47.242 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.047 ms 00:19:47.242 00:19:47.242 --- 10.0.0.2 ping statistics --- 00:19:47.242 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:47.242 rtt min/avg/max/mdev = 0.047/0.047/0.047/0.000 ms 00:19:47.242 00:53:58 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:19:47.242 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:19:47.242 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.059 ms 00:19:47.242 00:19:47.242 --- 10.0.0.3 ping statistics --- 00:19:47.242 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:47.242 rtt min/avg/max/mdev = 0.059/0.059/0.059/0.000 ms 00:19:47.242 00:53:58 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:19:47.242 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:19:47.242 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.033 ms 00:19:47.242 00:19:47.242 --- 10.0.0.1 ping statistics --- 00:19:47.242 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:47.242 rtt min/avg/max/mdev = 0.033/0.033/0.033/0.000 ms 00:19:47.242 00:53:58 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:47.242 00:53:58 -- nvmf/common.sh@421 -- # return 0 00:19:47.242 00:53:58 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:19:47.242 00:53:58 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:47.242 00:53:58 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:19:47.242 00:53:58 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:19:47.242 00:53:58 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:47.242 00:53:58 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:19:47.242 00:53:58 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:19:47.242 00:53:58 -- host/multicontroller.sh@25 -- # nvmfappstart -m 0xE 00:19:47.242 00:53:58 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:19:47.242 00:53:58 -- common/autotest_common.sh@722 -- # xtrace_disable 00:19:47.242 00:53:58 -- common/autotest_common.sh@10 -- # set +x 00:19:47.242 00:53:58 -- nvmf/common.sh@469 -- # nvmfpid=92776 00:19:47.242 00:53:58 -- nvmf/common.sh@470 -- # waitforlisten 92776 00:19:47.242 00:53:58 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:19:47.242 00:53:58 -- common/autotest_common.sh@829 -- # '[' -z 92776 ']' 00:19:47.242 00:53:58 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:47.242 00:53:58 -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:47.242 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:47.242 00:53:58 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:47.242 00:53:58 -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:47.243 00:53:58 -- common/autotest_common.sh@10 -- # set +x 00:19:47.243 [2024-12-03 00:53:58.625190] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:19:47.243 [2024-12-03 00:53:58.625269] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:47.243 [2024-12-03 00:53:58.759741] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:19:47.243 [2024-12-03 00:53:58.833224] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:19:47.243 [2024-12-03 00:53:58.833369] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:47.243 [2024-12-03 00:53:58.833381] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:47.243 [2024-12-03 00:53:58.833390] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
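The remainder of the setup joins the host-side peer interfaces under a single bridge so the initiator at 10.0.0.1 and the namespaced target addresses 10.0.0.2/10.0.0.3 share one L2 segment, opens the firewall for NVMe/TCP on port 4420, and confirms reachability in both directions before the target application is started. A standalone continuation of the sketch above, with the same assumptions:

    # Bring up the in-namespace links and bridge the host-side peers together.
    ip link set nvmf_tgt_br2 up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up
    ip link add nvmf_br type bridge
    ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br  master nvmf_br
    ip link set nvmf_tgt_br2 master nvmf_br
    # admit NVMe/TCP traffic on 4420 and let the bridge forward between its ports
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
    # both target addresses reachable from the host, and the host from the namespace
    ping -c 1 10.0.0.2
    ping -c 1 10.0.0.3
    ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1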
00:19:47.243 [2024-12-03 00:53:58.833936] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:19:47.243 [2024-12-03 00:53:58.834126] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:19:47.243 [2024-12-03 00:53:58.834135] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:19:47.243 00:53:59 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:47.243 00:53:59 -- common/autotest_common.sh@862 -- # return 0 00:19:47.243 00:53:59 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:19:47.243 00:53:59 -- common/autotest_common.sh@728 -- # xtrace_disable 00:19:47.243 00:53:59 -- common/autotest_common.sh@10 -- # set +x 00:19:47.243 00:53:59 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:47.243 00:53:59 -- host/multicontroller.sh@27 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:19:47.243 00:53:59 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:47.243 00:53:59 -- common/autotest_common.sh@10 -- # set +x 00:19:47.243 [2024-12-03 00:53:59.684480] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:47.243 00:53:59 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:47.243 00:53:59 -- host/multicontroller.sh@29 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:19:47.243 00:53:59 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:47.243 00:53:59 -- common/autotest_common.sh@10 -- # set +x 00:19:47.243 Malloc0 00:19:47.243 00:53:59 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:47.243 00:53:59 -- host/multicontroller.sh@30 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:19:47.243 00:53:59 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:47.243 00:53:59 -- common/autotest_common.sh@10 -- # set +x 00:19:47.243 00:53:59 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:47.243 00:53:59 -- host/multicontroller.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:19:47.243 00:53:59 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:47.243 00:53:59 -- common/autotest_common.sh@10 -- # set +x 00:19:47.502 00:53:59 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:47.502 00:53:59 -- host/multicontroller.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:19:47.502 00:53:59 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:47.502 00:53:59 -- common/autotest_common.sh@10 -- # set +x 00:19:47.502 [2024-12-03 00:53:59.768229] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:47.502 00:53:59 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:47.502 00:53:59 -- host/multicontroller.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:19:47.502 00:53:59 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:47.502 00:53:59 -- common/autotest_common.sh@10 -- # set +x 00:19:47.502 [2024-12-03 00:53:59.780129] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:19:47.502 00:53:59 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:47.502 00:53:59 -- host/multicontroller.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:19:47.502 00:53:59 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:47.502 00:53:59 -- common/autotest_common.sh@10 -- # set +x 00:19:47.502 Malloc1 00:19:47.502 00:53:59 -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:47.502 00:53:59 -- host/multicontroller.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:19:47.502 00:53:59 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:47.502 00:53:59 -- common/autotest_common.sh@10 -- # set +x 00:19:47.502 00:53:59 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:47.502 00:53:59 -- host/multicontroller.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc1 00:19:47.502 00:53:59 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:47.502 00:53:59 -- common/autotest_common.sh@10 -- # set +x 00:19:47.502 00:53:59 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:47.502 00:53:59 -- host/multicontroller.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:19:47.502 00:53:59 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:47.502 00:53:59 -- common/autotest_common.sh@10 -- # set +x 00:19:47.502 00:53:59 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:47.502 00:53:59 -- host/multicontroller.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4421 00:19:47.502 00:53:59 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:47.502 00:53:59 -- common/autotest_common.sh@10 -- # set +x 00:19:47.502 00:53:59 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:47.502 00:53:59 -- host/multicontroller.sh@44 -- # bdevperf_pid=92834 00:19:47.502 00:53:59 -- host/multicontroller.sh@46 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; pap "$testdir/try.txt"; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:19:47.502 00:53:59 -- host/multicontroller.sh@43 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w write -t 1 -f 00:19:47.502 00:53:59 -- host/multicontroller.sh@47 -- # waitforlisten 92834 /var/tmp/bdevperf.sock 00:19:47.503 00:53:59 -- common/autotest_common.sh@829 -- # '[' -z 92834 ']' 00:19:47.503 00:53:59 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:47.503 00:53:59 -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:47.503 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:47.503 00:53:59 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
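Everything rpc_cmd does above goes through the SPDK JSON-RPC interface (in the autotest harness rpc_cmd is a thin wrapper around scripts/rpc.py). The multicontroller setup, two malloc-backed subsystems each listening on ports 4420 and 4421 of 10.0.0.2, can therefore be reproduced directly; a minimal sketch assuming a running nvmf_tgt on the default /var/tmp/spdk.sock, with the transport flags copied from the traced command:

    RPC=scripts/rpc.py
    $RPC nvmf_create_transport -t tcp -o -u 8192
    # two 64 MiB, 512-byte-block RAM bdevs, one per subsystem
    $RPC bdev_malloc_create 64 512 -b Malloc0
    $RPC bdev_malloc_create 64 512 -b Malloc1
    # -a allows any host, -s sets the serial number
    $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc1
    # every subsystem listens on both 10.0.0.2:4420 and 10.0.0.2:4421
    for nqn in nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:cnode2; do
        for port in 4420 4421; do
            $RPC nvmf_subsystem_add_listener "$nqn" -t tcp -a 10.0.0.2 -s "$port"
        done
    done

bdevperf is then launched with -z (start suspended until configured over RPC) and its own socket at /var/tmp/bdevperf.sock; the bdev_nvme_attach_controller and multipath checks that follow in the trace are all issued against that second socket rather than the target's.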
00:19:47.503 00:53:59 -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:47.503 00:53:59 -- common/autotest_common.sh@10 -- # set +x 00:19:48.878 00:54:00 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:48.878 00:54:00 -- common/autotest_common.sh@862 -- # return 0 00:19:48.878 00:54:00 -- host/multicontroller.sh@50 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 00:19:48.878 00:54:00 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:48.878 00:54:00 -- common/autotest_common.sh@10 -- # set +x 00:19:48.878 NVMe0n1 00:19:48.878 00:54:01 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:48.878 00:54:01 -- host/multicontroller.sh@54 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:19:48.878 00:54:01 -- host/multicontroller.sh@54 -- # grep -c NVMe 00:19:48.878 00:54:01 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:48.878 00:54:01 -- common/autotest_common.sh@10 -- # set +x 00:19:48.878 00:54:01 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:48.878 1 00:19:48.878 00:54:01 -- host/multicontroller.sh@60 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:19:48.878 00:54:01 -- common/autotest_common.sh@650 -- # local es=0 00:19:48.878 00:54:01 -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:19:48.878 00:54:01 -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:19:48.878 00:54:01 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:48.878 00:54:01 -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:19:48.878 00:54:01 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:48.878 00:54:01 -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:19:48.878 00:54:01 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:48.878 00:54:01 -- common/autotest_common.sh@10 -- # set +x 00:19:48.878 2024/12/03 00:54:01 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 hostaddr:10.0.0.2 hostnqn:nqn.2021-09-7.io.spdk:00001 hostsvcid:60000 name:NVMe0 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-114 Msg=A controller named NVMe0 already exists with the specified network path 00:19:48.878 request: 00:19:48.878 { 00:19:48.878 "method": "bdev_nvme_attach_controller", 00:19:48.878 "params": { 00:19:48.878 "name": "NVMe0", 00:19:48.878 "trtype": "tcp", 00:19:48.878 "traddr": "10.0.0.2", 00:19:48.878 "hostnqn": "nqn.2021-09-7.io.spdk:00001", 00:19:48.878 "hostaddr": "10.0.0.2", 00:19:48.878 "hostsvcid": "60000", 00:19:48.878 "adrfam": "ipv4", 00:19:48.878 "trsvcid": "4420", 00:19:48.878 "subnqn": "nqn.2016-06.io.spdk:cnode1" 00:19:48.878 } 00:19:48.878 } 00:19:48.878 Got JSON-RPC error response 00:19:48.878 GoRPCClient: error on JSON-RPC call 00:19:48.878 00:54:01 -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:19:48.878 00:54:01 -- 
common/autotest_common.sh@653 -- # es=1 00:19:48.878 00:54:01 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:19:48.878 00:54:01 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:19:48.878 00:54:01 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:19:48.878 00:54:01 -- host/multicontroller.sh@65 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:19:48.878 00:54:01 -- common/autotest_common.sh@650 -- # local es=0 00:19:48.878 00:54:01 -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:19:48.878 00:54:01 -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:19:48.878 00:54:01 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:48.878 00:54:01 -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:19:48.878 00:54:01 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:48.878 00:54:01 -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:19:48.878 00:54:01 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:48.878 00:54:01 -- common/autotest_common.sh@10 -- # set +x 00:19:48.878 2024/12/03 00:54:01 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 hostaddr:10.0.0.2 hostsvcid:60000 name:NVMe0 subnqn:nqn.2016-06.io.spdk:cnode2 traddr:10.0.0.2 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-114 Msg=A controller named NVMe0 already exists with the specified network path 00:19:48.878 request: 00:19:48.878 { 00:19:48.878 "method": "bdev_nvme_attach_controller", 00:19:48.878 "params": { 00:19:48.878 "name": "NVMe0", 00:19:48.878 "trtype": "tcp", 00:19:48.878 "traddr": "10.0.0.2", 00:19:48.878 "hostaddr": "10.0.0.2", 00:19:48.878 "hostsvcid": "60000", 00:19:48.878 "adrfam": "ipv4", 00:19:48.878 "trsvcid": "4420", 00:19:48.878 "subnqn": "nqn.2016-06.io.spdk:cnode2" 00:19:48.878 } 00:19:48.878 } 00:19:48.878 Got JSON-RPC error response 00:19:48.878 GoRPCClient: error on JSON-RPC call 00:19:48.878 00:54:01 -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:19:48.878 00:54:01 -- common/autotest_common.sh@653 -- # es=1 00:19:48.878 00:54:01 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:19:48.878 00:54:01 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:19:48.878 00:54:01 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:19:48.878 00:54:01 -- host/multicontroller.sh@69 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:19:48.878 00:54:01 -- common/autotest_common.sh@650 -- # local es=0 00:19:48.878 00:54:01 -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:19:48.878 00:54:01 -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:19:48.878 00:54:01 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:48.879 00:54:01 -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:19:48.879 00:54:01 -- 
common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:48.879 00:54:01 -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:19:48.879 00:54:01 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:48.879 00:54:01 -- common/autotest_common.sh@10 -- # set +x 00:19:48.879 2024/12/03 00:54:01 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 hostaddr:10.0.0.2 hostsvcid:60000 multipath:disable name:NVMe0 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-114 Msg=A controller named NVMe0 already exists and multipath is disabled 00:19:48.879 request: 00:19:48.879 { 00:19:48.879 "method": "bdev_nvme_attach_controller", 00:19:48.879 "params": { 00:19:48.879 "name": "NVMe0", 00:19:48.879 "trtype": "tcp", 00:19:48.879 "traddr": "10.0.0.2", 00:19:48.879 "hostaddr": "10.0.0.2", 00:19:48.879 "hostsvcid": "60000", 00:19:48.879 "adrfam": "ipv4", 00:19:48.879 "trsvcid": "4420", 00:19:48.879 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:48.879 "multipath": "disable" 00:19:48.879 } 00:19:48.879 } 00:19:48.879 Got JSON-RPC error response 00:19:48.879 GoRPCClient: error on JSON-RPC call 00:19:48.879 00:54:01 -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:19:48.879 00:54:01 -- common/autotest_common.sh@653 -- # es=1 00:19:48.879 00:54:01 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:19:48.879 00:54:01 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:19:48.879 00:54:01 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:19:48.879 00:54:01 -- host/multicontroller.sh@74 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:19:48.879 00:54:01 -- common/autotest_common.sh@650 -- # local es=0 00:19:48.879 00:54:01 -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:19:48.879 00:54:01 -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:19:48.879 00:54:01 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:48.879 00:54:01 -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:19:48.879 00:54:01 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:48.879 00:54:01 -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:19:48.879 00:54:01 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:48.879 00:54:01 -- common/autotest_common.sh@10 -- # set +x 00:19:48.879 2024/12/03 00:54:01 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 hostaddr:10.0.0.2 hostsvcid:60000 multipath:failover name:NVMe0 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-114 Msg=A controller named NVMe0 already exists with the specified network path 00:19:48.879 request: 00:19:48.879 { 00:19:48.879 "method": "bdev_nvme_attach_controller", 00:19:48.879 "params": { 00:19:48.879 "name": "NVMe0", 
00:19:48.879 "trtype": "tcp", 00:19:48.879 "traddr": "10.0.0.2", 00:19:48.879 "hostaddr": "10.0.0.2", 00:19:48.879 "hostsvcid": "60000", 00:19:48.879 "adrfam": "ipv4", 00:19:48.879 "trsvcid": "4420", 00:19:48.879 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:48.879 "multipath": "failover" 00:19:48.879 } 00:19:48.879 } 00:19:48.879 Got JSON-RPC error response 00:19:48.879 GoRPCClient: error on JSON-RPC call 00:19:48.879 00:54:01 -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:19:48.879 00:54:01 -- common/autotest_common.sh@653 -- # es=1 00:19:48.879 00:54:01 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:19:48.879 00:54:01 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:19:48.879 00:54:01 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:19:48.879 00:54:01 -- host/multicontroller.sh@79 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:19:48.879 00:54:01 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:48.879 00:54:01 -- common/autotest_common.sh@10 -- # set +x 00:19:48.879 00:19:48.879 00:54:01 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:48.879 00:54:01 -- host/multicontroller.sh@83 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:19:48.879 00:54:01 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:48.879 00:54:01 -- common/autotest_common.sh@10 -- # set +x 00:19:48.879 00:54:01 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:48.879 00:54:01 -- host/multicontroller.sh@87 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe1 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 00:19:48.879 00:54:01 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:48.879 00:54:01 -- common/autotest_common.sh@10 -- # set +x 00:19:48.879 00:19:48.879 00:54:01 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:48.879 00:54:01 -- host/multicontroller.sh@90 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:19:48.879 00:54:01 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:48.879 00:54:01 -- host/multicontroller.sh@90 -- # grep -c NVMe 00:19:48.879 00:54:01 -- common/autotest_common.sh@10 -- # set +x 00:19:48.879 00:54:01 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:48.879 00:54:01 -- host/multicontroller.sh@90 -- # '[' 2 '!=' 2 ']' 00:19:48.879 00:54:01 -- host/multicontroller.sh@95 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:19:50.255 0 00:19:50.255 00:54:02 -- host/multicontroller.sh@98 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe1 00:19:50.255 00:54:02 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:50.255 00:54:02 -- common/autotest_common.sh@10 -- # set +x 00:19:50.255 00:54:02 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:50.255 00:54:02 -- host/multicontroller.sh@100 -- # killprocess 92834 00:19:50.255 00:54:02 -- common/autotest_common.sh@936 -- # '[' -z 92834 ']' 00:19:50.255 00:54:02 -- common/autotest_common.sh@940 -- # kill -0 92834 00:19:50.255 00:54:02 -- common/autotest_common.sh@941 -- # uname 00:19:50.255 00:54:02 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:19:50.255 00:54:02 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 92834 00:19:50.255 00:54:02 -- common/autotest_common.sh@942 -- # 
process_name=reactor_0 00:19:50.255 00:54:02 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:19:50.255 killing process with pid 92834 00:19:50.255 00:54:02 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 92834' 00:19:50.255 00:54:02 -- common/autotest_common.sh@955 -- # kill 92834 00:19:50.255 00:54:02 -- common/autotest_common.sh@960 -- # wait 92834 00:19:50.255 00:54:02 -- host/multicontroller.sh@102 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:19:50.255 00:54:02 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:50.255 00:54:02 -- common/autotest_common.sh@10 -- # set +x 00:19:50.255 00:54:02 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:50.255 00:54:02 -- host/multicontroller.sh@103 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:19:50.255 00:54:02 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:50.255 00:54:02 -- common/autotest_common.sh@10 -- # set +x 00:19:50.255 00:54:02 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:50.255 00:54:02 -- host/multicontroller.sh@105 -- # trap - SIGINT SIGTERM EXIT 00:19:50.255 00:54:02 -- host/multicontroller.sh@107 -- # pap /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:19:50.255 00:54:02 -- common/autotest_common.sh@1607 -- # read -r file 00:19:50.255 00:54:02 -- common/autotest_common.sh@1606 -- # find /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt -type f 00:19:50.255 00:54:02 -- common/autotest_common.sh@1606 -- # sort -u 00:19:50.255 00:54:02 -- common/autotest_common.sh@1608 -- # cat 00:19:50.255 --- /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt --- 00:19:50.255 [2024-12-03 00:53:59.907089] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:19:50.255 [2024-12-03 00:53:59.907204] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid92834 ] 00:19:50.255 [2024-12-03 00:54:00.047765] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:50.255 [2024-12-03 00:54:00.116953] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:19:50.255 [2024-12-03 00:54:01.257603] bdev.c:4553:bdev_name_add: *ERROR*: Bdev name 87ba539a-db04-40be-a069-49f89454ec83 already exists 00:19:50.255 [2024-12-03 00:54:01.257653] bdev.c:7603:bdev_register: *ERROR*: Unable to add uuid:87ba539a-db04-40be-a069-49f89454ec83 alias for bdev NVMe1n1 00:19:50.255 [2024-12-03 00:54:01.257690] bdev_nvme.c:4236:nvme_bdev_create: *ERROR*: spdk_bdev_register() failed 00:19:50.255 Running I/O for 1 seconds... 
00:19:50.255 00:19:50.255 Latency(us) 00:19:50.255 [2024-12-03T00:54:02.770Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:50.255 [2024-12-03T00:54:02.770Z] Job: NVMe0n1 (Core Mask 0x1, workload: write, depth: 128, IO size: 4096) 00:19:50.256 NVMe0n1 : 1.00 21894.81 85.53 0.00 0.00 5832.08 3023.59 10962.39 00:19:50.256 [2024-12-03T00:54:02.771Z] =================================================================================================================== 00:19:50.256 [2024-12-03T00:54:02.771Z] Total : 21894.81 85.53 0.00 0.00 5832.08 3023.59 10962.39 00:19:50.256 Received shutdown signal, test time was about 1.000000 seconds 00:19:50.256 00:19:50.256 Latency(us) 00:19:50.256 [2024-12-03T00:54:02.771Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:50.256 [2024-12-03T00:54:02.771Z] =================================================================================================================== 00:19:50.256 [2024-12-03T00:54:02.771Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:19:50.256 --- /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt --- 00:19:50.256 00:54:02 -- common/autotest_common.sh@1613 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:19:50.256 00:54:02 -- common/autotest_common.sh@1607 -- # read -r file 00:19:50.256 00:54:02 -- host/multicontroller.sh@108 -- # nvmftestfini 00:19:50.256 00:54:02 -- nvmf/common.sh@476 -- # nvmfcleanup 00:19:50.256 00:54:02 -- nvmf/common.sh@116 -- # sync 00:19:50.515 00:54:02 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:19:50.515 00:54:02 -- nvmf/common.sh@119 -- # set +e 00:19:50.515 00:54:02 -- nvmf/common.sh@120 -- # for i in {1..20} 00:19:50.515 00:54:02 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:19:50.515 rmmod nvme_tcp 00:19:50.515 rmmod nvme_fabrics 00:19:50.515 rmmod nvme_keyring 00:19:50.515 00:54:02 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:19:50.515 00:54:02 -- nvmf/common.sh@123 -- # set -e 00:19:50.515 00:54:02 -- nvmf/common.sh@124 -- # return 0 00:19:50.515 00:54:02 -- nvmf/common.sh@477 -- # '[' -n 92776 ']' 00:19:50.515 00:54:02 -- nvmf/common.sh@478 -- # killprocess 92776 00:19:50.515 00:54:02 -- common/autotest_common.sh@936 -- # '[' -z 92776 ']' 00:19:50.515 00:54:02 -- common/autotest_common.sh@940 -- # kill -0 92776 00:19:50.515 00:54:02 -- common/autotest_common.sh@941 -- # uname 00:19:50.515 00:54:02 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:19:50.515 00:54:02 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 92776 00:19:50.515 killing process with pid 92776 00:19:50.515 00:54:02 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:19:50.515 00:54:02 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:19:50.515 00:54:02 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 92776' 00:19:50.515 00:54:02 -- common/autotest_common.sh@955 -- # kill 92776 00:19:50.515 00:54:02 -- common/autotest_common.sh@960 -- # wait 92776 00:19:50.774 00:54:03 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:19:50.774 00:54:03 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:19:50.774 00:54:03 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:19:50.774 00:54:03 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:19:50.774 00:54:03 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:19:50.774 00:54:03 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:50.774 00:54:03 -- common/autotest_common.sh@22 -- # eval 
'_remove_spdk_ns 14> /dev/null' 00:19:50.774 00:54:03 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:50.774 00:54:03 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:19:50.774 00:19:50.774 real 0m5.222s 00:19:50.774 user 0m16.412s 00:19:50.774 sys 0m1.108s 00:19:50.774 00:54:03 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:19:50.774 ************************************ 00:19:50.774 END TEST nvmf_multicontroller 00:19:50.774 00:54:03 -- common/autotest_common.sh@10 -- # set +x 00:19:50.774 ************************************ 00:19:51.033 00:54:03 -- nvmf/nvmf.sh@92 -- # run_test nvmf_aer /home/vagrant/spdk_repo/spdk/test/nvmf/host/aer.sh --transport=tcp 00:19:51.033 00:54:03 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:19:51.033 00:54:03 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:19:51.033 00:54:03 -- common/autotest_common.sh@10 -- # set +x 00:19:51.033 ************************************ 00:19:51.033 START TEST nvmf_aer 00:19:51.033 ************************************ 00:19:51.033 00:54:03 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/aer.sh --transport=tcp 00:19:51.033 * Looking for test storage... 00:19:51.033 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:19:51.033 00:54:03 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:19:51.033 00:54:03 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:19:51.033 00:54:03 -- common/autotest_common.sh@1690 -- # lcov --version 00:19:51.033 00:54:03 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:19:51.033 00:54:03 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:19:51.033 00:54:03 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:19:51.033 00:54:03 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:19:51.033 00:54:03 -- scripts/common.sh@335 -- # IFS=.-: 00:19:51.033 00:54:03 -- scripts/common.sh@335 -- # read -ra ver1 00:19:51.033 00:54:03 -- scripts/common.sh@336 -- # IFS=.-: 00:19:51.033 00:54:03 -- scripts/common.sh@336 -- # read -ra ver2 00:19:51.033 00:54:03 -- scripts/common.sh@337 -- # local 'op=<' 00:19:51.033 00:54:03 -- scripts/common.sh@339 -- # ver1_l=2 00:19:51.033 00:54:03 -- scripts/common.sh@340 -- # ver2_l=1 00:19:51.033 00:54:03 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:19:51.033 00:54:03 -- scripts/common.sh@343 -- # case "$op" in 00:19:51.033 00:54:03 -- scripts/common.sh@344 -- # : 1 00:19:51.033 00:54:03 -- scripts/common.sh@363 -- # (( v = 0 )) 00:19:51.033 00:54:03 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:19:51.033 00:54:03 -- scripts/common.sh@364 -- # decimal 1 00:19:51.033 00:54:03 -- scripts/common.sh@352 -- # local d=1 00:19:51.033 00:54:03 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:51.033 00:54:03 -- scripts/common.sh@354 -- # echo 1 00:19:51.033 00:54:03 -- scripts/common.sh@364 -- # ver1[v]=1 00:19:51.033 00:54:03 -- scripts/common.sh@365 -- # decimal 2 00:19:51.033 00:54:03 -- scripts/common.sh@352 -- # local d=2 00:19:51.033 00:54:03 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:19:51.033 00:54:03 -- scripts/common.sh@354 -- # echo 2 00:19:51.033 00:54:03 -- scripts/common.sh@365 -- # ver2[v]=2 00:19:51.033 00:54:03 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:19:51.033 00:54:03 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:19:51.033 00:54:03 -- scripts/common.sh@367 -- # return 0 00:19:51.033 00:54:03 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:19:51.033 00:54:03 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:19:51.033 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:51.033 --rc genhtml_branch_coverage=1 00:19:51.033 --rc genhtml_function_coverage=1 00:19:51.033 --rc genhtml_legend=1 00:19:51.033 --rc geninfo_all_blocks=1 00:19:51.033 --rc geninfo_unexecuted_blocks=1 00:19:51.033 00:19:51.033 ' 00:19:51.033 00:54:03 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:19:51.033 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:51.033 --rc genhtml_branch_coverage=1 00:19:51.033 --rc genhtml_function_coverage=1 00:19:51.033 --rc genhtml_legend=1 00:19:51.033 --rc geninfo_all_blocks=1 00:19:51.033 --rc geninfo_unexecuted_blocks=1 00:19:51.033 00:19:51.033 ' 00:19:51.033 00:54:03 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:19:51.033 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:51.033 --rc genhtml_branch_coverage=1 00:19:51.033 --rc genhtml_function_coverage=1 00:19:51.033 --rc genhtml_legend=1 00:19:51.033 --rc geninfo_all_blocks=1 00:19:51.033 --rc geninfo_unexecuted_blocks=1 00:19:51.033 00:19:51.033 ' 00:19:51.033 00:54:03 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:19:51.033 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:51.033 --rc genhtml_branch_coverage=1 00:19:51.033 --rc genhtml_function_coverage=1 00:19:51.033 --rc genhtml_legend=1 00:19:51.033 --rc geninfo_all_blocks=1 00:19:51.033 --rc geninfo_unexecuted_blocks=1 00:19:51.033 00:19:51.033 ' 00:19:51.033 00:54:03 -- host/aer.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:19:51.033 00:54:03 -- nvmf/common.sh@7 -- # uname -s 00:19:51.033 00:54:03 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:51.033 00:54:03 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:51.033 00:54:03 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:51.033 00:54:03 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:51.033 00:54:03 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:51.034 00:54:03 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:51.034 00:54:03 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:51.034 00:54:03 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:51.034 00:54:03 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:51.034 00:54:03 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:51.034 00:54:03 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:15939434-fa82-47b6-ae7c-b62ba203cee8 00:19:51.034 
00:54:03 -- nvmf/common.sh@18 -- # NVME_HOSTID=15939434-fa82-47b6-ae7c-b62ba203cee8 00:19:51.034 00:54:03 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:51.034 00:54:03 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:51.034 00:54:03 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:19:51.034 00:54:03 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:19:51.034 00:54:03 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:51.034 00:54:03 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:51.034 00:54:03 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:51.034 00:54:03 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:51.034 00:54:03 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:51.034 00:54:03 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:51.034 00:54:03 -- paths/export.sh@5 -- # export PATH 00:19:51.034 00:54:03 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:51.034 00:54:03 -- nvmf/common.sh@46 -- # : 0 00:19:51.034 00:54:03 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:19:51.034 00:54:03 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:19:51.034 00:54:03 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:19:51.034 00:54:03 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:51.034 00:54:03 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:51.034 00:54:03 -- nvmf/common.sh@32 -- # '[' -n '' ']' 
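The prologue repeated here for the aer test also mints a throwaway host identity (nvme gen-hostnqn) and stores it in NVME_HOSTNQN / NVME_HOSTID, with NVME_CONNECT and NVME_HOST kept around for tests that attach through the kernel initiator. This aer test drives the SPDK host stack instead, so those variables go unused in this run; purely for illustration, a kernel-side connection would consume them roughly as follows (hypothetical invocation, not part of this log):

    # Hypothetical use of the host-identity variables set above; not executed here.
    nvme connect -t tcp -a 10.0.0.2 -s 4420 \
        -n nqn.2016-06.io.spdk:cnode1 \
        --hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID"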
00:19:51.034 00:54:03 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:19:51.034 00:54:03 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:19:51.034 00:54:03 -- host/aer.sh@11 -- # nvmftestinit 00:19:51.034 00:54:03 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:19:51.034 00:54:03 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:51.034 00:54:03 -- nvmf/common.sh@436 -- # prepare_net_devs 00:19:51.034 00:54:03 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:19:51.034 00:54:03 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:19:51.034 00:54:03 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:51.034 00:54:03 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:51.034 00:54:03 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:51.034 00:54:03 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:19:51.034 00:54:03 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:19:51.034 00:54:03 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:19:51.034 00:54:03 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:19:51.034 00:54:03 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:19:51.034 00:54:03 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:19:51.034 00:54:03 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:51.034 00:54:03 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:51.034 00:54:03 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:19:51.034 00:54:03 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:19:51.034 00:54:03 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:19:51.034 00:54:03 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:19:51.034 00:54:03 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:19:51.034 00:54:03 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:51.034 00:54:03 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:19:51.034 00:54:03 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:19:51.034 00:54:03 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:19:51.034 00:54:03 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:19:51.034 00:54:03 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:19:51.034 00:54:03 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:19:51.034 Cannot find device "nvmf_tgt_br" 00:19:51.034 00:54:03 -- nvmf/common.sh@154 -- # true 00:19:51.034 00:54:03 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:19:51.292 Cannot find device "nvmf_tgt_br2" 00:19:51.292 00:54:03 -- nvmf/common.sh@155 -- # true 00:19:51.292 00:54:03 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:19:51.292 00:54:03 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:19:51.292 Cannot find device "nvmf_tgt_br" 00:19:51.292 00:54:03 -- nvmf/common.sh@157 -- # true 00:19:51.292 00:54:03 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:19:51.292 Cannot find device "nvmf_tgt_br2" 00:19:51.292 00:54:03 -- nvmf/common.sh@158 -- # true 00:19:51.292 00:54:03 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:19:51.293 00:54:03 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:19:51.293 00:54:03 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:19:51.293 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:19:51.293 00:54:03 -- nvmf/common.sh@161 -- # true 00:19:51.293 00:54:03 -- nvmf/common.sh@162 -- # ip 
netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:19:51.293 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:19:51.293 00:54:03 -- nvmf/common.sh@162 -- # true 00:19:51.293 00:54:03 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:19:51.293 00:54:03 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:19:51.293 00:54:03 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:19:51.293 00:54:03 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:19:51.293 00:54:03 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:19:51.293 00:54:03 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:19:51.293 00:54:03 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:19:51.293 00:54:03 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:19:51.293 00:54:03 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:19:51.293 00:54:03 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:19:51.293 00:54:03 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:19:51.293 00:54:03 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:19:51.293 00:54:03 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:19:51.293 00:54:03 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:19:51.293 00:54:03 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:19:51.293 00:54:03 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:19:51.293 00:54:03 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:19:51.293 00:54:03 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:19:51.293 00:54:03 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:19:51.293 00:54:03 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:19:51.293 00:54:03 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:19:51.293 00:54:03 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:19:51.293 00:54:03 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:19:51.293 00:54:03 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:19:51.293 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:51.293 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.056 ms 00:19:51.293 00:19:51.293 --- 10.0.0.2 ping statistics --- 00:19:51.293 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:51.293 rtt min/avg/max/mdev = 0.056/0.056/0.056/0.000 ms 00:19:51.293 00:54:03 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:19:51.293 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:19:51.293 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.041 ms 00:19:51.293 00:19:51.293 --- 10.0.0.3 ping statistics --- 00:19:51.293 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:51.293 rtt min/avg/max/mdev = 0.041/0.041/0.041/0.000 ms 00:19:51.293 00:54:03 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:19:51.293 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:19:51.293 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.024 ms 00:19:51.293 00:19:51.293 --- 10.0.0.1 ping statistics --- 00:19:51.293 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:51.293 rtt min/avg/max/mdev = 0.024/0.024/0.024/0.000 ms 00:19:51.293 00:54:03 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:51.293 00:54:03 -- nvmf/common.sh@421 -- # return 0 00:19:51.293 00:54:03 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:19:51.293 00:54:03 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:51.293 00:54:03 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:19:51.293 00:54:03 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:19:51.293 00:54:03 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:51.293 00:54:03 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:19:51.293 00:54:03 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:19:51.552 00:54:03 -- host/aer.sh@12 -- # nvmfappstart -m 0xF 00:19:51.552 00:54:03 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:19:51.552 00:54:03 -- common/autotest_common.sh@722 -- # xtrace_disable 00:19:51.552 00:54:03 -- common/autotest_common.sh@10 -- # set +x 00:19:51.552 00:54:03 -- nvmf/common.sh@469 -- # nvmfpid=93085 00:19:51.552 00:54:03 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:19:51.552 00:54:03 -- nvmf/common.sh@470 -- # waitforlisten 93085 00:19:51.552 00:54:03 -- common/autotest_common.sh@829 -- # '[' -z 93085 ']' 00:19:51.552 00:54:03 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:51.552 00:54:03 -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:51.552 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:51.552 00:54:03 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:51.552 00:54:03 -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:51.552 00:54:03 -- common/autotest_common.sh@10 -- # set +x 00:19:51.552 [2024-12-03 00:54:03.871203] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:19:51.552 [2024-12-03 00:54:03.871277] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:51.552 [2024-12-03 00:54:04.001765] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:19:51.811 [2024-12-03 00:54:04.082120] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:19:51.811 [2024-12-03 00:54:04.082284] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:51.811 [2024-12-03 00:54:04.082314] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:51.811 [2024-12-03 00:54:04.082339] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
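nvmfappstart runs the target binary inside the namespace prepared earlier, so its NVMe/TCP listeners bind to 10.0.0.2/10.0.0.3 while the JSON-RPC Unix socket stays reachable from the host side, and it then blocks in waitforlisten until that socket answers. A minimal hand-run equivalent, using the paths from this run and a simplified wait loop in place of the harness helper:

    # -i: shared-memory id, -e: tracepoint group mask, -m: core mask (0xF = 4 cores)
    ip netns exec nvmf_tgt_ns_spdk \
        /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
    nvmfpid=$!
    # crude stand-in for waitforlisten: poll the JSON-RPC socket until it responds
    until scripts/rpc.py -t 1 rpc_get_methods >/dev/null 2>&1; do sleep 0.1; done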
00:19:51.811 [2024-12-03 00:54:04.082509] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:19:51.811 [2024-12-03 00:54:04.082607] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:19:51.811 [2024-12-03 00:54:04.083162] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:19:51.811 [2024-12-03 00:54:04.083206] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:19:52.379 00:54:04 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:52.379 00:54:04 -- common/autotest_common.sh@862 -- # return 0 00:19:52.379 00:54:04 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:19:52.379 00:54:04 -- common/autotest_common.sh@728 -- # xtrace_disable 00:19:52.379 00:54:04 -- common/autotest_common.sh@10 -- # set +x 00:19:52.379 00:54:04 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:52.379 00:54:04 -- host/aer.sh@14 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:19:52.379 00:54:04 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:52.379 00:54:04 -- common/autotest_common.sh@10 -- # set +x 00:19:52.379 [2024-12-03 00:54:04.889638] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:52.638 00:54:04 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:52.638 00:54:04 -- host/aer.sh@16 -- # rpc_cmd bdev_malloc_create 64 512 --name Malloc0 00:19:52.638 00:54:04 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:52.638 00:54:04 -- common/autotest_common.sh@10 -- # set +x 00:19:52.638 Malloc0 00:19:52.638 00:54:04 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:52.638 00:54:04 -- host/aer.sh@17 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 2 00:19:52.638 00:54:04 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:52.638 00:54:04 -- common/autotest_common.sh@10 -- # set +x 00:19:52.638 00:54:04 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:52.638 00:54:04 -- host/aer.sh@18 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:19:52.638 00:54:04 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:52.638 00:54:04 -- common/autotest_common.sh@10 -- # set +x 00:19:52.638 00:54:04 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:52.638 00:54:04 -- host/aer.sh@19 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:19:52.638 00:54:04 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:52.638 00:54:04 -- common/autotest_common.sh@10 -- # set +x 00:19:52.638 [2024-12-03 00:54:04.961721] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:52.638 00:54:04 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:52.638 00:54:04 -- host/aer.sh@21 -- # rpc_cmd nvmf_get_subsystems 00:19:52.638 00:54:04 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:52.638 00:54:04 -- common/autotest_common.sh@10 -- # set +x 00:19:52.638 [2024-12-03 00:54:04.969492] nvmf_rpc.c: 275:rpc_nvmf_get_subsystems: *WARNING*: rpc_nvmf_get_subsystems: deprecated feature listener.transport is deprecated in favor of trtype to be removed in v24.05 00:19:52.638 [ 00:19:52.638 { 00:19:52.638 "allow_any_host": true, 00:19:52.638 "hosts": [], 00:19:52.638 "listen_addresses": [], 00:19:52.638 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:19:52.638 "subtype": "Discovery" 00:19:52.638 }, 00:19:52.638 { 00:19:52.638 "allow_any_host": true, 00:19:52.638 "hosts": 
[], 00:19:52.638 "listen_addresses": [ 00:19:52.638 { 00:19:52.638 "adrfam": "IPv4", 00:19:52.638 "traddr": "10.0.0.2", 00:19:52.638 "transport": "TCP", 00:19:52.638 "trsvcid": "4420", 00:19:52.638 "trtype": "TCP" 00:19:52.638 } 00:19:52.638 ], 00:19:52.638 "max_cntlid": 65519, 00:19:52.638 "max_namespaces": 2, 00:19:52.638 "min_cntlid": 1, 00:19:52.638 "model_number": "SPDK bdev Controller", 00:19:52.638 "namespaces": [ 00:19:52.638 { 00:19:52.638 "bdev_name": "Malloc0", 00:19:52.638 "name": "Malloc0", 00:19:52.638 "nguid": "937A2FA7537840149713ED63C201297D", 00:19:52.638 "nsid": 1, 00:19:52.638 "uuid": "937a2fa7-5378-4014-9713-ed63c201297d" 00:19:52.638 } 00:19:52.638 ], 00:19:52.638 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:52.638 "serial_number": "SPDK00000000000001", 00:19:52.638 "subtype": "NVMe" 00:19:52.638 } 00:19:52.638 ] 00:19:52.638 00:54:04 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:52.638 00:54:04 -- host/aer.sh@23 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:19:52.638 00:54:04 -- host/aer.sh@24 -- # rm -f /tmp/aer_touch_file 00:19:52.638 00:54:04 -- host/aer.sh@33 -- # aerpid=93145 00:19:52.638 00:54:04 -- host/aer.sh@36 -- # waitforfile /tmp/aer_touch_file 00:19:52.638 00:54:04 -- host/aer.sh@27 -- # /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -n 2 -t /tmp/aer_touch_file 00:19:52.638 00:54:04 -- common/autotest_common.sh@1254 -- # local i=0 00:19:52.638 00:54:04 -- common/autotest_common.sh@1255 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:19:52.638 00:54:04 -- common/autotest_common.sh@1256 -- # '[' 0 -lt 200 ']' 00:19:52.638 00:54:04 -- common/autotest_common.sh@1257 -- # i=1 00:19:52.638 00:54:04 -- common/autotest_common.sh@1258 -- # sleep 0.1 00:19:52.638 00:54:05 -- common/autotest_common.sh@1255 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:19:52.639 00:54:05 -- common/autotest_common.sh@1256 -- # '[' 1 -lt 200 ']' 00:19:52.639 00:54:05 -- common/autotest_common.sh@1257 -- # i=2 00:19:52.639 00:54:05 -- common/autotest_common.sh@1258 -- # sleep 0.1 00:19:52.898 00:54:05 -- common/autotest_common.sh@1255 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:19:52.898 00:54:05 -- common/autotest_common.sh@1261 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:19:52.898 00:54:05 -- common/autotest_common.sh@1265 -- # return 0 00:19:52.898 00:54:05 -- host/aer.sh@39 -- # rpc_cmd bdev_malloc_create 64 4096 --name Malloc1 00:19:52.898 00:54:05 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:52.898 00:54:05 -- common/autotest_common.sh@10 -- # set +x 00:19:52.898 Malloc1 00:19:52.898 00:54:05 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:52.898 00:54:05 -- host/aer.sh@40 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2 00:19:52.898 00:54:05 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:52.898 00:54:05 -- common/autotest_common.sh@10 -- # set +x 00:19:52.898 00:54:05 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:52.898 00:54:05 -- host/aer.sh@41 -- # rpc_cmd nvmf_get_subsystems 00:19:52.898 00:54:05 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:52.898 00:54:05 -- common/autotest_common.sh@10 -- # set +x 00:19:52.898 Asynchronous Event Request test 00:19:52.898 Attaching to 10.0.0.2 00:19:52.898 Attached to 10.0.0.2 00:19:52.898 Registering asynchronous event callbacks... 00:19:52.898 Starting namespace attribute notice tests for all controllers... 
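The trace above provisions the TCP target entirely over JSON-RPC: create the transport, back a namespace with a malloc bdev, create the subsystem, attach the namespace and a listener on 10.0.0.2:4420, launch the aer helper, and finally hot-add a second namespace so the target raises a namespace-attribute-changed event. A minimal standalone sketch of that same flow, assuming a running nvmf_tgt and SPDK's scripts/rpc.py (abbreviated rpc.py) talking to the default /var/tmp/spdk.sock; all values and helper paths are copied from the trace:

  rpc.py nvmf_create_transport -t tcp -o -u 8192
  rpc.py bdev_malloc_create 64 512 --name Malloc0
  rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 2
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  # Start the AER listener, then hot-add a second namespace; the target should emit a
  # namespace-attribute-changed AEN that the tool reports through aer_cb.
  /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer \
      -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' \
      -n 2 -t /tmp/aer_touch_file &
  rpc.py bdev_malloc_create 64 4096 --name Malloc1
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2

The harness itself wraps each call in rpc_cmd and polls for /tmp/aer_touch_file before adding the second namespace, so this is a sketch of the sequence rather than a drop-in replacement for host/aer.sh.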
00:19:52.898 10.0.0.2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:19:52.898 aer_cb - Changed Namespace 00:19:52.898 Cleaning up... 00:19:52.898 [ 00:19:52.898 { 00:19:52.898 "allow_any_host": true, 00:19:52.898 "hosts": [], 00:19:52.898 "listen_addresses": [], 00:19:52.898 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:19:52.898 "subtype": "Discovery" 00:19:52.898 }, 00:19:52.898 { 00:19:52.898 "allow_any_host": true, 00:19:52.898 "hosts": [], 00:19:52.898 "listen_addresses": [ 00:19:52.898 { 00:19:52.898 "adrfam": "IPv4", 00:19:52.898 "traddr": "10.0.0.2", 00:19:52.898 "transport": "TCP", 00:19:52.898 "trsvcid": "4420", 00:19:52.898 "trtype": "TCP" 00:19:52.898 } 00:19:52.898 ], 00:19:52.898 "max_cntlid": 65519, 00:19:52.898 "max_namespaces": 2, 00:19:52.898 "min_cntlid": 1, 00:19:52.898 "model_number": "SPDK bdev Controller", 00:19:52.898 "namespaces": [ 00:19:52.898 { 00:19:52.898 "bdev_name": "Malloc0", 00:19:52.898 "name": "Malloc0", 00:19:52.898 "nguid": "937A2FA7537840149713ED63C201297D", 00:19:52.898 "nsid": 1, 00:19:52.898 "uuid": "937a2fa7-5378-4014-9713-ed63c201297d" 00:19:52.898 }, 00:19:52.898 { 00:19:52.898 "bdev_name": "Malloc1", 00:19:52.898 "name": "Malloc1", 00:19:52.898 "nguid": "99371ADB32CE4603907D3F317F434189", 00:19:52.898 "nsid": 2, 00:19:52.898 "uuid": "99371adb-32ce-4603-907d-3f317f434189" 00:19:52.898 } 00:19:52.898 ], 00:19:52.898 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:52.898 "serial_number": "SPDK00000000000001", 00:19:52.898 "subtype": "NVMe" 00:19:52.898 } 00:19:52.898 ] 00:19:52.898 00:54:05 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:52.898 00:54:05 -- host/aer.sh@43 -- # wait 93145 00:19:52.898 00:54:05 -- host/aer.sh@45 -- # rpc_cmd bdev_malloc_delete Malloc0 00:19:52.898 00:54:05 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:52.898 00:54:05 -- common/autotest_common.sh@10 -- # set +x 00:19:52.898 00:54:05 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:52.898 00:54:05 -- host/aer.sh@46 -- # rpc_cmd bdev_malloc_delete Malloc1 00:19:52.898 00:54:05 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:52.898 00:54:05 -- common/autotest_common.sh@10 -- # set +x 00:19:52.898 00:54:05 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:52.898 00:54:05 -- host/aer.sh@47 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:19:52.898 00:54:05 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:52.898 00:54:05 -- common/autotest_common.sh@10 -- # set +x 00:19:52.898 00:54:05 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:52.898 00:54:05 -- host/aer.sh@49 -- # trap - SIGINT SIGTERM EXIT 00:19:52.898 00:54:05 -- host/aer.sh@51 -- # nvmftestfini 00:19:52.898 00:54:05 -- nvmf/common.sh@476 -- # nvmfcleanup 00:19:52.898 00:54:05 -- nvmf/common.sh@116 -- # sync 00:19:53.157 00:54:05 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:19:53.157 00:54:05 -- nvmf/common.sh@119 -- # set +e 00:19:53.157 00:54:05 -- nvmf/common.sh@120 -- # for i in {1..20} 00:19:53.157 00:54:05 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:19:53.157 rmmod nvme_tcp 00:19:53.157 rmmod nvme_fabrics 00:19:53.157 rmmod nvme_keyring 00:19:53.157 00:54:05 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:19:53.157 00:54:05 -- nvmf/common.sh@123 -- # set -e 00:19:53.157 00:54:05 -- nvmf/common.sh@124 -- # return 0 00:19:53.157 00:54:05 -- nvmf/common.sh@477 -- # '[' -n 93085 ']' 00:19:53.157 00:54:05 -- nvmf/common.sh@478 -- # killprocess 93085 00:19:53.157 00:54:05 -- 
common/autotest_common.sh@936 -- # '[' -z 93085 ']' 00:19:53.157 00:54:05 -- common/autotest_common.sh@940 -- # kill -0 93085 00:19:53.157 00:54:05 -- common/autotest_common.sh@941 -- # uname 00:19:53.157 00:54:05 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:19:53.157 00:54:05 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 93085 00:19:53.157 00:54:05 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:19:53.157 killing process with pid 93085 00:19:53.157 00:54:05 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:19:53.157 00:54:05 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 93085' 00:19:53.157 00:54:05 -- common/autotest_common.sh@955 -- # kill 93085 00:19:53.157 [2024-12-03 00:54:05.501976] app.c: 883:log_deprecation_hits: *WARNING*: rpc_nvmf_get_subsystems: deprecation 'listener.transport is deprecated in favor of trtype' scheduled for removal in v24.05 hit 1 times 00:19:53.157 00:54:05 -- common/autotest_common.sh@960 -- # wait 93085 00:19:53.415 00:54:05 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:19:53.415 00:54:05 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:19:53.415 00:54:05 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:19:53.415 00:54:05 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:19:53.415 00:54:05 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:19:53.415 00:54:05 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:53.415 00:54:05 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:53.415 00:54:05 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:53.416 00:54:05 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:19:53.416 00:19:53.416 real 0m2.452s 00:19:53.416 user 0m6.723s 00:19:53.416 sys 0m0.686s 00:19:53.416 00:54:05 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:19:53.416 00:54:05 -- common/autotest_common.sh@10 -- # set +x 00:19:53.416 ************************************ 00:19:53.416 END TEST nvmf_aer 00:19:53.416 ************************************ 00:19:53.416 00:54:05 -- nvmf/nvmf.sh@93 -- # run_test nvmf_async_init /home/vagrant/spdk_repo/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:19:53.416 00:54:05 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:19:53.416 00:54:05 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:19:53.416 00:54:05 -- common/autotest_common.sh@10 -- # set +x 00:19:53.416 ************************************ 00:19:53.416 START TEST nvmf_async_init 00:19:53.416 ************************************ 00:19:53.416 00:54:05 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:19:53.416 * Looking for test storage... 
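The teardown just logged follows the pattern shared by every suite in this run: check that the pid recorded at startup still belongs to an SPDK reactor (kill -0 plus a ps comm check for reactor_0), terminate it, wait, then unload the NVMe fabrics modules and flush the test interface. A rough sketch of that pattern; NVMF_PID is an illustrative stand-in for the pid variable the suite tracks (nvmfpid in the trace):

  if kill -0 "$NVMF_PID" 2>/dev/null && \
     [ "$(ps --no-headers -o comm= "$NVMF_PID")" = reactor_0 ]; then
      kill "$NVMF_PID"
      wait "$NVMF_PID" 2>/dev/null
  fi
  modprobe -v -r nvme-tcp       # in the trace this also drops nvme_fabrics and nvme_keyring as dependencies
  modprobe -v -r nvme-fabrics
  ip -4 addr flush nvmf_init_if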
00:19:53.416 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:19:53.416 00:54:05 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:19:53.416 00:54:05 -- common/autotest_common.sh@1690 -- # lcov --version 00:19:53.416 00:54:05 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:19:53.675 00:54:05 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:19:53.675 00:54:05 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:19:53.675 00:54:05 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:19:53.675 00:54:05 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:19:53.675 00:54:05 -- scripts/common.sh@335 -- # IFS=.-: 00:19:53.675 00:54:05 -- scripts/common.sh@335 -- # read -ra ver1 00:19:53.675 00:54:05 -- scripts/common.sh@336 -- # IFS=.-: 00:19:53.675 00:54:05 -- scripts/common.sh@336 -- # read -ra ver2 00:19:53.675 00:54:05 -- scripts/common.sh@337 -- # local 'op=<' 00:19:53.675 00:54:05 -- scripts/common.sh@339 -- # ver1_l=2 00:19:53.675 00:54:05 -- scripts/common.sh@340 -- # ver2_l=1 00:19:53.675 00:54:05 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:19:53.675 00:54:05 -- scripts/common.sh@343 -- # case "$op" in 00:19:53.675 00:54:05 -- scripts/common.sh@344 -- # : 1 00:19:53.675 00:54:05 -- scripts/common.sh@363 -- # (( v = 0 )) 00:19:53.675 00:54:05 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:19:53.675 00:54:05 -- scripts/common.sh@364 -- # decimal 1 00:19:53.675 00:54:05 -- scripts/common.sh@352 -- # local d=1 00:19:53.675 00:54:05 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:53.675 00:54:05 -- scripts/common.sh@354 -- # echo 1 00:19:53.675 00:54:05 -- scripts/common.sh@364 -- # ver1[v]=1 00:19:53.675 00:54:05 -- scripts/common.sh@365 -- # decimal 2 00:19:53.675 00:54:05 -- scripts/common.sh@352 -- # local d=2 00:19:53.675 00:54:05 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:19:53.675 00:54:05 -- scripts/common.sh@354 -- # echo 2 00:19:53.675 00:54:05 -- scripts/common.sh@365 -- # ver2[v]=2 00:19:53.675 00:54:05 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:19:53.675 00:54:05 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:19:53.675 00:54:05 -- scripts/common.sh@367 -- # return 0 00:19:53.675 00:54:05 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:19:53.675 00:54:05 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:19:53.675 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:53.675 --rc genhtml_branch_coverage=1 00:19:53.675 --rc genhtml_function_coverage=1 00:19:53.675 --rc genhtml_legend=1 00:19:53.675 --rc geninfo_all_blocks=1 00:19:53.675 --rc geninfo_unexecuted_blocks=1 00:19:53.675 00:19:53.675 ' 00:19:53.675 00:54:05 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:19:53.675 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:53.675 --rc genhtml_branch_coverage=1 00:19:53.675 --rc genhtml_function_coverage=1 00:19:53.675 --rc genhtml_legend=1 00:19:53.675 --rc geninfo_all_blocks=1 00:19:53.675 --rc geninfo_unexecuted_blocks=1 00:19:53.675 00:19:53.675 ' 00:19:53.675 00:54:05 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:19:53.675 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:53.675 --rc genhtml_branch_coverage=1 00:19:53.675 --rc genhtml_function_coverage=1 00:19:53.675 --rc genhtml_legend=1 00:19:53.675 --rc geninfo_all_blocks=1 00:19:53.675 --rc geninfo_unexecuted_blocks=1 00:19:53.675 00:19:53.675 ' 00:19:53.675 
00:54:05 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:19:53.675 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:53.675 --rc genhtml_branch_coverage=1 00:19:53.675 --rc genhtml_function_coverage=1 00:19:53.675 --rc genhtml_legend=1 00:19:53.675 --rc geninfo_all_blocks=1 00:19:53.675 --rc geninfo_unexecuted_blocks=1 00:19:53.675 00:19:53.675 ' 00:19:53.675 00:54:05 -- host/async_init.sh@11 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:19:53.675 00:54:05 -- nvmf/common.sh@7 -- # uname -s 00:19:53.675 00:54:05 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:53.675 00:54:05 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:53.675 00:54:05 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:53.675 00:54:05 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:53.675 00:54:05 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:53.675 00:54:05 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:53.675 00:54:05 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:53.675 00:54:05 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:53.675 00:54:05 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:53.676 00:54:05 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:53.676 00:54:06 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:15939434-fa82-47b6-ae7c-b62ba203cee8 00:19:53.676 00:54:06 -- nvmf/common.sh@18 -- # NVME_HOSTID=15939434-fa82-47b6-ae7c-b62ba203cee8 00:19:53.676 00:54:06 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:53.676 00:54:06 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:53.676 00:54:06 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:19:53.676 00:54:06 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:19:53.676 00:54:06 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:53.676 00:54:06 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:53.676 00:54:06 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:53.676 00:54:06 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:53.676 00:54:06 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:53.676 00:54:06 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:53.676 00:54:06 -- paths/export.sh@5 -- # export PATH 00:19:53.676 00:54:06 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:53.676 00:54:06 -- nvmf/common.sh@46 -- # : 0 00:19:53.676 00:54:06 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:19:53.676 00:54:06 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:19:53.676 00:54:06 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:19:53.676 00:54:06 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:53.676 00:54:06 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:53.676 00:54:06 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:19:53.676 00:54:06 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:19:53.676 00:54:06 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:19:53.676 00:54:06 -- host/async_init.sh@13 -- # null_bdev_size=1024 00:19:53.676 00:54:06 -- host/async_init.sh@14 -- # null_block_size=512 00:19:53.676 00:54:06 -- host/async_init.sh@15 -- # null_bdev=null0 00:19:53.676 00:54:06 -- host/async_init.sh@16 -- # nvme_bdev=nvme0 00:19:53.676 00:54:06 -- host/async_init.sh@20 -- # uuidgen 00:19:53.676 00:54:06 -- host/async_init.sh@20 -- # tr -d - 00:19:53.676 00:54:06 -- host/async_init.sh@20 -- # nguid=8c13586817c543bb8c7622a89d82b3ee 00:19:53.676 00:54:06 -- host/async_init.sh@22 -- # nvmftestinit 00:19:53.676 00:54:06 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:19:53.676 00:54:06 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:53.676 00:54:06 -- nvmf/common.sh@436 -- # prepare_net_devs 00:19:53.676 00:54:06 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:19:53.676 00:54:06 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:19:53.676 00:54:06 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:53.676 00:54:06 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:53.676 00:54:06 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:53.676 00:54:06 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:19:53.676 00:54:06 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:19:53.676 00:54:06 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:19:53.676 00:54:06 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:19:53.676 00:54:06 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:19:53.676 00:54:06 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:19:53.676 00:54:06 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:53.676 00:54:06 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:53.676 00:54:06 -- 
nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:19:53.676 00:54:06 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:19:53.676 00:54:06 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:19:53.676 00:54:06 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:19:53.676 00:54:06 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:19:53.676 00:54:06 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:53.676 00:54:06 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:19:53.676 00:54:06 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:19:53.676 00:54:06 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:19:53.676 00:54:06 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:19:53.676 00:54:06 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:19:53.676 00:54:06 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:19:53.676 Cannot find device "nvmf_tgt_br" 00:19:53.676 00:54:06 -- nvmf/common.sh@154 -- # true 00:19:53.676 00:54:06 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:19:53.676 Cannot find device "nvmf_tgt_br2" 00:19:53.676 00:54:06 -- nvmf/common.sh@155 -- # true 00:19:53.676 00:54:06 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:19:53.676 00:54:06 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:19:53.676 Cannot find device "nvmf_tgt_br" 00:19:53.676 00:54:06 -- nvmf/common.sh@157 -- # true 00:19:53.676 00:54:06 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:19:53.676 Cannot find device "nvmf_tgt_br2" 00:19:53.676 00:54:06 -- nvmf/common.sh@158 -- # true 00:19:53.676 00:54:06 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:19:53.676 00:54:06 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:19:53.676 00:54:06 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:19:53.676 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:19:53.676 00:54:06 -- nvmf/common.sh@161 -- # true 00:19:53.676 00:54:06 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:19:53.676 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:19:53.676 00:54:06 -- nvmf/common.sh@162 -- # true 00:19:53.676 00:54:06 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:19:53.676 00:54:06 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:19:53.676 00:54:06 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:19:53.676 00:54:06 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:19:53.935 00:54:06 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:19:53.935 00:54:06 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:19:53.935 00:54:06 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:19:53.935 00:54:06 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:19:53.935 00:54:06 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:19:53.935 00:54:06 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:19:53.935 00:54:06 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:19:53.935 00:54:06 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:19:53.935 00:54:06 -- 
nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:19:53.935 00:54:06 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:19:53.935 00:54:06 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:19:53.935 00:54:06 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:19:53.935 00:54:06 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:19:53.935 00:54:06 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:19:53.935 00:54:06 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:19:53.935 00:54:06 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:19:53.935 00:54:06 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:19:53.935 00:54:06 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:19:53.935 00:54:06 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:19:53.935 00:54:06 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:19:53.935 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:53.935 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.063 ms 00:19:53.935 00:19:53.935 --- 10.0.0.2 ping statistics --- 00:19:53.935 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:53.935 rtt min/avg/max/mdev = 0.063/0.063/0.063/0.000 ms 00:19:53.935 00:54:06 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:19:53.935 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:19:53.935 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.061 ms 00:19:53.935 00:19:53.935 --- 10.0.0.3 ping statistics --- 00:19:53.935 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:53.935 rtt min/avg/max/mdev = 0.061/0.061/0.061/0.000 ms 00:19:53.935 00:54:06 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:19:53.935 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:19:53.935 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.064 ms 00:19:53.935 00:19:53.935 --- 10.0.0.1 ping statistics --- 00:19:53.935 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:53.935 rtt min/avg/max/mdev = 0.064/0.064/0.064/0.000 ms 00:19:53.935 00:54:06 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:53.935 00:54:06 -- nvmf/common.sh@421 -- # return 0 00:19:53.935 00:54:06 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:19:53.935 00:54:06 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:53.935 00:54:06 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:19:53.935 00:54:06 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:19:53.935 00:54:06 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:53.935 00:54:06 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:19:53.935 00:54:06 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:19:53.935 00:54:06 -- host/async_init.sh@23 -- # nvmfappstart -m 0x1 00:19:53.935 00:54:06 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:19:53.935 00:54:06 -- common/autotest_common.sh@722 -- # xtrace_disable 00:19:53.935 00:54:06 -- common/autotest_common.sh@10 -- # set +x 00:19:53.935 00:54:06 -- nvmf/common.sh@469 -- # nvmfpid=93325 00:19:53.935 00:54:06 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:19:53.935 00:54:06 -- nvmf/common.sh@470 -- # waitforlisten 93325 00:19:53.935 00:54:06 -- common/autotest_common.sh@829 -- # '[' -z 93325 ']' 00:19:53.935 00:54:06 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:53.935 00:54:06 -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:53.935 00:54:06 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:53.935 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:53.935 00:54:06 -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:53.935 00:54:06 -- common/autotest_common.sh@10 -- # set +x 00:19:54.194 [2024-12-03 00:54:06.469502] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:19:54.194 [2024-12-03 00:54:06.469598] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:54.194 [2024-12-03 00:54:06.612569] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:54.194 [2024-12-03 00:54:06.683221] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:19:54.194 [2024-12-03 00:54:06.683424] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:54.194 [2024-12-03 00:54:06.683441] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:54.194 [2024-12-03 00:54:06.683453] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
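The nvmf_veth_init sequence above builds the virtual test network from scratch: a nvmf_tgt_ns_spdk namespace, veth pairs for the initiator and two target interfaces, a nvmf_br bridge joining the host-side ends, iptables rules for port 4420 and bridge forwarding, and a ping sweep confirming 10.0.0.1, 10.0.0.2 and 10.0.0.3 answer before the target is launched inside the namespace. A condensed sketch of the same topology, showing only the first target interface (the suite also creates nvmf_tgt_if2 with 10.0.0.3):

  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br
  ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk

  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
  ip link set nvmf_init_if up
  ip link set nvmf_init_br up
  ip link set nvmf_tgt_br up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip netns exec nvmf_tgt_ns_spdk ip link set lo up

  ip link add nvmf_br type bridge
  ip link set nvmf_br up
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br master nvmf_br

  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
  iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
  ping -c 1 10.0.0.2    # initiator-to-target reachability check before starting nvmf_tgt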
00:19:54.194 [2024-12-03 00:54:06.683491] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:19:55.127 00:54:07 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:55.127 00:54:07 -- common/autotest_common.sh@862 -- # return 0 00:19:55.127 00:54:07 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:19:55.127 00:54:07 -- common/autotest_common.sh@728 -- # xtrace_disable 00:19:55.127 00:54:07 -- common/autotest_common.sh@10 -- # set +x 00:19:55.127 00:54:07 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:55.127 00:54:07 -- host/async_init.sh@26 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:19:55.128 00:54:07 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:55.128 00:54:07 -- common/autotest_common.sh@10 -- # set +x 00:19:55.128 [2024-12-03 00:54:07.536131] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:55.128 00:54:07 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:55.128 00:54:07 -- host/async_init.sh@27 -- # rpc_cmd bdev_null_create null0 1024 512 00:19:55.128 00:54:07 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:55.128 00:54:07 -- common/autotest_common.sh@10 -- # set +x 00:19:55.128 null0 00:19:55.128 00:54:07 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:55.128 00:54:07 -- host/async_init.sh@28 -- # rpc_cmd bdev_wait_for_examine 00:19:55.128 00:54:07 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:55.128 00:54:07 -- common/autotest_common.sh@10 -- # set +x 00:19:55.128 00:54:07 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:55.128 00:54:07 -- host/async_init.sh@29 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a 00:19:55.128 00:54:07 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:55.128 00:54:07 -- common/autotest_common.sh@10 -- # set +x 00:19:55.128 00:54:07 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:55.128 00:54:07 -- host/async_init.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 -g 8c13586817c543bb8c7622a89d82b3ee 00:19:55.128 00:54:07 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:55.128 00:54:07 -- common/autotest_common.sh@10 -- # set +x 00:19:55.128 00:54:07 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:55.128 00:54:07 -- host/async_init.sh@31 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:19:55.128 00:54:07 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:55.128 00:54:07 -- common/autotest_common.sh@10 -- # set +x 00:19:55.128 [2024-12-03 00:54:07.576245] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:55.128 00:54:07 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:55.128 00:54:07 -- host/async_init.sh@37 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode0 00:19:55.128 00:54:07 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:55.128 00:54:07 -- common/autotest_common.sh@10 -- # set +x 00:19:55.386 nvme0n1 00:19:55.386 00:54:07 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:55.386 00:54:07 -- host/async_init.sh@41 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:19:55.386 00:54:07 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:55.386 00:54:07 -- common/autotest_common.sh@10 -- # set +x 00:19:55.386 [ 00:19:55.386 { 00:19:55.386 "aliases": [ 00:19:55.386 "8c135868-17c5-43bb-8c76-22a89d82b3ee" 
00:19:55.386 ], 00:19:55.386 "assigned_rate_limits": { 00:19:55.386 "r_mbytes_per_sec": 0, 00:19:55.386 "rw_ios_per_sec": 0, 00:19:55.386 "rw_mbytes_per_sec": 0, 00:19:55.386 "w_mbytes_per_sec": 0 00:19:55.386 }, 00:19:55.386 "block_size": 512, 00:19:55.386 "claimed": false, 00:19:55.386 "driver_specific": { 00:19:55.386 "mp_policy": "active_passive", 00:19:55.386 "nvme": [ 00:19:55.386 { 00:19:55.386 "ctrlr_data": { 00:19:55.386 "ana_reporting": false, 00:19:55.386 "cntlid": 1, 00:19:55.386 "firmware_revision": "24.01.1", 00:19:55.386 "model_number": "SPDK bdev Controller", 00:19:55.386 "multi_ctrlr": true, 00:19:55.386 "oacs": { 00:19:55.386 "firmware": 0, 00:19:55.386 "format": 0, 00:19:55.386 "ns_manage": 0, 00:19:55.386 "security": 0 00:19:55.386 }, 00:19:55.386 "serial_number": "00000000000000000000", 00:19:55.386 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:19:55.386 "vendor_id": "0x8086" 00:19:55.386 }, 00:19:55.386 "ns_data": { 00:19:55.386 "can_share": true, 00:19:55.386 "id": 1 00:19:55.386 }, 00:19:55.386 "trid": { 00:19:55.386 "adrfam": "IPv4", 00:19:55.386 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:19:55.386 "traddr": "10.0.0.2", 00:19:55.386 "trsvcid": "4420", 00:19:55.386 "trtype": "TCP" 00:19:55.386 }, 00:19:55.386 "vs": { 00:19:55.386 "nvme_version": "1.3" 00:19:55.386 } 00:19:55.386 } 00:19:55.386 ] 00:19:55.386 }, 00:19:55.386 "name": "nvme0n1", 00:19:55.386 "num_blocks": 2097152, 00:19:55.386 "product_name": "NVMe disk", 00:19:55.386 "supported_io_types": { 00:19:55.386 "abort": true, 00:19:55.386 "compare": true, 00:19:55.386 "compare_and_write": true, 00:19:55.386 "flush": true, 00:19:55.386 "nvme_admin": true, 00:19:55.386 "nvme_io": true, 00:19:55.386 "read": true, 00:19:55.386 "reset": true, 00:19:55.386 "unmap": false, 00:19:55.386 "write": true, 00:19:55.386 "write_zeroes": true 00:19:55.386 }, 00:19:55.386 "uuid": "8c135868-17c5-43bb-8c76-22a89d82b3ee", 00:19:55.386 "zoned": false 00:19:55.386 } 00:19:55.386 ] 00:19:55.386 00:54:07 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:55.386 00:54:07 -- host/async_init.sh@44 -- # rpc_cmd bdev_nvme_reset_controller nvme0 00:19:55.386 00:54:07 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:55.386 00:54:07 -- common/autotest_common.sh@10 -- # set +x 00:19:55.386 [2024-12-03 00:54:07.840239] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:19:55.386 [2024-12-03 00:54:07.840352] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xba5a00 (9): Bad file descriptor 00:19:55.678 [2024-12-03 00:54:07.972523] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
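Unlike the aer test, the async_init flow keeps the host side inside SPDK: a null bdev backs the namespace with an explicit NGUID, and bdev_nvme_attach_controller / bdev_nvme_reset_controller exercise connect and reconnect against the same listener, with bdev_get_bdevs confirming the resulting nvme0n1 each time. A minimal sketch of that sequence, again assuming rpc.py against a running target, with values copied from the trace:

  rpc.py nvmf_create_transport -t tcp -o
  rpc.py bdev_null_create null0 1024 512
  rpc.py bdev_wait_for_examine
  rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 -g 8c13586817c543bb8c7622a89d82b3ee
  rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420

  # Attach as an SPDK host bdev, confirm nvme0n1 shows up, then force a reset/reconnect.
  rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4420 \
      -n nqn.2016-06.io.spdk:cnode0
  rpc.py bdev_get_bdevs -b nvme0n1
  rpc.py bdev_nvme_reset_controller nvme0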
00:19:55.678 00:54:07 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:55.678 00:54:07 -- host/async_init.sh@47 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:19:55.679 00:54:07 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:55.679 00:54:07 -- common/autotest_common.sh@10 -- # set +x 00:19:55.679 [ 00:19:55.679 { 00:19:55.679 "aliases": [ 00:19:55.679 "8c135868-17c5-43bb-8c76-22a89d82b3ee" 00:19:55.679 ], 00:19:55.679 "assigned_rate_limits": { 00:19:55.679 "r_mbytes_per_sec": 0, 00:19:55.679 "rw_ios_per_sec": 0, 00:19:55.679 "rw_mbytes_per_sec": 0, 00:19:55.679 "w_mbytes_per_sec": 0 00:19:55.679 }, 00:19:55.679 "block_size": 512, 00:19:55.679 "claimed": false, 00:19:55.679 "driver_specific": { 00:19:55.679 "mp_policy": "active_passive", 00:19:55.679 "nvme": [ 00:19:55.679 { 00:19:55.679 "ctrlr_data": { 00:19:55.679 "ana_reporting": false, 00:19:55.679 "cntlid": 2, 00:19:55.679 "firmware_revision": "24.01.1", 00:19:55.679 "model_number": "SPDK bdev Controller", 00:19:55.679 "multi_ctrlr": true, 00:19:55.679 "oacs": { 00:19:55.679 "firmware": 0, 00:19:55.679 "format": 0, 00:19:55.679 "ns_manage": 0, 00:19:55.679 "security": 0 00:19:55.679 }, 00:19:55.679 "serial_number": "00000000000000000000", 00:19:55.679 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:19:55.679 "vendor_id": "0x8086" 00:19:55.679 }, 00:19:55.679 "ns_data": { 00:19:55.679 "can_share": true, 00:19:55.679 "id": 1 00:19:55.679 }, 00:19:55.679 "trid": { 00:19:55.679 "adrfam": "IPv4", 00:19:55.679 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:19:55.679 "traddr": "10.0.0.2", 00:19:55.679 "trsvcid": "4420", 00:19:55.679 "trtype": "TCP" 00:19:55.679 }, 00:19:55.679 "vs": { 00:19:55.679 "nvme_version": "1.3" 00:19:55.679 } 00:19:55.679 } 00:19:55.679 ] 00:19:55.679 }, 00:19:55.679 "name": "nvme0n1", 00:19:55.679 "num_blocks": 2097152, 00:19:55.679 "product_name": "NVMe disk", 00:19:55.679 "supported_io_types": { 00:19:55.679 "abort": true, 00:19:55.679 "compare": true, 00:19:55.679 "compare_and_write": true, 00:19:55.679 "flush": true, 00:19:55.679 "nvme_admin": true, 00:19:55.679 "nvme_io": true, 00:19:55.679 "read": true, 00:19:55.679 "reset": true, 00:19:55.679 "unmap": false, 00:19:55.679 "write": true, 00:19:55.679 "write_zeroes": true 00:19:55.679 }, 00:19:55.679 "uuid": "8c135868-17c5-43bb-8c76-22a89d82b3ee", 00:19:55.679 "zoned": false 00:19:55.679 } 00:19:55.679 ] 00:19:55.679 00:54:07 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:55.679 00:54:07 -- host/async_init.sh@50 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:55.679 00:54:07 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:55.679 00:54:07 -- common/autotest_common.sh@10 -- # set +x 00:19:55.679 00:54:08 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:55.679 00:54:08 -- host/async_init.sh@53 -- # mktemp 00:19:55.679 00:54:08 -- host/async_init.sh@53 -- # key_path=/tmp/tmp.oGmTSACWNS 00:19:55.679 00:54:08 -- host/async_init.sh@54 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:19:55.679 00:54:08 -- host/async_init.sh@55 -- # chmod 0600 /tmp/tmp.oGmTSACWNS 00:19:55.679 00:54:08 -- host/async_init.sh@56 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable 00:19:55.679 00:54:08 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:55.679 00:54:08 -- common/autotest_common.sh@10 -- # set +x 00:19:55.679 00:54:08 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:55.679 00:54:08 -- host/async_init.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener 
nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 --secure-channel 00:19:55.679 00:54:08 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:55.679 00:54:08 -- common/autotest_common.sh@10 -- # set +x 00:19:55.679 [2024-12-03 00:54:08.032385] tcp.c: 914:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:19:55.679 [2024-12-03 00:54:08.032581] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:19:55.679 00:54:08 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:55.679 00:54:08 -- host/async_init.sh@59 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.oGmTSACWNS 00:19:55.679 00:54:08 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:55.679 00:54:08 -- common/autotest_common.sh@10 -- # set +x 00:19:55.679 00:54:08 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:55.679 00:54:08 -- host/async_init.sh@65 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4421 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.oGmTSACWNS 00:19:55.679 00:54:08 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:55.679 00:54:08 -- common/autotest_common.sh@10 -- # set +x 00:19:55.679 [2024-12-03 00:54:08.048379] bdev_nvme_rpc.c: 477:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:55.679 nvme0n1 00:19:55.679 00:54:08 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:55.679 00:54:08 -- host/async_init.sh@69 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:19:55.679 00:54:08 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:55.679 00:54:08 -- common/autotest_common.sh@10 -- # set +x 00:19:55.679 [ 00:19:55.679 { 00:19:55.679 "aliases": [ 00:19:55.679 "8c135868-17c5-43bb-8c76-22a89d82b3ee" 00:19:55.679 ], 00:19:55.679 "assigned_rate_limits": { 00:19:55.679 "r_mbytes_per_sec": 0, 00:19:55.679 "rw_ios_per_sec": 0, 00:19:55.679 "rw_mbytes_per_sec": 0, 00:19:55.679 "w_mbytes_per_sec": 0 00:19:55.679 }, 00:19:55.679 "block_size": 512, 00:19:55.679 "claimed": false, 00:19:55.679 "driver_specific": { 00:19:55.679 "mp_policy": "active_passive", 00:19:55.679 "nvme": [ 00:19:55.679 { 00:19:55.679 "ctrlr_data": { 00:19:55.679 "ana_reporting": false, 00:19:55.679 "cntlid": 3, 00:19:55.679 "firmware_revision": "24.01.1", 00:19:55.679 "model_number": "SPDK bdev Controller", 00:19:55.679 "multi_ctrlr": true, 00:19:55.679 "oacs": { 00:19:55.679 "firmware": 0, 00:19:55.679 "format": 0, 00:19:55.679 "ns_manage": 0, 00:19:55.679 "security": 0 00:19:55.679 }, 00:19:55.679 "serial_number": "00000000000000000000", 00:19:55.679 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:19:55.679 "vendor_id": "0x8086" 00:19:55.679 }, 00:19:55.679 "ns_data": { 00:19:55.679 "can_share": true, 00:19:55.679 "id": 1 00:19:55.679 }, 00:19:55.679 "trid": { 00:19:55.679 "adrfam": "IPv4", 00:19:55.679 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:19:55.679 "traddr": "10.0.0.2", 00:19:55.679 "trsvcid": "4421", 00:19:55.679 "trtype": "TCP" 00:19:55.679 }, 00:19:55.679 "vs": { 00:19:55.679 "nvme_version": "1.3" 00:19:55.679 } 00:19:55.679 } 00:19:55.679 ] 00:19:55.679 }, 00:19:55.679 "name": "nvme0n1", 00:19:55.679 "num_blocks": 2097152, 00:19:55.679 "product_name": "NVMe disk", 00:19:55.679 "supported_io_types": { 00:19:55.679 "abort": true, 00:19:55.679 "compare": true, 00:19:55.679 "compare_and_write": true, 00:19:55.679 "flush": true, 00:19:55.679 "nvme_admin": true, 00:19:55.679 "nvme_io": true, 00:19:55.679 
"read": true, 00:19:55.679 "reset": true, 00:19:55.679 "unmap": false, 00:19:55.679 "write": true, 00:19:55.679 "write_zeroes": true 00:19:55.679 }, 00:19:55.679 "uuid": "8c135868-17c5-43bb-8c76-22a89d82b3ee", 00:19:55.679 "zoned": false 00:19:55.679 } 00:19:55.679 ] 00:19:55.679 00:54:08 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:55.679 00:54:08 -- host/async_init.sh@72 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:55.679 00:54:08 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:55.679 00:54:08 -- common/autotest_common.sh@10 -- # set +x 00:19:55.679 00:54:08 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:55.679 00:54:08 -- host/async_init.sh@75 -- # rm -f /tmp/tmp.oGmTSACWNS 00:19:55.679 00:54:08 -- host/async_init.sh@77 -- # trap - SIGINT SIGTERM EXIT 00:19:55.679 00:54:08 -- host/async_init.sh@78 -- # nvmftestfini 00:19:55.679 00:54:08 -- nvmf/common.sh@476 -- # nvmfcleanup 00:19:55.679 00:54:08 -- nvmf/common.sh@116 -- # sync 00:19:55.938 00:54:08 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:19:55.938 00:54:08 -- nvmf/common.sh@119 -- # set +e 00:19:55.938 00:54:08 -- nvmf/common.sh@120 -- # for i in {1..20} 00:19:55.938 00:54:08 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:19:55.938 rmmod nvme_tcp 00:19:55.938 rmmod nvme_fabrics 00:19:55.938 rmmod nvme_keyring 00:19:55.938 00:54:08 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:19:55.938 00:54:08 -- nvmf/common.sh@123 -- # set -e 00:19:55.938 00:54:08 -- nvmf/common.sh@124 -- # return 0 00:19:55.938 00:54:08 -- nvmf/common.sh@477 -- # '[' -n 93325 ']' 00:19:55.938 00:54:08 -- nvmf/common.sh@478 -- # killprocess 93325 00:19:55.938 00:54:08 -- common/autotest_common.sh@936 -- # '[' -z 93325 ']' 00:19:55.938 00:54:08 -- common/autotest_common.sh@940 -- # kill -0 93325 00:19:55.938 00:54:08 -- common/autotest_common.sh@941 -- # uname 00:19:55.938 00:54:08 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:19:55.938 00:54:08 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 93325 00:19:55.938 00:54:08 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:19:55.938 00:54:08 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:19:55.938 killing process with pid 93325 00:19:55.938 00:54:08 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 93325' 00:19:55.938 00:54:08 -- common/autotest_common.sh@955 -- # kill 93325 00:19:55.938 00:54:08 -- common/autotest_common.sh@960 -- # wait 93325 00:19:56.197 00:54:08 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:19:56.197 00:54:08 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:19:56.197 00:54:08 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:19:56.197 00:54:08 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:19:56.197 00:54:08 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:19:56.197 00:54:08 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:56.197 00:54:08 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:56.197 00:54:08 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:56.197 00:54:08 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:19:56.197 00:19:56.197 real 0m2.718s 00:19:56.197 user 0m2.538s 00:19:56.197 sys 0m0.668s 00:19:56.197 00:54:08 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:19:56.197 ************************************ 00:19:56.197 END TEST nvmf_async_init 00:19:56.197 00:54:08 -- common/autotest_common.sh@10 -- # set +x 00:19:56.197 
************************************ 00:19:56.197 00:54:08 -- nvmf/nvmf.sh@94 -- # run_test dma /home/vagrant/spdk_repo/spdk/test/nvmf/host/dma.sh --transport=tcp 00:19:56.197 00:54:08 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:19:56.197 00:54:08 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:19:56.197 00:54:08 -- common/autotest_common.sh@10 -- # set +x 00:19:56.197 ************************************ 00:19:56.197 START TEST dma 00:19:56.197 ************************************ 00:19:56.197 00:54:08 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/dma.sh --transport=tcp 00:19:56.197 * Looking for test storage... 00:19:56.197 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:19:56.197 00:54:08 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:19:56.197 00:54:08 -- common/autotest_common.sh@1690 -- # lcov --version 00:19:56.197 00:54:08 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:19:56.456 00:54:08 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:19:56.456 00:54:08 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:19:56.456 00:54:08 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:19:56.456 00:54:08 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:19:56.456 00:54:08 -- scripts/common.sh@335 -- # IFS=.-: 00:19:56.456 00:54:08 -- scripts/common.sh@335 -- # read -ra ver1 00:19:56.456 00:54:08 -- scripts/common.sh@336 -- # IFS=.-: 00:19:56.456 00:54:08 -- scripts/common.sh@336 -- # read -ra ver2 00:19:56.456 00:54:08 -- scripts/common.sh@337 -- # local 'op=<' 00:19:56.456 00:54:08 -- scripts/common.sh@339 -- # ver1_l=2 00:19:56.456 00:54:08 -- scripts/common.sh@340 -- # ver2_l=1 00:19:56.456 00:54:08 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:19:56.456 00:54:08 -- scripts/common.sh@343 -- # case "$op" in 00:19:56.456 00:54:08 -- scripts/common.sh@344 -- # : 1 00:19:56.456 00:54:08 -- scripts/common.sh@363 -- # (( v = 0 )) 00:19:56.456 00:54:08 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:19:56.456 00:54:08 -- scripts/common.sh@364 -- # decimal 1 00:19:56.456 00:54:08 -- scripts/common.sh@352 -- # local d=1 00:19:56.456 00:54:08 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:56.456 00:54:08 -- scripts/common.sh@354 -- # echo 1 00:19:56.456 00:54:08 -- scripts/common.sh@364 -- # ver1[v]=1 00:19:56.456 00:54:08 -- scripts/common.sh@365 -- # decimal 2 00:19:56.456 00:54:08 -- scripts/common.sh@352 -- # local d=2 00:19:56.456 00:54:08 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:19:56.456 00:54:08 -- scripts/common.sh@354 -- # echo 2 00:19:56.456 00:54:08 -- scripts/common.sh@365 -- # ver2[v]=2 00:19:56.456 00:54:08 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:19:56.456 00:54:08 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:19:56.456 00:54:08 -- scripts/common.sh@367 -- # return 0 00:19:56.456 00:54:08 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:19:56.456 00:54:08 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:19:56.456 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:56.456 --rc genhtml_branch_coverage=1 00:19:56.456 --rc genhtml_function_coverage=1 00:19:56.456 --rc genhtml_legend=1 00:19:56.456 --rc geninfo_all_blocks=1 00:19:56.456 --rc geninfo_unexecuted_blocks=1 00:19:56.456 00:19:56.456 ' 00:19:56.456 00:54:08 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:19:56.456 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:56.456 --rc genhtml_branch_coverage=1 00:19:56.456 --rc genhtml_function_coverage=1 00:19:56.456 --rc genhtml_legend=1 00:19:56.456 --rc geninfo_all_blocks=1 00:19:56.456 --rc geninfo_unexecuted_blocks=1 00:19:56.456 00:19:56.456 ' 00:19:56.456 00:54:08 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:19:56.456 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:56.456 --rc genhtml_branch_coverage=1 00:19:56.456 --rc genhtml_function_coverage=1 00:19:56.456 --rc genhtml_legend=1 00:19:56.456 --rc geninfo_all_blocks=1 00:19:56.456 --rc geninfo_unexecuted_blocks=1 00:19:56.456 00:19:56.456 ' 00:19:56.456 00:54:08 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:19:56.456 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:56.456 --rc genhtml_branch_coverage=1 00:19:56.456 --rc genhtml_function_coverage=1 00:19:56.456 --rc genhtml_legend=1 00:19:56.456 --rc geninfo_all_blocks=1 00:19:56.456 --rc geninfo_unexecuted_blocks=1 00:19:56.456 00:19:56.456 ' 00:19:56.456 00:54:08 -- host/dma.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:19:56.456 00:54:08 -- nvmf/common.sh@7 -- # uname -s 00:19:56.456 00:54:08 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:56.456 00:54:08 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:56.456 00:54:08 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:56.456 00:54:08 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:56.456 00:54:08 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:56.456 00:54:08 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:56.456 00:54:08 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:56.456 00:54:08 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:56.456 00:54:08 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:56.456 00:54:08 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:56.456 00:54:08 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:15939434-fa82-47b6-ae7c-b62ba203cee8 00:19:56.456 
00:54:08 -- nvmf/common.sh@18 -- # NVME_HOSTID=15939434-fa82-47b6-ae7c-b62ba203cee8 00:19:56.456 00:54:08 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:56.456 00:54:08 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:56.456 00:54:08 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:19:56.456 00:54:08 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:19:56.456 00:54:08 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:56.456 00:54:08 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:56.456 00:54:08 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:56.456 00:54:08 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:56.456 00:54:08 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:56.456 00:54:08 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:56.456 00:54:08 -- paths/export.sh@5 -- # export PATH 00:19:56.456 00:54:08 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:56.456 00:54:08 -- nvmf/common.sh@46 -- # : 0 00:19:56.456 00:54:08 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:19:56.456 00:54:08 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:19:56.456 00:54:08 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:19:56.456 00:54:08 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:56.456 00:54:08 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:56.456 00:54:08 -- nvmf/common.sh@32 -- # '[' -n '' ']' 
00:19:56.456 00:54:08 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:19:56.456 00:54:08 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:19:56.456 00:54:08 -- host/dma.sh@12 -- # '[' tcp '!=' rdma ']' 00:19:56.456 00:54:08 -- host/dma.sh@13 -- # exit 0 00:19:56.456 00:19:56.456 real 0m0.209s 00:19:56.456 user 0m0.132s 00:19:56.456 sys 0m0.088s 00:19:56.456 00:54:08 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:19:56.456 00:54:08 -- common/autotest_common.sh@10 -- # set +x 00:19:56.456 ************************************ 00:19:56.456 END TEST dma 00:19:56.456 ************************************ 00:19:56.456 00:54:08 -- nvmf/nvmf.sh@97 -- # run_test nvmf_identify /home/vagrant/spdk_repo/spdk/test/nvmf/host/identify.sh --transport=tcp 00:19:56.456 00:54:08 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:19:56.456 00:54:08 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:19:56.456 00:54:08 -- common/autotest_common.sh@10 -- # set +x 00:19:56.456 ************************************ 00:19:56.457 START TEST nvmf_identify 00:19:56.457 ************************************ 00:19:56.457 00:54:08 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/identify.sh --transport=tcp 00:19:56.457 * Looking for test storage... 00:19:56.457 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:19:56.457 00:54:08 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:19:56.457 00:54:08 -- common/autotest_common.sh@1690 -- # lcov --version 00:19:56.457 00:54:08 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:19:56.715 00:54:09 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:19:56.715 00:54:09 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:19:56.715 00:54:09 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:19:56.715 00:54:09 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:19:56.715 00:54:09 -- scripts/common.sh@335 -- # IFS=.-: 00:19:56.715 00:54:09 -- scripts/common.sh@335 -- # read -ra ver1 00:19:56.715 00:54:09 -- scripts/common.sh@336 -- # IFS=.-: 00:19:56.715 00:54:09 -- scripts/common.sh@336 -- # read -ra ver2 00:19:56.715 00:54:09 -- scripts/common.sh@337 -- # local 'op=<' 00:19:56.715 00:54:09 -- scripts/common.sh@339 -- # ver1_l=2 00:19:56.715 00:54:09 -- scripts/common.sh@340 -- # ver2_l=1 00:19:56.715 00:54:09 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:19:56.715 00:54:09 -- scripts/common.sh@343 -- # case "$op" in 00:19:56.715 00:54:09 -- scripts/common.sh@344 -- # : 1 00:19:56.715 00:54:09 -- scripts/common.sh@363 -- # (( v = 0 )) 00:19:56.715 00:54:09 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:19:56.715 00:54:09 -- scripts/common.sh@364 -- # decimal 1 00:19:56.715 00:54:09 -- scripts/common.sh@352 -- # local d=1 00:19:56.715 00:54:09 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:56.715 00:54:09 -- scripts/common.sh@354 -- # echo 1 00:19:56.715 00:54:09 -- scripts/common.sh@364 -- # ver1[v]=1 00:19:56.715 00:54:09 -- scripts/common.sh@365 -- # decimal 2 00:19:56.715 00:54:09 -- scripts/common.sh@352 -- # local d=2 00:19:56.715 00:54:09 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:19:56.715 00:54:09 -- scripts/common.sh@354 -- # echo 2 00:19:56.715 00:54:09 -- scripts/common.sh@365 -- # ver2[v]=2 00:19:56.715 00:54:09 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:19:56.715 00:54:09 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:19:56.715 00:54:09 -- scripts/common.sh@367 -- # return 0 00:19:56.715 00:54:09 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:19:56.715 00:54:09 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:19:56.715 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:56.715 --rc genhtml_branch_coverage=1 00:19:56.715 --rc genhtml_function_coverage=1 00:19:56.715 --rc genhtml_legend=1 00:19:56.715 --rc geninfo_all_blocks=1 00:19:56.715 --rc geninfo_unexecuted_blocks=1 00:19:56.715 00:19:56.715 ' 00:19:56.715 00:54:09 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:19:56.715 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:56.715 --rc genhtml_branch_coverage=1 00:19:56.715 --rc genhtml_function_coverage=1 00:19:56.715 --rc genhtml_legend=1 00:19:56.715 --rc geninfo_all_blocks=1 00:19:56.715 --rc geninfo_unexecuted_blocks=1 00:19:56.715 00:19:56.715 ' 00:19:56.715 00:54:09 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:19:56.715 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:56.715 --rc genhtml_branch_coverage=1 00:19:56.715 --rc genhtml_function_coverage=1 00:19:56.715 --rc genhtml_legend=1 00:19:56.715 --rc geninfo_all_blocks=1 00:19:56.715 --rc geninfo_unexecuted_blocks=1 00:19:56.715 00:19:56.715 ' 00:19:56.715 00:54:09 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:19:56.715 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:56.715 --rc genhtml_branch_coverage=1 00:19:56.715 --rc genhtml_function_coverage=1 00:19:56.715 --rc genhtml_legend=1 00:19:56.715 --rc geninfo_all_blocks=1 00:19:56.715 --rc geninfo_unexecuted_blocks=1 00:19:56.715 00:19:56.715 ' 00:19:56.715 00:54:09 -- host/identify.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:19:56.715 00:54:09 -- nvmf/common.sh@7 -- # uname -s 00:19:56.715 00:54:09 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:56.715 00:54:09 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:56.715 00:54:09 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:56.715 00:54:09 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:56.715 00:54:09 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:56.715 00:54:09 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:56.715 00:54:09 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:56.715 00:54:09 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:56.715 00:54:09 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:56.715 00:54:09 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:56.715 00:54:09 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:15939434-fa82-47b6-ae7c-b62ba203cee8 00:19:56.715 
00:54:09 -- nvmf/common.sh@18 -- # NVME_HOSTID=15939434-fa82-47b6-ae7c-b62ba203cee8 00:19:56.715 00:54:09 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:56.715 00:54:09 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:56.715 00:54:09 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:19:56.715 00:54:09 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:19:56.715 00:54:09 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:56.715 00:54:09 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:56.715 00:54:09 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:56.715 00:54:09 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:56.715 00:54:09 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:56.715 00:54:09 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:56.715 00:54:09 -- paths/export.sh@5 -- # export PATH 00:19:56.715 00:54:09 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:56.715 00:54:09 -- nvmf/common.sh@46 -- # : 0 00:19:56.715 00:54:09 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:19:56.715 00:54:09 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:19:56.715 00:54:09 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:19:56.715 00:54:09 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:56.715 00:54:09 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:56.715 00:54:09 -- nvmf/common.sh@32 -- # '[' -n '' ']' 
00:19:56.715 00:54:09 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:19:56.715 00:54:09 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:19:56.715 00:54:09 -- host/identify.sh@11 -- # MALLOC_BDEV_SIZE=64 00:19:56.715 00:54:09 -- host/identify.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:19:56.715 00:54:09 -- host/identify.sh@14 -- # nvmftestinit 00:19:56.715 00:54:09 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:19:56.715 00:54:09 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:56.715 00:54:09 -- nvmf/common.sh@436 -- # prepare_net_devs 00:19:56.715 00:54:09 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:19:56.715 00:54:09 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:19:56.715 00:54:09 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:56.715 00:54:09 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:56.715 00:54:09 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:56.716 00:54:09 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:19:56.716 00:54:09 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:19:56.716 00:54:09 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:19:56.716 00:54:09 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:19:56.716 00:54:09 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:19:56.716 00:54:09 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:19:56.716 00:54:09 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:56.716 00:54:09 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:56.716 00:54:09 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:19:56.716 00:54:09 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:19:56.716 00:54:09 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:19:56.716 00:54:09 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:19:56.716 00:54:09 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:19:56.716 00:54:09 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:56.716 00:54:09 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:19:56.716 00:54:09 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:19:56.716 00:54:09 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:19:56.716 00:54:09 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:19:56.716 00:54:09 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:19:56.716 00:54:09 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:19:56.716 Cannot find device "nvmf_tgt_br" 00:19:56.716 00:54:09 -- nvmf/common.sh@154 -- # true 00:19:56.716 00:54:09 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:19:56.716 Cannot find device "nvmf_tgt_br2" 00:19:56.716 00:54:09 -- nvmf/common.sh@155 -- # true 00:19:56.716 00:54:09 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:19:56.716 00:54:09 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:19:56.716 Cannot find device "nvmf_tgt_br" 00:19:56.716 00:54:09 -- nvmf/common.sh@157 -- # true 00:19:56.716 00:54:09 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:19:56.716 Cannot find device "nvmf_tgt_br2" 00:19:56.716 00:54:09 -- nvmf/common.sh@158 -- # true 00:19:56.716 00:54:09 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:19:56.716 00:54:09 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:19:56.716 00:54:09 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:19:56.716 Cannot open network namespace 
"nvmf_tgt_ns_spdk": No such file or directory 00:19:56.716 00:54:09 -- nvmf/common.sh@161 -- # true 00:19:56.716 00:54:09 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:19:56.716 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:19:56.716 00:54:09 -- nvmf/common.sh@162 -- # true 00:19:56.716 00:54:09 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:19:56.716 00:54:09 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:19:56.716 00:54:09 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:19:56.716 00:54:09 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:19:56.716 00:54:09 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:19:56.974 00:54:09 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:19:56.974 00:54:09 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:19:56.974 00:54:09 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:19:56.974 00:54:09 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:19:56.974 00:54:09 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:19:56.974 00:54:09 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:19:56.974 00:54:09 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:19:56.974 00:54:09 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:19:56.974 00:54:09 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:19:56.974 00:54:09 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:19:56.974 00:54:09 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:19:56.974 00:54:09 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:19:56.974 00:54:09 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:19:56.974 00:54:09 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:19:56.974 00:54:09 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:19:56.974 00:54:09 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:19:56.974 00:54:09 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:19:56.974 00:54:09 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:19:56.974 00:54:09 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:19:56.974 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:56.974 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.054 ms 00:19:56.974 00:19:56.974 --- 10.0.0.2 ping statistics --- 00:19:56.974 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:56.974 rtt min/avg/max/mdev = 0.054/0.054/0.054/0.000 ms 00:19:56.974 00:54:09 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:19:56.974 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:19:56.974 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.069 ms 00:19:56.974 00:19:56.975 --- 10.0.0.3 ping statistics --- 00:19:56.975 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:56.975 rtt min/avg/max/mdev = 0.069/0.069/0.069/0.000 ms 00:19:56.975 00:54:09 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:19:56.975 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:19:56.975 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.025 ms 00:19:56.975 00:19:56.975 --- 10.0.0.1 ping statistics --- 00:19:56.975 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:56.975 rtt min/avg/max/mdev = 0.025/0.025/0.025/0.000 ms 00:19:56.975 00:54:09 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:56.975 00:54:09 -- nvmf/common.sh@421 -- # return 0 00:19:56.975 00:54:09 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:19:56.975 00:54:09 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:56.975 00:54:09 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:19:56.975 00:54:09 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:19:56.975 00:54:09 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:56.975 00:54:09 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:19:56.975 00:54:09 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:19:56.975 00:54:09 -- host/identify.sh@16 -- # timing_enter start_nvmf_tgt 00:19:56.975 00:54:09 -- common/autotest_common.sh@722 -- # xtrace_disable 00:19:56.975 00:54:09 -- common/autotest_common.sh@10 -- # set +x 00:19:56.975 00:54:09 -- host/identify.sh@19 -- # nvmfpid=93606 00:19:56.975 00:54:09 -- host/identify.sh@18 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:19:56.975 00:54:09 -- host/identify.sh@21 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:19:56.975 00:54:09 -- host/identify.sh@23 -- # waitforlisten 93606 00:19:56.975 00:54:09 -- common/autotest_common.sh@829 -- # '[' -z 93606 ']' 00:19:56.975 00:54:09 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:56.975 00:54:09 -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:56.975 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:56.975 00:54:09 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:56.975 00:54:09 -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:56.975 00:54:09 -- common/autotest_common.sh@10 -- # set +x 00:19:56.975 [2024-12-03 00:54:09.454974] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:19:56.975 [2024-12-03 00:54:09.455050] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:57.233 [2024-12-03 00:54:09.581298] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:19:57.233 [2024-12-03 00:54:09.644351] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:19:57.233 [2024-12-03 00:54:09.644509] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:57.233 [2024-12-03 00:54:09.644522] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:57.233 [2024-12-03 00:54:09.644531] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
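For anyone reproducing this test network outside the harness, the nvmf_veth_init records above correspond to roughly the following sequence (a condensed sketch reconstructed from the commands logged above; interface, namespace, and address names are taken from this log, and exact flags or ordering may differ between SPDK releases):

# create the target namespace and the three veth pairs used by the test
ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
# move the target-side ends into the namespace and assign the 10.0.0.0/24 addresses
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
# bring everything up and bridge the host-side peers together
ip link set nvmf_init_if up; ip link set nvmf_init_br up
ip link set nvmf_tgt_br up; ip link set nvmf_tgt_br2 up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up
ip link add nvmf_br type bridge
ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br master nvmf_br
ip link set nvmf_tgt_br2 master nvmf_br
# allow NVMe/TCP traffic to port 4420 and forwarding across the bridge, then verify connectivity
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
ping -c 1 10.0.0.2 && ping -c 1 10.0.0.3
ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1

The three pings mirror the connectivity checks logged above, and the target itself is launched inside the namespace (ip netns exec nvmf_tgt_ns_spdk .../nvmf_tgt -i 0 -e 0xFFFF -m 0xF, as shown just above), which is the process the waitforlisten step waits on.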
00:19:57.233 [2024-12-03 00:54:09.644701] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:19:57.233 [2024-12-03 00:54:09.645000] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:19:57.233 [2024-12-03 00:54:09.645452] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:19:57.233 [2024-12-03 00:54:09.645455] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:19:58.168 00:54:10 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:58.168 00:54:10 -- common/autotest_common.sh@862 -- # return 0 00:19:58.168 00:54:10 -- host/identify.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:19:58.168 00:54:10 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:58.168 00:54:10 -- common/autotest_common.sh@10 -- # set +x 00:19:58.168 [2024-12-03 00:54:10.509368] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:58.169 00:54:10 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:58.169 00:54:10 -- host/identify.sh@25 -- # timing_exit start_nvmf_tgt 00:19:58.169 00:54:10 -- common/autotest_common.sh@728 -- # xtrace_disable 00:19:58.169 00:54:10 -- common/autotest_common.sh@10 -- # set +x 00:19:58.169 00:54:10 -- host/identify.sh@27 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:19:58.169 00:54:10 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:58.169 00:54:10 -- common/autotest_common.sh@10 -- # set +x 00:19:58.169 Malloc0 00:19:58.169 00:54:10 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:58.169 00:54:10 -- host/identify.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:19:58.169 00:54:10 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:58.169 00:54:10 -- common/autotest_common.sh@10 -- # set +x 00:19:58.169 00:54:10 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:58.169 00:54:10 -- host/identify.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789 00:19:58.169 00:54:10 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:58.169 00:54:10 -- common/autotest_common.sh@10 -- # set +x 00:19:58.169 00:54:10 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:58.169 00:54:10 -- host/identify.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:19:58.169 00:54:10 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:58.169 00:54:10 -- common/autotest_common.sh@10 -- # set +x 00:19:58.169 [2024-12-03 00:54:10.615724] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:58.169 00:54:10 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:58.169 00:54:10 -- host/identify.sh@35 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:19:58.169 00:54:10 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:58.169 00:54:10 -- common/autotest_common.sh@10 -- # set +x 00:19:58.169 00:54:10 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:58.169 00:54:10 -- host/identify.sh@37 -- # rpc_cmd nvmf_get_subsystems 00:19:58.169 00:54:10 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:58.169 00:54:10 -- common/autotest_common.sh@10 -- # set +x 00:19:58.169 [2024-12-03 00:54:10.631505] nvmf_rpc.c: 275:rpc_nvmf_get_subsystems: *WARNING*: rpc_nvmf_get_subsystems: deprecated feature listener.transport is deprecated in favor of trtype to be removed in v24.05 00:19:58.169 [ 
00:19:58.169 { 00:19:58.169 "allow_any_host": true, 00:19:58.169 "hosts": [], 00:19:58.169 "listen_addresses": [ 00:19:58.169 { 00:19:58.169 "adrfam": "IPv4", 00:19:58.169 "traddr": "10.0.0.2", 00:19:58.169 "transport": "TCP", 00:19:58.169 "trsvcid": "4420", 00:19:58.169 "trtype": "TCP" 00:19:58.169 } 00:19:58.169 ], 00:19:58.169 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:19:58.169 "subtype": "Discovery" 00:19:58.169 }, 00:19:58.169 { 00:19:58.169 "allow_any_host": true, 00:19:58.169 "hosts": [], 00:19:58.169 "listen_addresses": [ 00:19:58.169 { 00:19:58.169 "adrfam": "IPv4", 00:19:58.169 "traddr": "10.0.0.2", 00:19:58.169 "transport": "TCP", 00:19:58.169 "trsvcid": "4420", 00:19:58.169 "trtype": "TCP" 00:19:58.169 } 00:19:58.169 ], 00:19:58.169 "max_cntlid": 65519, 00:19:58.169 "max_namespaces": 32, 00:19:58.169 "min_cntlid": 1, 00:19:58.169 "model_number": "SPDK bdev Controller", 00:19:58.169 "namespaces": [ 00:19:58.169 { 00:19:58.169 "bdev_name": "Malloc0", 00:19:58.169 "eui64": "ABCDEF0123456789", 00:19:58.169 "name": "Malloc0", 00:19:58.169 "nguid": "ABCDEF0123456789ABCDEF0123456789", 00:19:58.169 "nsid": 1, 00:19:58.169 "uuid": "fe1bb9c3-3bdf-49f6-b1f3-b78e1bdbc037" 00:19:58.169 } 00:19:58.169 ], 00:19:58.169 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:58.169 "serial_number": "SPDK00000000000001", 00:19:58.169 "subtype": "NVMe" 00:19:58.169 } 00:19:58.169 ] 00:19:58.169 00:54:10 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:58.169 00:54:10 -- host/identify.sh@39 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' -L all 00:19:58.169 [2024-12-03 00:54:10.671363] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
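The subsystem reported in the JSON above, and advertised by the discovery output that follows, was provisioned by the rpc_cmd calls shown in the preceding records. A minimal sketch of the same sequence issued directly with scripts/rpc.py (the harness's rpc_cmd wrapper is effectively equivalent; the paths, NQNs, and addresses below are the ones appearing in this log):

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
# transport, backing malloc bdev, subsystem, namespace, and data/discovery listeners on 10.0.0.2:4420
$rpc nvmf_create_transport -t tcp -o -u 8192
$rpc bdev_malloc_create 64 512 -b Malloc0
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
$rpc nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
$rpc nvmf_get_subsystems
# query the discovery subsystem the same way identify.sh does
/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' -L all

The -L all flag on spdk_nvme_identify is what produces the verbose nvme_tcp/nvme_ctrlr DEBUG traces interleaved with the controller report in the records that follow.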
00:19:58.169 [2024-12-03 00:54:10.671446] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid93659 ] 00:19:58.430 [2024-12-03 00:54:10.811647] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to connect adminq (no timeout) 00:19:58.430 [2024-12-03 00:54:10.811717] nvme_tcp.c:2244:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:19:58.430 [2024-12-03 00:54:10.811723] nvme_tcp.c:2248:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:19:58.430 [2024-12-03 00:54:10.811732] nvme_tcp.c:2266:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:19:58.430 [2024-12-03 00:54:10.811741] sock.c: 334:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:19:58.430 [2024-12-03 00:54:10.811881] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for connect adminq (no timeout) 00:19:58.430 [2024-12-03 00:54:10.811962] nvme_tcp.c:1487:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x22c2510 0 00:19:58.430 [2024-12-03 00:54:10.818482] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:19:58.430 [2024-12-03 00:54:10.818501] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:19:58.430 [2024-12-03 00:54:10.818507] nvme_tcp.c:1533:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:19:58.430 [2024-12-03 00:54:10.818510] nvme_tcp.c:1534:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:19:58.430 [2024-12-03 00:54:10.818553] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:58.430 [2024-12-03 00:54:10.818559] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:58.430 [2024-12-03 00:54:10.818563] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x22c2510) 00:19:58.430 [2024-12-03 00:54:10.818575] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:19:58.430 [2024-12-03 00:54:10.818604] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x230e8a0, cid 0, qid 0 00:19:58.430 [2024-12-03 00:54:10.825495] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:58.430 [2024-12-03 00:54:10.825510] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:58.430 [2024-12-03 00:54:10.825515] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:58.430 [2024-12-03 00:54:10.825520] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x230e8a0) on tqpair=0x22c2510 00:19:58.430 [2024-12-03 00:54:10.825535] nvme_fabric.c: 620:nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:19:58.430 [2024-12-03 00:54:10.825543] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs (no timeout) 00:19:58.430 [2024-12-03 00:54:10.825549] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs wait for vs (no timeout) 00:19:58.430 [2024-12-03 00:54:10.825564] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:58.430 [2024-12-03 00:54:10.825568] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:58.430 [2024-12-03 
00:54:10.825572] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x22c2510) 00:19:58.430 [2024-12-03 00:54:10.825580] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.430 [2024-12-03 00:54:10.825607] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x230e8a0, cid 0, qid 0 00:19:58.430 [2024-12-03 00:54:10.825700] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:58.430 [2024-12-03 00:54:10.825707] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:58.430 [2024-12-03 00:54:10.825710] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:58.430 [2024-12-03 00:54:10.825714] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x230e8a0) on tqpair=0x22c2510 00:19:58.430 [2024-12-03 00:54:10.825720] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap (no timeout) 00:19:58.430 [2024-12-03 00:54:10.825727] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap wait for cap (no timeout) 00:19:58.430 [2024-12-03 00:54:10.825734] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:58.430 [2024-12-03 00:54:10.825738] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:58.430 [2024-12-03 00:54:10.825741] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x22c2510) 00:19:58.430 [2024-12-03 00:54:10.825748] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.430 [2024-12-03 00:54:10.825765] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x230e8a0, cid 0, qid 0 00:19:58.430 [2024-12-03 00:54:10.825829] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:58.430 [2024-12-03 00:54:10.825835] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:58.430 [2024-12-03 00:54:10.825838] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:58.430 [2024-12-03 00:54:10.825842] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x230e8a0) on tqpair=0x22c2510 00:19:58.430 [2024-12-03 00:54:10.825862] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en (no timeout) 00:19:58.430 [2024-12-03 00:54:10.825870] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en wait for cc (timeout 15000 ms) 00:19:58.430 [2024-12-03 00:54:10.825877] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:58.430 [2024-12-03 00:54:10.825880] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:58.430 [2024-12-03 00:54:10.825884] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x22c2510) 00:19:58.430 [2024-12-03 00:54:10.825890] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.430 [2024-12-03 00:54:10.825906] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x230e8a0, cid 0, qid 0 00:19:58.430 [2024-12-03 00:54:10.825966] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:58.430 [2024-12-03 00:54:10.825972] 
nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:58.430 [2024-12-03 00:54:10.825975] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:58.430 [2024-12-03 00:54:10.825978] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x230e8a0) on tqpair=0x22c2510 00:19:58.430 [2024-12-03 00:54:10.825984] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:19:58.430 [2024-12-03 00:54:10.825993] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:58.430 [2024-12-03 00:54:10.825998] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:58.430 [2024-12-03 00:54:10.826001] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x22c2510) 00:19:58.430 [2024-12-03 00:54:10.826007] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.430 [2024-12-03 00:54:10.826023] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x230e8a0, cid 0, qid 0 00:19:58.430 [2024-12-03 00:54:10.826076] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:58.430 [2024-12-03 00:54:10.826082] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:58.430 [2024-12-03 00:54:10.826085] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:58.430 [2024-12-03 00:54:10.826088] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x230e8a0) on tqpair=0x22c2510 00:19:58.430 [2024-12-03 00:54:10.826103] nvme_ctrlr.c:3737:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 0 && CSTS.RDY = 0 00:19:58.430 [2024-12-03 00:54:10.826127] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to controller is disabled (timeout 15000 ms) 00:19:58.430 [2024-12-03 00:54:10.826138] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:19:58.430 [2024-12-03 00:54:10.826245] nvme_ctrlr.c:3930:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Setting CC.EN = 1 00:19:58.430 [2024-12-03 00:54:10.826272] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:19:58.430 [2024-12-03 00:54:10.826282] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:58.430 [2024-12-03 00:54:10.826286] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:58.430 [2024-12-03 00:54:10.826289] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x22c2510) 00:19:58.430 [2024-12-03 00:54:10.826297] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.430 [2024-12-03 00:54:10.826317] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x230e8a0, cid 0, qid 0 00:19:58.430 [2024-12-03 00:54:10.826382] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:58.430 [2024-12-03 00:54:10.826388] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:58.430 [2024-12-03 00:54:10.826392] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 
00:19:58.430 [2024-12-03 00:54:10.826395] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x230e8a0) on tqpair=0x22c2510 00:19:58.430 [2024-12-03 00:54:10.826402] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:19:58.430 [2024-12-03 00:54:10.826438] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:58.430 [2024-12-03 00:54:10.826460] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:58.430 [2024-12-03 00:54:10.826463] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x22c2510) 00:19:58.430 [2024-12-03 00:54:10.826470] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.430 [2024-12-03 00:54:10.826504] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x230e8a0, cid 0, qid 0 00:19:58.430 [2024-12-03 00:54:10.826562] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:58.430 [2024-12-03 00:54:10.826568] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:58.430 [2024-12-03 00:54:10.826571] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:58.430 [2024-12-03 00:54:10.826575] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x230e8a0) on tqpair=0x22c2510 00:19:58.430 [2024-12-03 00:54:10.826580] nvme_ctrlr.c:3772:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:19:58.430 [2024-12-03 00:54:10.826585] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to reset admin queue (timeout 30000 ms) 00:19:58.430 [2024-12-03 00:54:10.826592] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to identify controller (no timeout) 00:19:58.430 [2024-12-03 00:54:10.826607] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for identify controller (timeout 30000 ms) 00:19:58.430 [2024-12-03 00:54:10.826617] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:58.430 [2024-12-03 00:54:10.826621] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:58.430 [2024-12-03 00:54:10.826639] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x22c2510) 00:19:58.430 [2024-12-03 00:54:10.826647] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.430 [2024-12-03 00:54:10.826665] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x230e8a0, cid 0, qid 0 00:19:58.430 [2024-12-03 00:54:10.826764] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:19:58.430 [2024-12-03 00:54:10.826771] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:19:58.430 [2024-12-03 00:54:10.826775] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:19:58.430 [2024-12-03 00:54:10.826779] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x22c2510): datao=0, datal=4096, cccid=0 00:19:58.430 [2024-12-03 00:54:10.826783] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x230e8a0) on tqpair(0x22c2510): expected_datao=0, 
payload_size=4096 00:19:58.430 [2024-12-03 00:54:10.826791] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:19:58.430 [2024-12-03 00:54:10.826796] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:19:58.430 [2024-12-03 00:54:10.826803] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:58.430 [2024-12-03 00:54:10.826823] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:58.430 [2024-12-03 00:54:10.826827] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:58.430 [2024-12-03 00:54:10.826830] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x230e8a0) on tqpair=0x22c2510 00:19:58.430 [2024-12-03 00:54:10.826839] nvme_ctrlr.c:1972:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_xfer_size 4294967295 00:19:58.430 [2024-12-03 00:54:10.826844] nvme_ctrlr.c:1976:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] MDTS max_xfer_size 131072 00:19:58.430 [2024-12-03 00:54:10.826848] nvme_ctrlr.c:1979:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CNTLID 0x0001 00:19:58.430 [2024-12-03 00:54:10.826863] nvme_ctrlr.c:2003:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_sges 16 00:19:58.430 [2024-12-03 00:54:10.826867] nvme_ctrlr.c:2018:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] fuses compare and write: 1 00:19:58.430 [2024-12-03 00:54:10.826872] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to configure AER (timeout 30000 ms) 00:19:58.430 [2024-12-03 00:54:10.826884] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for configure aer (timeout 30000 ms) 00:19:58.430 [2024-12-03 00:54:10.826892] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:58.430 [2024-12-03 00:54:10.826895] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:58.430 [2024-12-03 00:54:10.826899] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x22c2510) 00:19:58.430 [2024-12-03 00:54:10.826906] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:19:58.430 [2024-12-03 00:54:10.826925] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x230e8a0, cid 0, qid 0 00:19:58.430 [2024-12-03 00:54:10.827015] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:58.430 [2024-12-03 00:54:10.827021] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:58.430 [2024-12-03 00:54:10.827024] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:58.430 [2024-12-03 00:54:10.827028] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x230e8a0) on tqpair=0x22c2510 00:19:58.430 [2024-12-03 00:54:10.827036] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:58.430 [2024-12-03 00:54:10.827040] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:58.430 [2024-12-03 00:54:10.827043] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x22c2510) 00:19:58.430 [2024-12-03 00:54:10.827049] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:19:58.430 [2024-12-03 
00:54:10.827055] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:58.430 [2024-12-03 00:54:10.827058] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:58.430 [2024-12-03 00:54:10.827062] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x22c2510) 00:19:58.430 [2024-12-03 00:54:10.827067] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:19:58.430 [2024-12-03 00:54:10.827073] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:58.430 [2024-12-03 00:54:10.827076] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:58.430 [2024-12-03 00:54:10.827080] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x22c2510) 00:19:58.430 [2024-12-03 00:54:10.827085] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:19:58.430 [2024-12-03 00:54:10.827090] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:58.430 [2024-12-03 00:54:10.827094] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:58.430 [2024-12-03 00:54:10.827097] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x22c2510) 00:19:58.431 [2024-12-03 00:54:10.827102] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:19:58.431 [2024-12-03 00:54:10.827107] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to set keep alive timeout (timeout 30000 ms) 00:19:58.431 [2024-12-03 00:54:10.827119] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:19:58.431 [2024-12-03 00:54:10.827126] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:58.431 [2024-12-03 00:54:10.827129] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:58.431 [2024-12-03 00:54:10.827133] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x22c2510) 00:19:58.431 [2024-12-03 00:54:10.827139] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.431 [2024-12-03 00:54:10.827158] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x230e8a0, cid 0, qid 0 00:19:58.431 [2024-12-03 00:54:10.827164] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x230ea00, cid 1, qid 0 00:19:58.431 [2024-12-03 00:54:10.827168] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x230eb60, cid 2, qid 0 00:19:58.431 [2024-12-03 00:54:10.827173] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x230ecc0, cid 3, qid 0 00:19:58.431 [2024-12-03 00:54:10.827177] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x230ee20, cid 4, qid 0 00:19:58.431 [2024-12-03 00:54:10.827278] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:58.431 [2024-12-03 00:54:10.827284] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:58.431 [2024-12-03 00:54:10.827287] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:58.431 [2024-12-03 00:54:10.827291] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: 
*DEBUG*: complete tcp_req(0x230ee20) on tqpair=0x22c2510 00:19:58.431 [2024-12-03 00:54:10.827297] nvme_ctrlr.c:2890:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Sending keep alive every 5000000 us 00:19:58.431 [2024-12-03 00:54:10.827302] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to ready (no timeout) 00:19:58.431 [2024-12-03 00:54:10.827312] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:58.431 [2024-12-03 00:54:10.827316] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:58.431 [2024-12-03 00:54:10.827320] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x22c2510) 00:19:58.431 [2024-12-03 00:54:10.827326] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.431 [2024-12-03 00:54:10.827342] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x230ee20, cid 4, qid 0 00:19:58.431 [2024-12-03 00:54:10.827419] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:19:58.431 [2024-12-03 00:54:10.827425] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:19:58.431 [2024-12-03 00:54:10.827429] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:19:58.431 [2024-12-03 00:54:10.827432] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x22c2510): datao=0, datal=4096, cccid=4 00:19:58.431 [2024-12-03 00:54:10.827436] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x230ee20) on tqpair(0x22c2510): expected_datao=0, payload_size=4096 00:19:58.431 [2024-12-03 00:54:10.827456] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:19:58.431 [2024-12-03 00:54:10.827461] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:19:58.431 [2024-12-03 00:54:10.827470] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:58.431 [2024-12-03 00:54:10.827475] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:58.431 [2024-12-03 00:54:10.827478] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:58.431 [2024-12-03 00:54:10.827482] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x230ee20) on tqpair=0x22c2510 00:19:58.431 [2024-12-03 00:54:10.827495] nvme_ctrlr.c:4024:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Ctrlr already in ready state 00:19:58.431 [2024-12-03 00:54:10.827554] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:58.431 [2024-12-03 00:54:10.827563] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:58.431 [2024-12-03 00:54:10.827567] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x22c2510) 00:19:58.431 [2024-12-03 00:54:10.827574] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.431 [2024-12-03 00:54:10.827582] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:58.431 [2024-12-03 00:54:10.827585] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:58.431 [2024-12-03 00:54:10.827588] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x22c2510) 00:19:58.431 [2024-12-03 00:54:10.827594] nvme_qpair.c: 
223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:19:58.431 [2024-12-03 00:54:10.827623] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x230ee20, cid 4, qid 0 00:19:58.431 [2024-12-03 00:54:10.827631] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x230ef80, cid 5, qid 0 00:19:58.431 [2024-12-03 00:54:10.827752] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:19:58.431 [2024-12-03 00:54:10.827758] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:19:58.431 [2024-12-03 00:54:10.827763] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:19:58.431 [2024-12-03 00:54:10.827767] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x22c2510): datao=0, datal=1024, cccid=4 00:19:58.431 [2024-12-03 00:54:10.827771] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x230ee20) on tqpair(0x22c2510): expected_datao=0, payload_size=1024 00:19:58.431 [2024-12-03 00:54:10.827778] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:19:58.431 [2024-12-03 00:54:10.827782] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:19:58.431 [2024-12-03 00:54:10.827787] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:58.431 [2024-12-03 00:54:10.827792] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:58.431 [2024-12-03 00:54:10.827795] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:58.431 [2024-12-03 00:54:10.827799] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x230ef80) on tqpair=0x22c2510 00:19:58.431 [2024-12-03 00:54:10.870475] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:58.431 [2024-12-03 00:54:10.870494] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:58.431 [2024-12-03 00:54:10.870514] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:58.431 [2024-12-03 00:54:10.870518] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x230ee20) on tqpair=0x22c2510 00:19:58.431 [2024-12-03 00:54:10.870532] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:58.431 [2024-12-03 00:54:10.870536] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:58.431 [2024-12-03 00:54:10.870539] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x22c2510) 00:19:58.431 [2024-12-03 00:54:10.870547] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:02ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.431 [2024-12-03 00:54:10.870577] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x230ee20, cid 4, qid 0 00:19:58.431 [2024-12-03 00:54:10.870667] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:19:58.431 [2024-12-03 00:54:10.870673] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:19:58.431 [2024-12-03 00:54:10.870676] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:19:58.431 [2024-12-03 00:54:10.870679] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x22c2510): datao=0, datal=3072, cccid=4 00:19:58.431 [2024-12-03 00:54:10.870683] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x230ee20) on tqpair(0x22c2510): expected_datao=0, payload_size=3072 00:19:58.431 [2024-12-03 
00:54:10.870690] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:19:58.431 [2024-12-03 00:54:10.870694] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:19:58.431 [2024-12-03 00:54:10.870701] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:58.431 [2024-12-03 00:54:10.870706] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:58.431 [2024-12-03 00:54:10.870709] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:58.431 [2024-12-03 00:54:10.870713] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x230ee20) on tqpair=0x22c2510 00:19:58.431 [2024-12-03 00:54:10.870722] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:58.431 [2024-12-03 00:54:10.870741] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:58.431 [2024-12-03 00:54:10.870745] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x22c2510) 00:19:58.431 [2024-12-03 00:54:10.870767] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00010070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.431 [2024-12-03 00:54:10.870790] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x230ee20, cid 4, qid 0 00:19:58.431 [2024-12-03 00:54:10.870880] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:19:58.431 [2024-12-03 00:54:10.870886] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:19:58.431 [2024-12-03 00:54:10.870890] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:19:58.431 [2024-12-03 00:54:10.870893] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x22c2510): datao=0, datal=8, cccid=4 00:19:58.431 [2024-12-03 00:54:10.870897] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x230ee20) on tqpair(0x22c2510): expected_datao=0, payload_size=8 00:19:58.431 [2024-12-03 00:54:10.870904] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:19:58.431 [2024-12-03 00:54:10.870908] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:19:58.431 ===================================================== 00:19:58.431 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2014-08.org.nvmexpress.discovery 00:19:58.431 ===================================================== 00:19:58.431 Controller Capabilities/Features 00:19:58.431 ================================ 00:19:58.431 Vendor ID: 0000 00:19:58.431 Subsystem Vendor ID: 0000 00:19:58.431 Serial Number: .................... 00:19:58.431 Model Number: ........................................ 
00:19:58.431 Firmware Version: 24.01.1 00:19:58.431 Recommended Arb Burst: 0 00:19:58.431 IEEE OUI Identifier: 00 00 00 00:19:58.431 Multi-path I/O 00:19:58.431 May have multiple subsystem ports: No 00:19:58.431 May have multiple controllers: No 00:19:58.431 Associated with SR-IOV VF: No 00:19:58.431 Max Data Transfer Size: 131072 00:19:58.431 Max Number of Namespaces: 0 00:19:58.431 Max Number of I/O Queues: 1024 00:19:58.431 NVMe Specification Version (VS): 1.3 00:19:58.431 NVMe Specification Version (Identify): 1.3 00:19:58.431 Maximum Queue Entries: 128 00:19:58.431 Contiguous Queues Required: Yes 00:19:58.431 Arbitration Mechanisms Supported 00:19:58.431 Weighted Round Robin: Not Supported 00:19:58.431 Vendor Specific: Not Supported 00:19:58.431 Reset Timeout: 15000 ms 00:19:58.431 Doorbell Stride: 4 bytes 00:19:58.431 NVM Subsystem Reset: Not Supported 00:19:58.431 Command Sets Supported 00:19:58.431 NVM Command Set: Supported 00:19:58.431 Boot Partition: Not Supported 00:19:58.431 Memory Page Size Minimum: 4096 bytes 00:19:58.431 Memory Page Size Maximum: 4096 bytes 00:19:58.431 Persistent Memory Region: Not Supported 00:19:58.431 Optional Asynchronous Events Supported 00:19:58.431 Namespace Attribute Notices: Not Supported 00:19:58.431 Firmware Activation Notices: Not Supported 00:19:58.431 ANA Change Notices: Not Supported 00:19:58.431 PLE Aggregate Log Change Notices: Not Supported 00:19:58.431 LBA Status Info Alert Notices: Not Supported 00:19:58.431 EGE Aggregate Log Change Notices: Not Supported 00:19:58.431 Normal NVM Subsystem Shutdown event: Not Supported 00:19:58.431 Zone Descriptor Change Notices: Not Supported 00:19:58.431 Discovery Log Change Notices: Supported 00:19:58.431 Controller Attributes 00:19:58.431 128-bit Host Identifier: Not Supported 00:19:58.431 Non-Operational Permissive Mode: Not Supported 00:19:58.431 NVM Sets: Not Supported 00:19:58.431 Read Recovery Levels: Not Supported 00:19:58.431 Endurance Groups: Not Supported 00:19:58.431 Predictable Latency Mode: Not Supported 00:19:58.431 Traffic Based Keep ALive: Not Supported 00:19:58.431 Namespace Granularity: Not Supported 00:19:58.431 SQ Associations: Not Supported 00:19:58.431 UUID List: Not Supported 00:19:58.431 Multi-Domain Subsystem: Not Supported 00:19:58.431 Fixed Capacity Management: Not Supported 00:19:58.431 Variable Capacity Management: Not Supported 00:19:58.431 Delete Endurance Group: Not Supported 00:19:58.431 Delete NVM Set: Not Supported 00:19:58.431 Extended LBA Formats Supported: Not Supported 00:19:58.431 Flexible Data Placement Supported: Not Supported 00:19:58.431 00:19:58.431 Controller Memory Buffer Support 00:19:58.431 ================================ 00:19:58.431 Supported: No 00:19:58.431 00:19:58.431 Persistent Memory Region Support 00:19:58.431 ================================ 00:19:58.431 Supported: No 00:19:58.431 00:19:58.431 Admin Command Set Attributes 00:19:58.431 ============================ 00:19:58.431 Security Send/Receive: Not Supported 00:19:58.431 Format NVM: Not Supported 00:19:58.431 Firmware Activate/Download: Not Supported 00:19:58.431 Namespace Management: Not Supported 00:19:58.431 Device Self-Test: Not Supported 00:19:58.431 Directives: Not Supported 00:19:58.431 NVMe-MI: Not Supported 00:19:58.431 Virtualization Management: Not Supported 00:19:58.431 Doorbell Buffer Config: Not Supported 00:19:58.431 Get LBA Status Capability: Not Supported 00:19:58.431 Command & Feature Lockdown Capability: Not Supported 00:19:58.431 Abort Command Limit: 1 00:19:58.431 
Async Event Request Limit: 4 00:19:58.431 Number of Firmware Slots: N/A 00:19:58.431 Firmware Slot 1 Read-Only: N/A 00:19:58.431 [2024-12-03 00:54:10.912550] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:58.431 [2024-12-03 00:54:10.912570] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:58.431 [2024-12-03 00:54:10.912591] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:58.431 [2024-12-03 00:54:10.912595] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x230ee20) on tqpair=0x22c2510 00:19:58.431 Firmware Activation Without Reset: N/A 00:19:58.431 Multiple Update Detection Support: N/A 00:19:58.431 Firmware Update Granularity: No Information Provided 00:19:58.431 Per-Namespace SMART Log: No 00:19:58.431 Asymmetric Namespace Access Log Page: Not Supported 00:19:58.431 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:19:58.431 Command Effects Log Page: Not Supported 00:19:58.431 Get Log Page Extended Data: Supported 00:19:58.431 Telemetry Log Pages: Not Supported 00:19:58.431 Persistent Event Log Pages: Not Supported 00:19:58.431 Supported Log Pages Log Page: May Support 00:19:58.431 Commands Supported & Effects Log Page: Not Supported 00:19:58.431 Feature Identifiers & Effects Log Page:May Support 00:19:58.431 NVMe-MI Commands & Effects Log Page: May Support 00:19:58.431 Data Area 4 for Telemetry Log: Not Supported 00:19:58.431 Error Log Page Entries Supported: 128 00:19:58.431 Keep Alive: Not Supported 00:19:58.431 00:19:58.431 NVM Command Set Attributes 00:19:58.431 ========================== 00:19:58.431 Submission Queue Entry Size 00:19:58.431 Max: 1 00:19:58.431 Min: 1 00:19:58.431 Completion Queue Entry Size 00:19:58.431 Max: 1 00:19:58.431 Min: 1 00:19:58.431 Number of Namespaces: 0 00:19:58.431 Compare Command: Not Supported 00:19:58.431 Write Uncorrectable Command: Not Supported 00:19:58.431 Dataset Management Command: Not Supported 00:19:58.431 Write Zeroes Command: Not Supported 00:19:58.431 Set Features Save Field: Not Supported 00:19:58.431 Reservations: Not Supported 00:19:58.431 Timestamp: Not Supported 00:19:58.431 Copy: Not Supported 00:19:58.431 Volatile Write Cache: Not Present 00:19:58.431 Atomic Write Unit (Normal): 1 00:19:58.431 Atomic Write Unit (PFail): 1 00:19:58.431 Atomic Compare & Write Unit: 1 00:19:58.431 Fused Compare & Write: Supported 00:19:58.431 Scatter-Gather List 00:19:58.431 SGL Command Set: Supported 00:19:58.431 SGL Keyed: Supported 00:19:58.431 SGL Bit Bucket Descriptor: Not Supported 00:19:58.431 SGL Metadata Pointer: Not Supported 00:19:58.431 Oversized SGL: Not Supported 00:19:58.431 SGL Metadata Address: Not Supported 00:19:58.431 SGL Offset: Supported 00:19:58.431 Transport SGL Data Block: Not Supported 00:19:58.431 Replay Protected Memory Block: Not Supported 00:19:58.431 00:19:58.431 Firmware Slot Information 00:19:58.431 ========================= 00:19:58.431 Active slot: 0 00:19:58.431 00:19:58.431 00:19:58.431 Error Log 00:19:58.431 ========= 00:19:58.431 00:19:58.431 Active Namespaces 00:19:58.431 ================= 00:19:58.431 Discovery Log Page 00:19:58.431 ================== 00:19:58.431 Generation Counter: 2 00:19:58.431 Number of Records: 2 00:19:58.431 Record Format: 0 00:19:58.431 00:19:58.431 Discovery Log Entry 0 00:19:58.431 ---------------------- 00:19:58.431 Transport Type: 3 (TCP) 00:19:58.431 Address Family: 1 (IPv4) 00:19:58.431 Subsystem Type: 3 (Current Discovery Subsystem) 00:19:58.431 Entry Flags: 00:19:58.431 Duplicate
Returned Information: 1 00:19:58.431 Explicit Persistent Connection Support for Discovery: 1 00:19:58.431 Transport Requirements: 00:19:58.431 Secure Channel: Not Required 00:19:58.431 Port ID: 0 (0x0000) 00:19:58.431 Controller ID: 65535 (0xffff) 00:19:58.431 Admin Max SQ Size: 128 00:19:58.431 Transport Service Identifier: 4420 00:19:58.431 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:19:58.431 Transport Address: 10.0.0.2 00:19:58.431 Discovery Log Entry 1 00:19:58.431 ---------------------- 00:19:58.431 Transport Type: 3 (TCP) 00:19:58.431 Address Family: 1 (IPv4) 00:19:58.431 Subsystem Type: 2 (NVM Subsystem) 00:19:58.431 Entry Flags: 00:19:58.431 Duplicate Returned Information: 0 00:19:58.431 Explicit Persistent Connection Support for Discovery: 0 00:19:58.431 Transport Requirements: 00:19:58.431 Secure Channel: Not Required 00:19:58.431 Port ID: 0 (0x0000) 00:19:58.431 Controller ID: 65535 (0xffff) 00:19:58.431 Admin Max SQ Size: 128 00:19:58.431 Transport Service Identifier: 4420 00:19:58.431 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:cnode1 00:19:58.431 Transport Address: 10.0.0.2 [2024-12-03 00:54:10.912706] nvme_ctrlr.c:4220:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Prepare to destruct SSD 00:19:58.431 [2024-12-03 00:54:10.912723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:58.431 [2024-12-03 00:54:10.912730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:58.431 [2024-12-03 00:54:10.912735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:58.431 [2024-12-03 00:54:10.912741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:58.431 [2024-12-03 00:54:10.912750] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:58.432 [2024-12-03 00:54:10.912754] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:58.432 [2024-12-03 00:54:10.912757] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x22c2510) 00:19:58.432 [2024-12-03 00:54:10.912765] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.432 [2024-12-03 00:54:10.912789] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x230ecc0, cid 3, qid 0 00:19:58.432 [2024-12-03 00:54:10.912859] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:58.432 [2024-12-03 00:54:10.912866] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:58.432 [2024-12-03 00:54:10.912869] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:58.432 [2024-12-03 00:54:10.912873] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x230ecc0) on tqpair=0x22c2510 00:19:58.432 [2024-12-03 00:54:10.912881] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:58.432 [2024-12-03 00:54:10.912885] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:58.432 [2024-12-03 00:54:10.912888] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x22c2510) 00:19:58.432 [2024-12-03 00:54:10.912895] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY 
SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.432 [2024-12-03 00:54:10.912916] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x230ecc0, cid 3, qid 0 00:19:58.432 [2024-12-03 00:54:10.912989] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:58.432 [2024-12-03 00:54:10.912995] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:58.432 [2024-12-03 00:54:10.912998] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:58.432 [2024-12-03 00:54:10.913001] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x230ecc0) on tqpair=0x22c2510 00:19:58.432 [2024-12-03 00:54:10.913007] nvme_ctrlr.c:1070:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] RTD3E = 0 us 00:19:58.432 [2024-12-03 00:54:10.913011] nvme_ctrlr.c:1073:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown timeout = 10000 ms 00:19:58.432 [2024-12-03 00:54:10.913020] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:58.432 [2024-12-03 00:54:10.913024] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:58.432 [2024-12-03 00:54:10.913028] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x22c2510) 00:19:58.432 [2024-12-03 00:54:10.913034] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.432 [2024-12-03 00:54:10.913049] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x230ecc0, cid 3, qid 0 00:19:58.432 [2024-12-03 00:54:10.913111] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:58.432 [2024-12-03 00:54:10.913116] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:58.432 [2024-12-03 00:54:10.913120] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:58.432 [2024-12-03 00:54:10.913123] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x230ecc0) on tqpair=0x22c2510 00:19:58.432 [2024-12-03 00:54:10.913133] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:58.432 [2024-12-03 00:54:10.913137] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:58.432 [2024-12-03 00:54:10.913141] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x22c2510) 00:19:58.432 [2024-12-03 00:54:10.913147] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.432 [2024-12-03 00:54:10.913162] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x230ecc0, cid 3, qid 0 00:19:58.432 [2024-12-03 00:54:10.913253] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:58.432 [2024-12-03 00:54:10.913259] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:58.432 [2024-12-03 00:54:10.913263] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:58.432 [2024-12-03 00:54:10.913266] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x230ecc0) on tqpair=0x22c2510 00:19:58.432 [2024-12-03 00:54:10.913276] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:58.432 [2024-12-03 00:54:10.913280] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:58.432 [2024-12-03 00:54:10.913284] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on 
tqpair(0x22c2510) 00:19:58.432 [2024-12-03 00:54:10.913290] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.432 [2024-12-03 00:54:10.913305] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x230ecc0, cid 3, qid 0 00:19:58.432 [2024-12-03 00:54:10.913366] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:58.432 [2024-12-03 00:54:10.913371] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:58.432 [2024-12-03 00:54:10.913375] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:58.432 [2024-12-03 00:54:10.913378] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x230ecc0) on tqpair=0x22c2510 00:19:58.432 [2024-12-03 00:54:10.913388] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:58.432 [2024-12-03 00:54:10.913392] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:58.432 [2024-12-03 00:54:10.913396] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x22c2510) 00:19:58.432 [2024-12-03 00:54:10.913402] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.432 [2024-12-03 00:54:10.913418] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x230ecc0, cid 3, qid 0 00:19:58.432 [2024-12-03 00:54:10.913521] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:58.432 [2024-12-03 00:54:10.913529] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:58.432 [2024-12-03 00:54:10.913532] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:58.432 [2024-12-03 00:54:10.913536] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x230ecc0) on tqpair=0x22c2510 00:19:58.432 [2024-12-03 00:54:10.913561] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:58.432 [2024-12-03 00:54:10.913566] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:58.432 [2024-12-03 00:54:10.913569] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x22c2510) 00:19:58.432 [2024-12-03 00:54:10.913576] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.432 [2024-12-03 00:54:10.913594] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x230ecc0, cid 3, qid 0 00:19:58.432 [2024-12-03 00:54:10.913673] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:58.432 [2024-12-03 00:54:10.913679] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:58.432 [2024-12-03 00:54:10.913682] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:58.432 [2024-12-03 00:54:10.913686] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x230ecc0) on tqpair=0x22c2510 00:19:58.432 [2024-12-03 00:54:10.913696] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:58.432 [2024-12-03 00:54:10.913700] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:58.432 [2024-12-03 00:54:10.913704] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x22c2510) 00:19:58.432 [2024-12-03 00:54:10.913710] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:19:58.432 [2024-12-03 00:54:10.913726] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x230ecc0, cid 3, qid 0 00:19:58.432 [2024-12-03 00:54:10.913784] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:58.432 [2024-12-03 00:54:10.913790] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:58.432 [2024-12-03 00:54:10.913794] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:58.432 [2024-12-03 00:54:10.913798] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x230ecc0) on tqpair=0x22c2510 00:19:58.432 [2024-12-03 00:54:10.913807] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:58.432 [2024-12-03 00:54:10.913812] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:58.432 [2024-12-03 00:54:10.913815] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x22c2510) 00:19:58.432 [2024-12-03 00:54:10.913822] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.432 [2024-12-03 00:54:10.913837] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x230ecc0, cid 3, qid 0 00:19:58.432 [2024-12-03 00:54:10.913890] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:58.432 [2024-12-03 00:54:10.913895] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:58.432 [2024-12-03 00:54:10.913899] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:58.432 [2024-12-03 00:54:10.913902] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x230ecc0) on tqpair=0x22c2510 00:19:58.432 [2024-12-03 00:54:10.913912] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:58.432 [2024-12-03 00:54:10.913916] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:58.432 [2024-12-03 00:54:10.913920] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x22c2510) 00:19:58.432 [2024-12-03 00:54:10.913926] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.432 [2024-12-03 00:54:10.913941] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x230ecc0, cid 3, qid 0 00:19:58.432 [2024-12-03 00:54:10.914013] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:58.432 [2024-12-03 00:54:10.914019] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:58.432 [2024-12-03 00:54:10.914022] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:58.432 [2024-12-03 00:54:10.914026] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x230ecc0) on tqpair=0x22c2510 00:19:58.432 [2024-12-03 00:54:10.914036] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:58.432 [2024-12-03 00:54:10.914040] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:58.432 [2024-12-03 00:54:10.914043] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x22c2510) 00:19:58.432 [2024-12-03 00:54:10.914050] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.432 [2024-12-03 00:54:10.914064] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x230ecc0, cid 3, qid 0 00:19:58.432 [2024-12-03 00:54:10.914174] 
nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:58.432 [2024-12-03 00:54:10.914183] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:58.432 [2024-12-03 00:54:10.914186] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:58.432 [2024-12-03 00:54:10.914190] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x230ecc0) on tqpair=0x22c2510 00:19:58.432 [2024-12-03 00:54:10.914201] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:58.432 [2024-12-03 00:54:10.914206] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:58.432 [2024-12-03 00:54:10.914209] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x22c2510) 00:19:58.432 [2024-12-03 00:54:10.914216] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.432 [2024-12-03 00:54:10.914234] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x230ecc0, cid 3, qid 0 00:19:58.432 [2024-12-03 00:54:10.914289] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:58.432 [2024-12-03 00:54:10.914295] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:58.432 [2024-12-03 00:54:10.914299] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:58.432 [2024-12-03 00:54:10.914302] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x230ecc0) on tqpair=0x22c2510 00:19:58.432 [2024-12-03 00:54:10.914313] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:58.432 [2024-12-03 00:54:10.914317] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:58.432 [2024-12-03 00:54:10.914320] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x22c2510) 00:19:58.432 [2024-12-03 00:54:10.914327] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.432 [2024-12-03 00:54:10.914343] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x230ecc0, cid 3, qid 0 00:19:58.432 [2024-12-03 00:54:10.914403] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:58.432 [2024-12-03 00:54:10.914423] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:58.432 [2024-12-03 00:54:10.914427] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:58.432 [2024-12-03 00:54:10.914430] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x230ecc0) on tqpair=0x22c2510 00:19:58.432 [2024-12-03 00:54:10.918483] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:58.432 [2024-12-03 00:54:10.918498] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:58.432 [2024-12-03 00:54:10.918503] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x22c2510) 00:19:58.432 [2024-12-03 00:54:10.918527] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.432 [2024-12-03 00:54:10.918551] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x230ecc0, cid 3, qid 0 00:19:58.432 [2024-12-03 00:54:10.918608] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:58.432 [2024-12-03 00:54:10.918615] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:58.432 
[2024-12-03 00:54:10.918618] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:58.432 [2024-12-03 00:54:10.918621] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x230ecc0) on tqpair=0x22c2510 00:19:58.432 [2024-12-03 00:54:10.918630] nvme_ctrlr.c:1192:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown complete in 5 milliseconds 00:19:58.432 00:19:58.432 00:54:10 -- host/identify.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -L all 00:19:58.695 [2024-12-03 00:54:10.953747] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:19:58.695 [2024-12-03 00:54:10.953810] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid93661 ] 00:19:58.695 [2024-12-03 00:54:11.091940] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to connect adminq (no timeout) 00:19:58.695 [2024-12-03 00:54:11.092003] nvme_tcp.c:2244:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:19:58.695 [2024-12-03 00:54:11.092010] nvme_tcp.c:2248:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:19:58.695 [2024-12-03 00:54:11.092018] nvme_tcp.c:2266:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:19:58.695 [2024-12-03 00:54:11.092025] sock.c: 334:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:19:58.695 [2024-12-03 00:54:11.092111] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for connect adminq (no timeout) 00:19:58.695 [2024-12-03 00:54:11.092151] nvme_tcp.c:1487:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x20fe510 0 00:19:58.695 [2024-12-03 00:54:11.101474] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:19:58.695 [2024-12-03 00:54:11.101496] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:19:58.696 [2024-12-03 00:54:11.101518] nvme_tcp.c:1533:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:19:58.696 [2024-12-03 00:54:11.101521] nvme_tcp.c:1534:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:19:58.696 [2024-12-03 00:54:11.101558] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:58.696 [2024-12-03 00:54:11.101564] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:58.696 [2024-12-03 00:54:11.101568] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x20fe510) 00:19:58.696 [2024-12-03 00:54:11.101578] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:19:58.696 [2024-12-03 00:54:11.101607] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x214a8a0, cid 0, qid 0 00:19:58.696 [2024-12-03 00:54:11.109456] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:58.696 [2024-12-03 00:54:11.109475] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:58.696 [2024-12-03 00:54:11.109495] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:58.696 [2024-12-03 00:54:11.109499] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x214a8a0) 
on tqpair=0x20fe510 00:19:58.696 [2024-12-03 00:54:11.109509] nvme_fabric.c: 620:nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:19:58.696 [2024-12-03 00:54:11.109516] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs (no timeout) 00:19:58.696 [2024-12-03 00:54:11.109521] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs wait for vs (no timeout) 00:19:58.696 [2024-12-03 00:54:11.109534] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:58.696 [2024-12-03 00:54:11.109539] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:58.696 [2024-12-03 00:54:11.109542] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x20fe510) 00:19:58.696 [2024-12-03 00:54:11.109550] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.696 [2024-12-03 00:54:11.109578] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x214a8a0, cid 0, qid 0 00:19:58.696 [2024-12-03 00:54:11.109648] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:58.696 [2024-12-03 00:54:11.109654] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:58.696 [2024-12-03 00:54:11.109657] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:58.696 [2024-12-03 00:54:11.109661] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x214a8a0) on tqpair=0x20fe510 00:19:58.696 [2024-12-03 00:54:11.109666] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap (no timeout) 00:19:58.696 [2024-12-03 00:54:11.109673] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap wait for cap (no timeout) 00:19:58.696 [2024-12-03 00:54:11.109680] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:58.696 [2024-12-03 00:54:11.109684] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:58.696 [2024-12-03 00:54:11.109687] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x20fe510) 00:19:58.696 [2024-12-03 00:54:11.109694] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.696 [2024-12-03 00:54:11.109728] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x214a8a0, cid 0, qid 0 00:19:58.696 [2024-12-03 00:54:11.109790] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:58.696 [2024-12-03 00:54:11.109796] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:58.696 [2024-12-03 00:54:11.109799] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:58.696 [2024-12-03 00:54:11.109803] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x214a8a0) on tqpair=0x20fe510 00:19:58.696 [2024-12-03 00:54:11.109809] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en (no timeout) 00:19:58.696 [2024-12-03 00:54:11.109817] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en wait for cc (timeout 15000 ms) 00:19:58.696 [2024-12-03 00:54:11.109824] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:58.696 [2024-12-03 00:54:11.109828] nvme_tcp.c: 
893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:58.696 [2024-12-03 00:54:11.109832] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x20fe510) 00:19:58.696 [2024-12-03 00:54:11.109838] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.696 [2024-12-03 00:54:11.109857] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x214a8a0, cid 0, qid 0 00:19:58.696 [2024-12-03 00:54:11.109920] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:58.696 [2024-12-03 00:54:11.109926] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:58.696 [2024-12-03 00:54:11.109929] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:58.696 [2024-12-03 00:54:11.109933] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x214a8a0) on tqpair=0x20fe510 00:19:58.696 [2024-12-03 00:54:11.109939] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:19:58.696 [2024-12-03 00:54:11.109949] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:58.696 [2024-12-03 00:54:11.109953] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:58.696 [2024-12-03 00:54:11.109956] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x20fe510) 00:19:58.696 [2024-12-03 00:54:11.109963] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.696 [2024-12-03 00:54:11.109981] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x214a8a0, cid 0, qid 0 00:19:58.696 [2024-12-03 00:54:11.110043] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:58.696 [2024-12-03 00:54:11.110049] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:58.696 [2024-12-03 00:54:11.110053] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:58.696 [2024-12-03 00:54:11.110057] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x214a8a0) on tqpair=0x20fe510 00:19:58.696 [2024-12-03 00:54:11.110062] nvme_ctrlr.c:3737:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 0 && CSTS.RDY = 0 00:19:58.696 [2024-12-03 00:54:11.110067] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to controller is disabled (timeout 15000 ms) 00:19:58.696 [2024-12-03 00:54:11.110074] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:19:58.696 [2024-12-03 00:54:11.110180] nvme_ctrlr.c:3930:nvme_ctrlr_process_init: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Setting CC.EN = 1 00:19:58.696 [2024-12-03 00:54:11.110189] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:19:58.696 [2024-12-03 00:54:11.110199] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:58.696 [2024-12-03 00:54:11.110203] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:58.696 [2024-12-03 00:54:11.110207] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x20fe510) 00:19:58.696 [2024-12-03 00:54:11.110214] 
nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.696 [2024-12-03 00:54:11.110236] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x214a8a0, cid 0, qid 0 00:19:58.696 [2024-12-03 00:54:11.110305] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:58.696 [2024-12-03 00:54:11.110312] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:58.696 [2024-12-03 00:54:11.110315] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:58.696 [2024-12-03 00:54:11.110319] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x214a8a0) on tqpair=0x20fe510 00:19:58.696 [2024-12-03 00:54:11.110325] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:19:58.696 [2024-12-03 00:54:11.110335] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:58.696 [2024-12-03 00:54:11.110339] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:58.696 [2024-12-03 00:54:11.110343] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x20fe510) 00:19:58.696 [2024-12-03 00:54:11.110350] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.696 [2024-12-03 00:54:11.110368] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x214a8a0, cid 0, qid 0 00:19:58.696 [2024-12-03 00:54:11.110451] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:58.696 [2024-12-03 00:54:11.110459] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:58.696 [2024-12-03 00:54:11.110463] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:58.696 [2024-12-03 00:54:11.110466] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x214a8a0) on tqpair=0x20fe510 00:19:58.696 [2024-12-03 00:54:11.110472] nvme_ctrlr.c:3772:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:19:58.696 [2024-12-03 00:54:11.110476] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to reset admin queue (timeout 30000 ms) 00:19:58.696 [2024-12-03 00:54:11.110489] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller (no timeout) 00:19:58.696 [2024-12-03 00:54:11.110502] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify controller (timeout 30000 ms) 00:19:58.696 [2024-12-03 00:54:11.110511] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:58.696 [2024-12-03 00:54:11.110516] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:58.696 [2024-12-03 00:54:11.110519] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x20fe510) 00:19:58.696 [2024-12-03 00:54:11.110526] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.696 [2024-12-03 00:54:11.110548] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x214a8a0, cid 0, qid 0 00:19:58.696 [2024-12-03 00:54:11.110683] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 
00:19:58.696 [2024-12-03 00:54:11.110699] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:19:58.696 [2024-12-03 00:54:11.110704] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:19:58.696 [2024-12-03 00:54:11.110708] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x20fe510): datao=0, datal=4096, cccid=0 00:19:58.696 [2024-12-03 00:54:11.110712] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x214a8a0) on tqpair(0x20fe510): expected_datao=0, payload_size=4096 00:19:58.696 [2024-12-03 00:54:11.110720] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:19:58.696 [2024-12-03 00:54:11.110724] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:19:58.696 [2024-12-03 00:54:11.110733] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:58.696 [2024-12-03 00:54:11.110738] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:58.696 [2024-12-03 00:54:11.110741] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:58.696 [2024-12-03 00:54:11.110745] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x214a8a0) on tqpair=0x20fe510 00:19:58.696 [2024-12-03 00:54:11.110754] nvme_ctrlr.c:1972:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_xfer_size 4294967295 00:19:58.697 [2024-12-03 00:54:11.110759] nvme_ctrlr.c:1976:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] MDTS max_xfer_size 131072 00:19:58.697 [2024-12-03 00:54:11.110763] nvme_ctrlr.c:1979:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CNTLID 0x0001 00:19:58.697 [2024-12-03 00:54:11.110767] nvme_ctrlr.c:2003:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_sges 16 00:19:58.697 [2024-12-03 00:54:11.110771] nvme_ctrlr.c:2018:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] fuses compare and write: 1 00:19:58.697 [2024-12-03 00:54:11.110776] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to configure AER (timeout 30000 ms) 00:19:58.697 [2024-12-03 00:54:11.110788] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for configure aer (timeout 30000 ms) 00:19:58.697 [2024-12-03 00:54:11.110797] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:58.697 [2024-12-03 00:54:11.110800] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:58.697 [2024-12-03 00:54:11.110804] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x20fe510) 00:19:58.697 [2024-12-03 00:54:11.110811] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:19:58.697 [2024-12-03 00:54:11.110832] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x214a8a0, cid 0, qid 0 00:19:58.697 [2024-12-03 00:54:11.110903] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:58.697 [2024-12-03 00:54:11.110909] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:58.697 [2024-12-03 00:54:11.110912] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:58.697 [2024-12-03 00:54:11.110916] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x214a8a0) on tqpair=0x20fe510 00:19:58.697 [2024-12-03 00:54:11.110924] nvme_tcp.c: 739:nvme_tcp_build_contig_request: 
*DEBUG*: enter 00:19:58.697 [2024-12-03 00:54:11.110928] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:58.697 [2024-12-03 00:54:11.110931] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x20fe510) 00:19:58.697 [2024-12-03 00:54:11.110937] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:19:58.697 [2024-12-03 00:54:11.110943] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:58.697 [2024-12-03 00:54:11.110947] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:58.697 [2024-12-03 00:54:11.110950] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x20fe510) 00:19:58.697 [2024-12-03 00:54:11.110955] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:19:58.697 [2024-12-03 00:54:11.110961] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:58.697 [2024-12-03 00:54:11.110965] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:58.697 [2024-12-03 00:54:11.110968] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x20fe510) 00:19:58.697 [2024-12-03 00:54:11.110973] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:19:58.697 [2024-12-03 00:54:11.110979] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:58.697 [2024-12-03 00:54:11.110982] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:58.697 [2024-12-03 00:54:11.110986] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x20fe510) 00:19:58.697 [2024-12-03 00:54:11.110991] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:19:58.697 [2024-12-03 00:54:11.110996] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set keep alive timeout (timeout 30000 ms) 00:19:58.697 [2024-12-03 00:54:11.111007] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:19:58.697 [2024-12-03 00:54:11.111014] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:58.697 [2024-12-03 00:54:11.111018] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:58.697 [2024-12-03 00:54:11.111022] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x20fe510) 00:19:58.697 [2024-12-03 00:54:11.111028] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.697 [2024-12-03 00:54:11.111049] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x214a8a0, cid 0, qid 0 00:19:58.697 [2024-12-03 00:54:11.111056] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x214aa00, cid 1, qid 0 00:19:58.697 [2024-12-03 00:54:11.111060] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x214ab60, cid 2, qid 0 00:19:58.697 [2024-12-03 00:54:11.111065] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x214acc0, cid 3, qid 0 00:19:58.697 [2024-12-03 00:54:11.111069] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: 
tcp req 0x214ae20, cid 4, qid 0 00:19:58.697 [2024-12-03 00:54:11.111191] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:58.697 [2024-12-03 00:54:11.111197] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:58.697 [2024-12-03 00:54:11.111200] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:58.697 [2024-12-03 00:54:11.111204] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x214ae20) on tqpair=0x20fe510 00:19:58.697 [2024-12-03 00:54:11.111210] nvme_ctrlr.c:2890:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Sending keep alive every 5000000 us 00:19:58.697 [2024-12-03 00:54:11.111214] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller iocs specific (timeout 30000 ms) 00:19:58.697 [2024-12-03 00:54:11.111222] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set number of queues (timeout 30000 ms) 00:19:58.697 [2024-12-03 00:54:11.111232] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set number of queues (timeout 30000 ms) 00:19:58.697 [2024-12-03 00:54:11.111239] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:58.697 [2024-12-03 00:54:11.111243] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:58.697 [2024-12-03 00:54:11.111247] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x20fe510) 00:19:58.697 [2024-12-03 00:54:11.111254] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:4 cdw10:00000007 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:19:58.697 [2024-12-03 00:54:11.111273] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x214ae20, cid 4, qid 0 00:19:58.697 [2024-12-03 00:54:11.111360] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:58.697 [2024-12-03 00:54:11.111366] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:58.697 [2024-12-03 00:54:11.111370] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:58.697 [2024-12-03 00:54:11.111373] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x214ae20) on tqpair=0x20fe510 00:19:58.697 [2024-12-03 00:54:11.111457] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify active ns (timeout 30000 ms) 00:19:58.697 [2024-12-03 00:54:11.111469] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify active ns (timeout 30000 ms) 00:19:58.697 [2024-12-03 00:54:11.111477] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:58.697 [2024-12-03 00:54:11.111481] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:58.697 [2024-12-03 00:54:11.111485] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x20fe510) 00:19:58.697 [2024-12-03 00:54:11.111492] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.697 [2024-12-03 00:54:11.111513] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x214ae20, cid 4, qid 0 00:19:58.697 [2024-12-03 00:54:11.111597] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:19:58.697 [2024-12-03 00:54:11.111604] 
nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:19:58.697 [2024-12-03 00:54:11.111607] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:19:58.697 [2024-12-03 00:54:11.111611] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x20fe510): datao=0, datal=4096, cccid=4 00:19:58.697 [2024-12-03 00:54:11.111615] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x214ae20) on tqpair(0x20fe510): expected_datao=0, payload_size=4096 00:19:58.697 [2024-12-03 00:54:11.111623] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:19:58.697 [2024-12-03 00:54:11.111627] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:19:58.697 [2024-12-03 00:54:11.111635] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:58.697 [2024-12-03 00:54:11.111641] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:58.697 [2024-12-03 00:54:11.111644] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:58.697 [2024-12-03 00:54:11.111648] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x214ae20) on tqpair=0x20fe510 00:19:58.697 [2024-12-03 00:54:11.111663] nvme_ctrlr.c:4556:spdk_nvme_ctrlr_get_ns: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Namespace 1 was added 00:19:58.697 [2024-12-03 00:54:11.111672] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns (timeout 30000 ms) 00:19:58.697 [2024-12-03 00:54:11.111683] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify ns (timeout 30000 ms) 00:19:58.697 [2024-12-03 00:54:11.111690] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:58.697 [2024-12-03 00:54:11.111694] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:58.697 [2024-12-03 00:54:11.111698] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x20fe510) 00:19:58.697 [2024-12-03 00:54:11.111705] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.697 [2024-12-03 00:54:11.111726] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x214ae20, cid 4, qid 0 00:19:58.697 [2024-12-03 00:54:11.111846] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:19:58.697 [2024-12-03 00:54:11.111852] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:19:58.697 [2024-12-03 00:54:11.111856] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:19:58.697 [2024-12-03 00:54:11.111859] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x20fe510): datao=0, datal=4096, cccid=4 00:19:58.697 [2024-12-03 00:54:11.111864] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x214ae20) on tqpair(0x20fe510): expected_datao=0, payload_size=4096 00:19:58.697 [2024-12-03 00:54:11.111871] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:19:58.697 [2024-12-03 00:54:11.111874] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:19:58.697 [2024-12-03 00:54:11.111882] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:58.697 [2024-12-03 00:54:11.111887] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:58.697 [2024-12-03 00:54:11.111891] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 
00:19:58.697 [2024-12-03 00:54:11.111894] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x214ae20) on tqpair=0x20fe510 00:19:58.697 [2024-12-03 00:54:11.111909] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:19:58.697 [2024-12-03 00:54:11.111919] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:19:58.698 [2024-12-03 00:54:11.111927] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:58.698 [2024-12-03 00:54:11.111931] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:58.698 [2024-12-03 00:54:11.111934] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x20fe510) 00:19:58.698 [2024-12-03 00:54:11.111941] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.698 [2024-12-03 00:54:11.111961] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x214ae20, cid 4, qid 0 00:19:58.698 [2024-12-03 00:54:11.112055] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:19:58.698 [2024-12-03 00:54:11.112061] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:19:58.698 [2024-12-03 00:54:11.112064] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:19:58.698 [2024-12-03 00:54:11.112068] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x20fe510): datao=0, datal=4096, cccid=4 00:19:58.698 [2024-12-03 00:54:11.112072] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x214ae20) on tqpair(0x20fe510): expected_datao=0, payload_size=4096 00:19:58.698 [2024-12-03 00:54:11.112079] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:19:58.698 [2024-12-03 00:54:11.112083] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:19:58.698 [2024-12-03 00:54:11.112091] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:58.698 [2024-12-03 00:54:11.112096] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:58.698 [2024-12-03 00:54:11.112100] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:58.698 [2024-12-03 00:54:11.112103] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x214ae20) on tqpair=0x20fe510 00:19:58.698 [2024-12-03 00:54:11.112112] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns iocs specific (timeout 30000 ms) 00:19:58.698 [2024-12-03 00:54:11.112121] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported log pages (timeout 30000 ms) 00:19:58.698 [2024-12-03 00:54:11.112130] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported features (timeout 30000 ms) 00:19:58.698 [2024-12-03 00:54:11.112137] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set doorbell buffer config (timeout 30000 ms) 00:19:58.698 [2024-12-03 00:54:11.112142] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set host ID (timeout 30000 ms) 00:19:58.698 [2024-12-03 00:54:11.112147] nvme_ctrlr.c:2978:nvme_ctrlr_set_host_id: *DEBUG*: 
[nqn.2016-06.io.spdk:cnode1] NVMe-oF transport - not sending Set Features - Host ID 00:19:58.698 [2024-12-03 00:54:11.112151] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to transport ready (timeout 30000 ms) 00:19:58.698 [2024-12-03 00:54:11.112156] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to ready (no timeout) 00:19:58.698 [2024-12-03 00:54:11.112179] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:58.698 [2024-12-03 00:54:11.112185] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:58.698 [2024-12-03 00:54:11.112188] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x20fe510) 00:19:58.698 [2024-12-03 00:54:11.112195] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:4 cdw10:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.698 [2024-12-03 00:54:11.112202] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:58.698 [2024-12-03 00:54:11.112206] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:58.698 [2024-12-03 00:54:11.112209] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x20fe510) 00:19:58.698 [2024-12-03 00:54:11.112215] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:19:58.698 [2024-12-03 00:54:11.112239] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x214ae20, cid 4, qid 0 00:19:58.698 [2024-12-03 00:54:11.112247] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x214af80, cid 5, qid 0 00:19:58.698 [2024-12-03 00:54:11.112320] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:58.698 [2024-12-03 00:54:11.112326] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:58.698 [2024-12-03 00:54:11.112330] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:58.698 [2024-12-03 00:54:11.112333] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x214ae20) on tqpair=0x20fe510 00:19:58.698 [2024-12-03 00:54:11.112340] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:58.698 [2024-12-03 00:54:11.112346] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:58.698 [2024-12-03 00:54:11.112349] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:58.698 [2024-12-03 00:54:11.112353] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x214af80) on tqpair=0x20fe510 00:19:58.698 [2024-12-03 00:54:11.112363] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:58.698 [2024-12-03 00:54:11.112367] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:58.698 [2024-12-03 00:54:11.112370] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x20fe510) 00:19:58.698 [2024-12-03 00:54:11.112376] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:5 cdw10:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.698 [2024-12-03 00:54:11.112395] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x214af80, cid 5, qid 0 00:19:58.698 [2024-12-03 00:54:11.112494] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:58.698 [2024-12-03 00:54:11.112502] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 
00:19:58.698 [2024-12-03 00:54:11.112506] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:58.698 [2024-12-03 00:54:11.112510] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x214af80) on tqpair=0x20fe510 00:19:58.698 [2024-12-03 00:54:11.112520] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:58.698 [2024-12-03 00:54:11.112524] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:58.698 [2024-12-03 00:54:11.112528] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x20fe510) 00:19:58.698 [2024-12-03 00:54:11.112535] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:5 cdw10:00000004 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.698 [2024-12-03 00:54:11.112554] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x214af80, cid 5, qid 0 00:19:58.698 [2024-12-03 00:54:11.112648] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:58.698 [2024-12-03 00:54:11.112654] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:58.698 [2024-12-03 00:54:11.112658] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:58.698 [2024-12-03 00:54:11.112662] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x214af80) on tqpair=0x20fe510 00:19:58.698 [2024-12-03 00:54:11.112672] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:58.698 [2024-12-03 00:54:11.112676] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:58.698 [2024-12-03 00:54:11.112680] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x20fe510) 00:19:58.698 [2024-12-03 00:54:11.112686] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:5 cdw10:00000007 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.698 [2024-12-03 00:54:11.112704] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x214af80, cid 5, qid 0 00:19:58.698 [2024-12-03 00:54:11.112775] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:58.698 [2024-12-03 00:54:11.112781] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:58.698 [2024-12-03 00:54:11.112785] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:58.698 [2024-12-03 00:54:11.112788] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x214af80) on tqpair=0x20fe510 00:19:58.698 [2024-12-03 00:54:11.112802] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:58.698 [2024-12-03 00:54:11.112806] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:58.698 [2024-12-03 00:54:11.112810] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x20fe510) 00:19:58.698 [2024-12-03 00:54:11.112831] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.698 [2024-12-03 00:54:11.112838] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:58.698 [2024-12-03 00:54:11.112842] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:58.698 [2024-12-03 00:54:11.112845] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x20fe510) 00:19:58.698 [2024-12-03 00:54:11.112851] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET 
LOG PAGE (02) qid:0 cid:4 nsid:ffffffff cdw10:007f0002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.698 [2024-12-03 00:54:11.112857] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:58.698 [2024-12-03 00:54:11.112861] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:58.698 [2024-12-03 00:54:11.112864] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=6 on tqpair(0x20fe510) 00:19:58.698 [2024-12-03 00:54:11.112870] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:ffffffff cdw10:007f0003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.698 [2024-12-03 00:54:11.112877] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:58.698 [2024-12-03 00:54:11.112881] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:58.698 [2024-12-03 00:54:11.112884] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x20fe510) 00:19:58.698 [2024-12-03 00:54:11.112890] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.698 [2024-12-03 00:54:11.112909] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x214af80, cid 5, qid 0 00:19:58.698 [2024-12-03 00:54:11.112916] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x214ae20, cid 4, qid 0 00:19:58.698 [2024-12-03 00:54:11.112920] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x214b0e0, cid 6, qid 0 00:19:58.698 [2024-12-03 00:54:11.112925] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x214b240, cid 7, qid 0 00:19:58.698 [2024-12-03 00:54:11.113095] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:19:58.698 [2024-12-03 00:54:11.113102] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:19:58.698 [2024-12-03 00:54:11.113105] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:19:58.698 [2024-12-03 00:54:11.113109] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x20fe510): datao=0, datal=8192, cccid=5 00:19:58.698 [2024-12-03 00:54:11.113113] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x214af80) on tqpair(0x20fe510): expected_datao=0, payload_size=8192 00:19:58.698 [2024-12-03 00:54:11.113130] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:19:58.698 [2024-12-03 00:54:11.113135] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:19:58.698 [2024-12-03 00:54:11.113141] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:19:58.698 [2024-12-03 00:54:11.113146] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:19:58.698 [2024-12-03 00:54:11.113149] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:19:58.698 [2024-12-03 00:54:11.113153] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x20fe510): datao=0, datal=512, cccid=4 00:19:58.699 [2024-12-03 00:54:11.113157] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x214ae20) on tqpair(0x20fe510): expected_datao=0, payload_size=512 00:19:58.699 [2024-12-03 00:54:11.113163] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:19:58.699 [2024-12-03 00:54:11.113167] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:19:58.699 [2024-12-03 00:54:11.113172] 
nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:19:58.699 [2024-12-03 00:54:11.113177] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:19:58.699 [2024-12-03 00:54:11.113180] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:19:58.699 [2024-12-03 00:54:11.113184] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x20fe510): datao=0, datal=512, cccid=6 00:19:58.699 [2024-12-03 00:54:11.113188] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x214b0e0) on tqpair(0x20fe510): expected_datao=0, payload_size=512 00:19:58.699 [2024-12-03 00:54:11.113194] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:19:58.699 [2024-12-03 00:54:11.113197] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:19:58.699 [2024-12-03 00:54:11.113203] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:19:58.699 [2024-12-03 00:54:11.113208] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:19:58.699 [2024-12-03 00:54:11.113211] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:19:58.699 [2024-12-03 00:54:11.113214] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x20fe510): datao=0, datal=4096, cccid=7 00:19:58.699 [2024-12-03 00:54:11.113218] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x214b240) on tqpair(0x20fe510): expected_datao=0, payload_size=4096 00:19:58.699 [2024-12-03 00:54:11.113225] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:19:58.699 [2024-12-03 00:54:11.113229] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:19:58.699 [2024-12-03 00:54:11.113237] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:58.699 [2024-12-03 00:54:11.113243] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:58.699 [2024-12-03 00:54:11.113246] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:58.699 [2024-12-03 00:54:11.113250] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x214af80) on tqpair=0x20fe510 00:19:58.699 ===================================================== 00:19:58.699 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:19:58.699 ===================================================== 00:19:58.699 Controller Capabilities/Features 00:19:58.699 ================================ 00:19:58.699 Vendor ID: 8086 00:19:58.699 Subsystem Vendor ID: 8086 00:19:58.699 Serial Number: SPDK00000000000001 00:19:58.699 Model Number: SPDK bdev Controller 00:19:58.699 Firmware Version: 24.01.1 00:19:58.699 Recommended Arb Burst: 6 00:19:58.699 IEEE OUI Identifier: e4 d2 5c 00:19:58.699 Multi-path I/O 00:19:58.699 May have multiple subsystem ports: Yes 00:19:58.699 May have multiple controllers: Yes 00:19:58.699 Associated with SR-IOV VF: No 00:19:58.699 Max Data Transfer Size: 131072 00:19:58.699 Max Number of Namespaces: 32 00:19:58.699 Max Number of I/O Queues: 127 00:19:58.699 NVMe Specification Version (VS): 1.3 00:19:58.699 NVMe Specification Version (Identify): 1.3 00:19:58.699 Maximum Queue Entries: 128 00:19:58.699 Contiguous Queues Required: Yes 00:19:58.699 Arbitration Mechanisms Supported 00:19:58.699 Weighted Round Robin: Not Supported 00:19:58.699 Vendor Specific: Not Supported 00:19:58.699 Reset Timeout: 15000 ms 00:19:58.699 Doorbell Stride: 4 bytes 00:19:58.699 NVM Subsystem Reset: Not Supported 00:19:58.699 Command Sets Supported 00:19:58.699 NVM 
Command Set: Supported 00:19:58.699 Boot Partition: Not Supported 00:19:58.699 Memory Page Size Minimum: 4096 bytes 00:19:58.699 Memory Page Size Maximum: 4096 bytes 00:19:58.699 Persistent Memory Region: Not Supported 00:19:58.699 Optional Asynchronous Events Supported 00:19:58.699 Namespace Attribute Notices: Supported 00:19:58.699 Firmware Activation Notices: Not Supported 00:19:58.699 ANA Change Notices: Not Supported 00:19:58.699 PLE Aggregate Log Change Notices: Not Supported 00:19:58.699 LBA Status Info Alert Notices: Not Supported 00:19:58.699 EGE Aggregate Log Change Notices: Not Supported 00:19:58.699 Normal NVM Subsystem Shutdown event: Not Supported 00:19:58.699 Zone Descriptor Change Notices: Not Supported 00:19:58.699 Discovery Log Change Notices: Not Supported 00:19:58.699 Controller Attributes 00:19:58.699 128-bit Host Identifier: Supported 00:19:58.699 Non-Operational Permissive Mode: Not Supported 00:19:58.699 NVM Sets: Not Supported 00:19:58.699 Read Recovery Levels: Not Supported 00:19:58.699 Endurance Groups: Not Supported 00:19:58.699 Predictable Latency Mode: Not Supported 00:19:58.699 Traffic Based Keep ALive: Not Supported 00:19:58.699 Namespace Granularity: Not Supported 00:19:58.699 SQ Associations: Not Supported 00:19:58.699 UUID List: Not Supported 00:19:58.699 Multi-Domain Subsystem: Not Supported 00:19:58.699 Fixed Capacity Management: Not Supported 00:19:58.699 Variable Capacity Management: Not Supported 00:19:58.699 Delete Endurance Group: Not Supported 00:19:58.699 Delete NVM Set: Not Supported 00:19:58.699 Extended LBA Formats Supported: Not Supported 00:19:58.699 Flexible Data Placement Supported: Not Supported 00:19:58.699 00:19:58.699 Controller Memory Buffer Support 00:19:58.699 ================================ 00:19:58.699 Supported: No 00:19:58.699 00:19:58.699 Persistent Memory Region Support 00:19:58.699 ================================ 00:19:58.699 Supported: No 00:19:58.699 00:19:58.699 Admin Command Set Attributes 00:19:58.699 ============================ 00:19:58.699 Security Send/Receive: Not Supported 00:19:58.699 Format NVM: Not Supported 00:19:58.699 Firmware Activate/Download: Not Supported 00:19:58.699 Namespace Management: Not Supported 00:19:58.699 Device Self-Test: Not Supported 00:19:58.699 Directives: Not Supported 00:19:58.699 NVMe-MI: Not Supported 00:19:58.699 Virtualization Management: Not Supported 00:19:58.699 Doorbell Buffer Config: Not Supported 00:19:58.699 Get LBA Status Capability: Not Supported 00:19:58.699 Command & Feature Lockdown Capability: Not Supported 00:19:58.699 Abort Command Limit: 4 00:19:58.699 Async Event Request Limit: 4 00:19:58.699 Number of Firmware Slots: N/A 00:19:58.699 Firmware Slot 1 Read-Only: N/A 00:19:58.699 Firmware Activation Without Reset: [2024-12-03 00:54:11.113264] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:58.699 [2024-12-03 00:54:11.113271] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:58.699 [2024-12-03 00:54:11.113274] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:58.699 [2024-12-03 00:54:11.113277] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x214ae20) on tqpair=0x20fe510 00:19:58.699 [2024-12-03 00:54:11.113287] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:58.699 [2024-12-03 00:54:11.113293] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:58.699 [2024-12-03 00:54:11.113296] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: 
enter 00:19:58.699 [2024-12-03 00:54:11.113300] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x214b0e0) on tqpair=0x20fe510 00:19:58.699 [2024-12-03 00:54:11.113307] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:58.699 [2024-12-03 00:54:11.113312] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:58.699 [2024-12-03 00:54:11.113316] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:58.699 [2024-12-03 00:54:11.113319] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x214b240) on tqpair=0x20fe510 00:19:58.699 N/A 00:19:58.699 Multiple Update Detection Support: N/A 00:19:58.699 Firmware Update Granularity: No Information Provided 00:19:58.699 Per-Namespace SMART Log: No 00:19:58.699 Asymmetric Namespace Access Log Page: Not Supported 00:19:58.699 Subsystem NQN: nqn.2016-06.io.spdk:cnode1 00:19:58.699 Command Effects Log Page: Supported 00:19:58.699 Get Log Page Extended Data: Supported 00:19:58.699 Telemetry Log Pages: Not Supported 00:19:58.699 Persistent Event Log Pages: Not Supported 00:19:58.699 Supported Log Pages Log Page: May Support 00:19:58.699 Commands Supported & Effects Log Page: Not Supported 00:19:58.699 Feature Identifiers & Effects Log Page:May Support 00:19:58.699 NVMe-MI Commands & Effects Log Page: May Support 00:19:58.699 Data Area 4 for Telemetry Log: Not Supported 00:19:58.699 Error Log Page Entries Supported: 128 00:19:58.699 Keep Alive: Supported 00:19:58.699 Keep Alive Granularity: 10000 ms 00:19:58.699 00:19:58.699 NVM Command Set Attributes 00:19:58.699 ========================== 00:19:58.699 Submission Queue Entry Size 00:19:58.699 Max: 64 00:19:58.699 Min: 64 00:19:58.699 Completion Queue Entry Size 00:19:58.699 Max: 16 00:19:58.699 Min: 16 00:19:58.699 Number of Namespaces: 32 00:19:58.699 Compare Command: Supported 00:19:58.699 Write Uncorrectable Command: Not Supported 00:19:58.699 Dataset Management Command: Supported 00:19:58.699 Write Zeroes Command: Supported 00:19:58.699 Set Features Save Field: Not Supported 00:19:58.699 Reservations: Supported 00:19:58.699 Timestamp: Not Supported 00:19:58.699 Copy: Supported 00:19:58.699 Volatile Write Cache: Present 00:19:58.699 Atomic Write Unit (Normal): 1 00:19:58.699 Atomic Write Unit (PFail): 1 00:19:58.699 Atomic Compare & Write Unit: 1 00:19:58.699 Fused Compare & Write: Supported 00:19:58.699 Scatter-Gather List 00:19:58.699 SGL Command Set: Supported 00:19:58.699 SGL Keyed: Supported 00:19:58.699 SGL Bit Bucket Descriptor: Not Supported 00:19:58.699 SGL Metadata Pointer: Not Supported 00:19:58.699 Oversized SGL: Not Supported 00:19:58.699 SGL Metadata Address: Not Supported 00:19:58.699 SGL Offset: Supported 00:19:58.699 Transport SGL Data Block: Not Supported 00:19:58.700 Replay Protected Memory Block: Not Supported 00:19:58.700 00:19:58.700 Firmware Slot Information 00:19:58.700 ========================= 00:19:58.700 Active slot: 1 00:19:58.700 Slot 1 Firmware Revision: 24.01.1 00:19:58.700 00:19:58.700 00:19:58.700 Commands Supported and Effects 00:19:58.700 ============================== 00:19:58.700 Admin Commands 00:19:58.700 -------------- 00:19:58.700 Get Log Page (02h): Supported 00:19:58.700 Identify (06h): Supported 00:19:58.700 Abort (08h): Supported 00:19:58.700 Set Features (09h): Supported 00:19:58.700 Get Features (0Ah): Supported 00:19:58.700 Asynchronous Event Request (0Ch): Supported 00:19:58.700 Keep Alive (18h): Supported 00:19:58.700 I/O Commands 00:19:58.700 ------------ 
00:19:58.700 Flush (00h): Supported LBA-Change 00:19:58.700 Write (01h): Supported LBA-Change 00:19:58.700 Read (02h): Supported 00:19:58.700 Compare (05h): Supported 00:19:58.700 Write Zeroes (08h): Supported LBA-Change 00:19:58.700 Dataset Management (09h): Supported LBA-Change 00:19:58.700 Copy (19h): Supported LBA-Change 00:19:58.700 Unknown (79h): Supported LBA-Change 00:19:58.700 Unknown (7Ah): Supported 00:19:58.700 00:19:58.700 Error Log 00:19:58.700 ========= 00:19:58.700 00:19:58.700 Arbitration 00:19:58.700 =========== 00:19:58.700 Arbitration Burst: 1 00:19:58.700 00:19:58.700 Power Management 00:19:58.700 ================ 00:19:58.700 Number of Power States: 1 00:19:58.700 Current Power State: Power State #0 00:19:58.700 Power State #0: 00:19:58.700 Max Power: 0.00 W 00:19:58.700 Non-Operational State: Operational 00:19:58.700 Entry Latency: Not Reported 00:19:58.700 Exit Latency: Not Reported 00:19:58.700 Relative Read Throughput: 0 00:19:58.700 Relative Read Latency: 0 00:19:58.700 Relative Write Throughput: 0 00:19:58.700 Relative Write Latency: 0 00:19:58.700 Idle Power: Not Reported 00:19:58.700 Active Power: Not Reported 00:19:58.700 Non-Operational Permissive Mode: Not Supported 00:19:58.700 00:19:58.700 Health Information 00:19:58.700 ================== 00:19:58.700 Critical Warnings: 00:19:58.700 Available Spare Space: OK 00:19:58.700 Temperature: OK 00:19:58.700 Device Reliability: OK 00:19:58.700 Read Only: No 00:19:58.700 Volatile Memory Backup: OK 00:19:58.700 Current Temperature: 0 Kelvin (-273 Celsius) 00:19:58.700 Temperature Threshold: [2024-12-03 00:54:11.113413] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:58.700 [2024-12-03 00:54:11.113420] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:58.700 [2024-12-03 00:54:11.113423] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x20fe510) 00:19:58.700 [2024-12-03 00:54:11.116494] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:7 cdw10:00000005 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.700 [2024-12-03 00:54:11.116537] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x214b240, cid 7, qid 0 00:19:58.700 [2024-12-03 00:54:11.116611] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:58.700 [2024-12-03 00:54:11.116618] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:58.700 [2024-12-03 00:54:11.116621] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:58.700 [2024-12-03 00:54:11.116625] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x214b240) on tqpair=0x20fe510 00:19:58.700 [2024-12-03 00:54:11.116666] nvme_ctrlr.c:4220:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Prepare to destruct SSD 00:19:58.700 [2024-12-03 00:54:11.116679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:58.700 [2024-12-03 00:54:11.116686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:58.700 [2024-12-03 00:54:11.116691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:58.700 [2024-12-03 00:54:11.116696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:19:58.700 [2024-12-03 00:54:11.116720] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:58.700 [2024-12-03 00:54:11.116724] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:58.700 [2024-12-03 00:54:11.116743] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x20fe510) 00:19:58.700 [2024-12-03 00:54:11.116750] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.700 [2024-12-03 00:54:11.116773] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x214acc0, cid 3, qid 0 00:19:58.700 [2024-12-03 00:54:11.116840] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:58.700 [2024-12-03 00:54:11.116846] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:58.700 [2024-12-03 00:54:11.116850] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:58.700 [2024-12-03 00:54:11.116853] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x214acc0) on tqpair=0x20fe510 00:19:58.700 [2024-12-03 00:54:11.116861] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:58.700 [2024-12-03 00:54:11.116865] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:58.700 [2024-12-03 00:54:11.116869] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x20fe510) 00:19:58.700 [2024-12-03 00:54:11.116875] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.700 [2024-12-03 00:54:11.116896] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x214acc0, cid 3, qid 0 00:19:58.700 [2024-12-03 00:54:11.116987] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:58.700 [2024-12-03 00:54:11.117013] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:58.700 [2024-12-03 00:54:11.117017] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:58.700 [2024-12-03 00:54:11.117021] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x214acc0) on tqpair=0x20fe510 00:19:58.700 [2024-12-03 00:54:11.117027] nvme_ctrlr.c:1070:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] RTD3E = 0 us 00:19:58.700 [2024-12-03 00:54:11.117031] nvme_ctrlr.c:1073:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown timeout = 10000 ms 00:19:58.700 [2024-12-03 00:54:11.117041] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:58.700 [2024-12-03 00:54:11.117046] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:58.700 [2024-12-03 00:54:11.117049] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x20fe510) 00:19:58.700 [2024-12-03 00:54:11.117056] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.700 [2024-12-03 00:54:11.117074] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x214acc0, cid 3, qid 0 00:19:58.700 [2024-12-03 00:54:11.117168] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:58.700 [2024-12-03 00:54:11.117186] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:58.700 [2024-12-03 00:54:11.117190] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:58.700 [2024-12-03 
00:54:11.117193] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x214acc0) on tqpair=0x20fe510 00:19:58.700 [2024-12-03 00:54:11.117203] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:58.700 [2024-12-03 00:54:11.117208] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:58.700 [2024-12-03 00:54:11.117211] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x20fe510) 00:19:58.700 [2024-12-03 00:54:11.117232] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.700 [2024-12-03 00:54:11.117249] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x214acc0, cid 3, qid 0 00:19:58.700 [2024-12-03 00:54:11.117329] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:58.700 [2024-12-03 00:54:11.117335] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:58.700 [2024-12-03 00:54:11.117338] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:58.700 [2024-12-03 00:54:11.117342] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x214acc0) on tqpair=0x20fe510 00:19:58.700 [2024-12-03 00:54:11.117352] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:58.700 [2024-12-03 00:54:11.117356] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:58.700 [2024-12-03 00:54:11.117360] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x20fe510) 00:19:58.700 [2024-12-03 00:54:11.117366] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.700 [2024-12-03 00:54:11.117383] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x214acc0, cid 3, qid 0 00:19:58.700 [2024-12-03 00:54:11.117480] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:58.700 [2024-12-03 00:54:11.117494] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:58.700 [2024-12-03 00:54:11.117498] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:58.701 [2024-12-03 00:54:11.117502] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x214acc0) on tqpair=0x20fe510 00:19:58.701 [2024-12-03 00:54:11.117512] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:58.701 [2024-12-03 00:54:11.117517] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:58.701 [2024-12-03 00:54:11.117520] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x20fe510) 00:19:58.701 [2024-12-03 00:54:11.117527] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.701 [2024-12-03 00:54:11.117547] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x214acc0, cid 3, qid 0 00:19:58.701 [2024-12-03 00:54:11.117614] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:58.701 [2024-12-03 00:54:11.117627] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:58.701 [2024-12-03 00:54:11.117631] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:58.701 [2024-12-03 00:54:11.117635] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x214acc0) on tqpair=0x20fe510 00:19:58.701 [2024-12-03 00:54:11.117645] nvme_tcp.c: 
739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:58.701 [2024-12-03 00:54:11.117649] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:58.701 [2024-12-03 00:54:11.117653] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x20fe510) 00:19:58.701 [2024-12-03 00:54:11.117659] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.701 [2024-12-03 00:54:11.117678] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x214acc0, cid 3, qid 0 00:19:58.701 [2024-12-03 00:54:11.117747] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:58.701 [2024-12-03 00:54:11.117775] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:58.701 [2024-12-03 00:54:11.117779] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:58.701 [2024-12-03 00:54:11.117783] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x214acc0) on tqpair=0x20fe510 00:19:58.701 [2024-12-03 00:54:11.117793] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:58.701 [2024-12-03 00:54:11.117797] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:58.701 [2024-12-03 00:54:11.117800] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x20fe510) 00:19:58.701 [2024-12-03 00:54:11.117806] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.701 [2024-12-03 00:54:11.117824] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x214acc0, cid 3, qid 0 00:19:58.701 [2024-12-03 00:54:11.117888] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:58.701 [2024-12-03 00:54:11.117909] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:58.701 [2024-12-03 00:54:11.117913] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:58.701 [2024-12-03 00:54:11.117916] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x214acc0) on tqpair=0x20fe510 00:19:58.701 [2024-12-03 00:54:11.117926] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:58.701 [2024-12-03 00:54:11.117931] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:58.701 [2024-12-03 00:54:11.117934] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x20fe510) 00:19:58.701 [2024-12-03 00:54:11.117940] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.701 [2024-12-03 00:54:11.117958] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x214acc0, cid 3, qid 0 00:19:58.701 [2024-12-03 00:54:11.118032] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:58.701 [2024-12-03 00:54:11.118042] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:58.701 [2024-12-03 00:54:11.118045] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:58.701 [2024-12-03 00:54:11.118049] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x214acc0) on tqpair=0x20fe510 00:19:58.701 [2024-12-03 00:54:11.118059] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:58.701 [2024-12-03 00:54:11.118063] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:58.701 [2024-12-03 
00:54:11.118067] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x20fe510) 00:19:58.701 [2024-12-03 00:54:11.118074] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.701 [2024-12-03 00:54:11.118091] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x214acc0, cid 3, qid 0 00:19:58.701 [2024-12-03 00:54:11.118175] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:58.701 [2024-12-03 00:54:11.118190] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:58.701 [2024-12-03 00:54:11.118194] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:58.701 [2024-12-03 00:54:11.118198] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x214acc0) on tqpair=0x20fe510 00:19:58.701 [2024-12-03 00:54:11.118209] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:58.701 [2024-12-03 00:54:11.118213] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:58.701 [2024-12-03 00:54:11.118217] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x20fe510) 00:19:58.701 [2024-12-03 00:54:11.118224] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.701 [2024-12-03 00:54:11.118244] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x214acc0, cid 3, qid 0 00:19:58.701 [2024-12-03 00:54:11.118312] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:58.701 [2024-12-03 00:54:11.118322] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:58.701 [2024-12-03 00:54:11.118326] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:58.701 [2024-12-03 00:54:11.118330] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x214acc0) on tqpair=0x20fe510 00:19:58.701 [2024-12-03 00:54:11.118340] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:58.701 [2024-12-03 00:54:11.118345] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:58.701 [2024-12-03 00:54:11.118348] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x20fe510) 00:19:58.701 [2024-12-03 00:54:11.118355] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.701 [2024-12-03 00:54:11.118373] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x214acc0, cid 3, qid 0 00:19:58.701 [2024-12-03 00:54:11.118475] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:58.701 [2024-12-03 00:54:11.118483] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:58.701 [2024-12-03 00:54:11.118486] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:58.701 [2024-12-03 00:54:11.118490] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x214acc0) on tqpair=0x20fe510 00:19:58.701 [2024-12-03 00:54:11.118500] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:58.701 [2024-12-03 00:54:11.118504] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:58.701 [2024-12-03 00:54:11.118507] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x20fe510) 00:19:58.701 [2024-12-03 00:54:11.118514] nvme_qpair.c: 
218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.701 [2024-12-03 00:54:11.118549] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x214acc0, cid 3, qid 0 00:19:58.701 [2024-12-03 00:54:11.118612] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:58.701 [2024-12-03 00:54:11.118618] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:58.701 [2024-12-03 00:54:11.118621] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:58.701 [2024-12-03 00:54:11.118625] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x214acc0) on tqpair=0x20fe510 00:19:58.701 [2024-12-03 00:54:11.118635] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:58.701 [2024-12-03 00:54:11.118639] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:58.701 [2024-12-03 00:54:11.118643] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x20fe510) 00:19:58.701 [2024-12-03 00:54:11.118649] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.701 [2024-12-03 00:54:11.118667] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x214acc0, cid 3, qid 0 00:19:58.701 [2024-12-03 00:54:11.118756] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:58.701 [2024-12-03 00:54:11.118762] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:58.701 [2024-12-03 00:54:11.118765] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:58.701 [2024-12-03 00:54:11.118769] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x214acc0) on tqpair=0x20fe510 00:19:58.701 [2024-12-03 00:54:11.118779] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:58.701 [2024-12-03 00:54:11.118783] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:58.701 [2024-12-03 00:54:11.118787] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x20fe510) 00:19:58.701 [2024-12-03 00:54:11.118793] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.701 [2024-12-03 00:54:11.118811] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x214acc0, cid 3, qid 0 00:19:58.701 [2024-12-03 00:54:11.118884] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:58.701 [2024-12-03 00:54:11.118894] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:58.701 [2024-12-03 00:54:11.118898] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:58.701 [2024-12-03 00:54:11.118901] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x214acc0) on tqpair=0x20fe510 00:19:58.701 [2024-12-03 00:54:11.118912] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:58.701 [2024-12-03 00:54:11.118916] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:58.701 [2024-12-03 00:54:11.118920] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x20fe510) 00:19:58.701 [2024-12-03 00:54:11.118926] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.701 [2024-12-03 00:54:11.118945] nvme_tcp.c: 
872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x214acc0, cid 3, qid 0 00:19:58.701 [2024-12-03 00:54:11.119012] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:58.701 [2024-12-03 00:54:11.119018] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:58.701 [2024-12-03 00:54:11.119021] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:58.701 [2024-12-03 00:54:11.119025] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x214acc0) on tqpair=0x20fe510 00:19:58.701 [2024-12-03 00:54:11.119035] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:58.701 [2024-12-03 00:54:11.119039] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:58.701 [2024-12-03 00:54:11.119043] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x20fe510) 00:19:58.701 [2024-12-03 00:54:11.119050] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.701 [2024-12-03 00:54:11.119067] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x214acc0, cid 3, qid 0 00:19:58.701 [2024-12-03 00:54:11.119145] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:58.701 [2024-12-03 00:54:11.119155] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:58.701 [2024-12-03 00:54:11.119159] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:58.702 [2024-12-03 00:54:11.119162] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x214acc0) on tqpair=0x20fe510 00:19:58.702 [2024-12-03 00:54:11.119172] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:58.702 [2024-12-03 00:54:11.119177] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:58.702 [2024-12-03 00:54:11.119180] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x20fe510) 00:19:58.702 [2024-12-03 00:54:11.119187] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.702 [2024-12-03 00:54:11.119204] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x214acc0, cid 3, qid 0 00:19:58.702 [2024-12-03 00:54:11.119264] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:58.702 [2024-12-03 00:54:11.119274] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:58.702 [2024-12-03 00:54:11.119278] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:58.702 [2024-12-03 00:54:11.119281] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x214acc0) on tqpair=0x20fe510 00:19:58.702 [2024-12-03 00:54:11.119292] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:58.702 [2024-12-03 00:54:11.119296] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:58.702 [2024-12-03 00:54:11.119299] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x20fe510) 00:19:58.702 [2024-12-03 00:54:11.119306] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.702 [2024-12-03 00:54:11.119323] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x214acc0, cid 3, qid 0 00:19:58.702 [2024-12-03 00:54:11.119397] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 
00:19:58.702 [2024-12-03 00:54:11.119403] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:58.702 [2024-12-03 00:54:11.119406] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:58.702 [2024-12-03 00:54:11.119409] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x214acc0) on tqpair=0x20fe510 00:19:58.702 [2024-12-03 00:54:11.119434] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:58.702 [2024-12-03 00:54:11.119439] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:58.702 [2024-12-03 00:54:11.119454] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x20fe510) 00:19:58.702 [2024-12-03 00:54:11.119477] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.702 [2024-12-03 00:54:11.119497] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x214acc0, cid 3, qid 0 00:19:58.702 [2024-12-03 00:54:11.119568] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:58.702 [2024-12-03 00:54:11.119575] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:58.702 [2024-12-03 00:54:11.119578] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:58.702 [2024-12-03 00:54:11.119582] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x214acc0) on tqpair=0x20fe510 00:19:58.702 [2024-12-03 00:54:11.119593] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:58.702 [2024-12-03 00:54:11.119597] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:58.702 [2024-12-03 00:54:11.119601] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x20fe510) 00:19:58.702 [2024-12-03 00:54:11.119608] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.702 [2024-12-03 00:54:11.119627] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x214acc0, cid 3, qid 0 00:19:58.702 [2024-12-03 00:54:11.119696] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:58.702 [2024-12-03 00:54:11.119702] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:58.702 [2024-12-03 00:54:11.119706] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:58.702 [2024-12-03 00:54:11.119710] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x214acc0) on tqpair=0x20fe510 00:19:58.702 [2024-12-03 00:54:11.119720] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:58.702 [2024-12-03 00:54:11.119724] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:58.702 [2024-12-03 00:54:11.119728] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x20fe510) 00:19:58.702 [2024-12-03 00:54:11.119735] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.702 [2024-12-03 00:54:11.119753] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x214acc0, cid 3, qid 0 00:19:58.702 [2024-12-03 00:54:11.119835] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:58.702 [2024-12-03 00:54:11.119860] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:58.702 [2024-12-03 00:54:11.119864] 
nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:58.702 [2024-12-03 00:54:11.119867] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x214acc0) on tqpair=0x20fe510 00:19:58.702 [2024-12-03 00:54:11.119893] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:58.702 [2024-12-03 00:54:11.119897] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:58.702 [2024-12-03 00:54:11.119901] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x20fe510) 00:19:58.702 [2024-12-03 00:54:11.119907] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.702 [2024-12-03 00:54:11.119925] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x214acc0, cid 3, qid 0 00:19:58.702 [2024-12-03 00:54:11.120005] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:58.702 [2024-12-03 00:54:11.120011] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:58.702 [2024-12-03 00:54:11.120014] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:58.702 [2024-12-03 00:54:11.120018] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x214acc0) on tqpair=0x20fe510 00:19:58.702 [2024-12-03 00:54:11.120028] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:58.702 [2024-12-03 00:54:11.120032] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:58.702 [2024-12-03 00:54:11.120036] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x20fe510) 00:19:58.702 [2024-12-03 00:54:11.120043] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.702 [2024-12-03 00:54:11.120060] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x214acc0, cid 3, qid 0 00:19:58.702 [2024-12-03 00:54:11.120122] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:58.702 [2024-12-03 00:54:11.120128] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:58.702 [2024-12-03 00:54:11.120132] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:58.702 [2024-12-03 00:54:11.120135] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x214acc0) on tqpair=0x20fe510 00:19:58.702 [2024-12-03 00:54:11.120145] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:58.702 [2024-12-03 00:54:11.120150] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:58.702 [2024-12-03 00:54:11.120153] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x20fe510) 00:19:58.702 [2024-12-03 00:54:11.120160] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.702 [2024-12-03 00:54:11.120178] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x214acc0, cid 3, qid 0 00:19:58.702 [2024-12-03 00:54:11.120244] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:58.702 [2024-12-03 00:54:11.120250] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:58.702 [2024-12-03 00:54:11.120253] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:58.702 [2024-12-03 00:54:11.120257] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x214acc0) on 
tqpair=0x20fe510 00:19:58.702 [2024-12-03 00:54:11.120267] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:58.702 [2024-12-03 00:54:11.120272] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:58.702 [2024-12-03 00:54:11.120275] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x20fe510) 00:19:58.702 [2024-12-03 00:54:11.120282] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.702 [2024-12-03 00:54:11.120299] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x214acc0, cid 3, qid 0 00:19:58.702 [2024-12-03 00:54:11.120359] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:58.702 [2024-12-03 00:54:11.120370] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:58.702 [2024-12-03 00:54:11.120374] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:58.702 [2024-12-03 00:54:11.120378] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x214acc0) on tqpair=0x20fe510 00:19:58.702 [2024-12-03 00:54:11.120388] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:58.702 [2024-12-03 00:54:11.120392] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:58.702 [2024-12-03 00:54:11.120396] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x20fe510) 00:19:58.702 [2024-12-03 00:54:11.120402] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.702 [2024-12-03 00:54:11.120420] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x214acc0, cid 3, qid 0 00:19:58.702 [2024-12-03 00:54:11.124427] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:58.702 [2024-12-03 00:54:11.124445] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:58.702 [2024-12-03 00:54:11.124450] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:58.702 [2024-12-03 00:54:11.124454] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x214acc0) on tqpair=0x20fe510 00:19:58.702 [2024-12-03 00:54:11.124467] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:58.702 [2024-12-03 00:54:11.124472] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:58.702 [2024-12-03 00:54:11.124475] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x20fe510) 00:19:58.702 [2024-12-03 00:54:11.124482] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.702 [2024-12-03 00:54:11.124507] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x214acc0, cid 3, qid 0 00:19:58.702 [2024-12-03 00:54:11.124587] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:58.702 [2024-12-03 00:54:11.124594] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:58.702 [2024-12-03 00:54:11.124597] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:58.702 [2024-12-03 00:54:11.124601] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x214acc0) on tqpair=0x20fe510 00:19:58.702 [2024-12-03 00:54:11.124609] nvme_ctrlr.c:1192:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown complete in 7 milliseconds 
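(Editor's note) The debug records above show the controller teardown the identify host test triggers once its report has been printed: nvme_ctrlr_shutdown_set_cc_done logs RTD3E = 0 us and a 10000 ms shutdown timeout, the host then polls CSTS with repeated FABRIC PROPERTY GET capsules on cid 3, and nvme_ctrlr_shutdown_poll_async reports completion after 7 milliseconds. A minimal sketch of re-running the same identify-plus-shutdown pass by hand against this listener is given below; the binary path is an assumption (default SPDK build layout), while the transport parameters and subsystem NQN are taken from the log.

    # Hedged sketch, not the harness's exact command line: connect to the same
    # TCP listener, print the controller/namespace report, and detach (the
    # detach path drives the CC.SHN shutdown handshake seen in the trace above).
    ./build/examples/identify \
        -r "trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1"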
00:19:58.702 0 Kelvin (-273 Celsius) 00:19:58.702 Available Spare: 0% 00:19:58.702 Available Spare Threshold: 0% 00:19:58.702 Life Percentage Used: 0% 00:19:58.702 Data Units Read: 0 00:19:58.702 Data Units Written: 0 00:19:58.702 Host Read Commands: 0 00:19:58.702 Host Write Commands: 0 00:19:58.702 Controller Busy Time: 0 minutes 00:19:58.702 Power Cycles: 0 00:19:58.702 Power On Hours: 0 hours 00:19:58.702 Unsafe Shutdowns: 0 00:19:58.703 Unrecoverable Media Errors: 0 00:19:58.703 Lifetime Error Log Entries: 0 00:19:58.703 Warning Temperature Time: 0 minutes 00:19:58.703 Critical Temperature Time: 0 minutes 00:19:58.703 00:19:58.703 Number of Queues 00:19:58.703 ================ 00:19:58.703 Number of I/O Submission Queues: 127 00:19:58.703 Number of I/O Completion Queues: 127 00:19:58.703 00:19:58.703 Active Namespaces 00:19:58.703 ================= 00:19:58.703 Namespace ID:1 00:19:58.703 Error Recovery Timeout: Unlimited 00:19:58.703 Command Set Identifier: NVM (00h) 00:19:58.703 Deallocate: Supported 00:19:58.703 Deallocated/Unwritten Error: Not Supported 00:19:58.703 Deallocated Read Value: Unknown 00:19:58.703 Deallocate in Write Zeroes: Not Supported 00:19:58.703 Deallocated Guard Field: 0xFFFF 00:19:58.703 Flush: Supported 00:19:58.703 Reservation: Supported 00:19:58.703 Namespace Sharing Capabilities: Multiple Controllers 00:19:58.703 Size (in LBAs): 131072 (0GiB) 00:19:58.703 Capacity (in LBAs): 131072 (0GiB) 00:19:58.703 Utilization (in LBAs): 131072 (0GiB) 00:19:58.703 NGUID: ABCDEF0123456789ABCDEF0123456789 00:19:58.703 EUI64: ABCDEF0123456789 00:19:58.703 UUID: fe1bb9c3-3bdf-49f6-b1f3-b78e1bdbc037 00:19:58.703 Thin Provisioning: Not Supported 00:19:58.703 Per-NS Atomic Units: Yes 00:19:58.703 Atomic Boundary Size (Normal): 0 00:19:58.703 Atomic Boundary Size (PFail): 0 00:19:58.703 Atomic Boundary Offset: 0 00:19:58.703 Maximum Single Source Range Length: 65535 00:19:58.703 Maximum Copy Length: 65535 00:19:58.703 Maximum Source Range Count: 1 00:19:58.703 NGUID/EUI64 Never Reused: No 00:19:58.703 Namespace Write Protected: No 00:19:58.703 Number of LBA Formats: 1 00:19:58.703 Current LBA Format: LBA Format #00 00:19:58.703 LBA Format #00: Data Size: 512 Metadata Size: 0 00:19:58.703 00:19:58.703 00:54:11 -- host/identify.sh@51 -- # sync 00:19:58.962 00:54:11 -- host/identify.sh@52 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:19:58.962 00:54:11 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:58.962 00:54:11 -- common/autotest_common.sh@10 -- # set +x 00:19:58.962 00:54:11 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:58.962 00:54:11 -- host/identify.sh@54 -- # trap - SIGINT SIGTERM EXIT 00:19:58.962 00:54:11 -- host/identify.sh@56 -- # nvmftestfini 00:19:58.962 00:54:11 -- nvmf/common.sh@476 -- # nvmfcleanup 00:19:58.962 00:54:11 -- nvmf/common.sh@116 -- # sync 00:19:58.962 00:54:11 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:19:58.962 00:54:11 -- nvmf/common.sh@119 -- # set +e 00:19:58.962 00:54:11 -- nvmf/common.sh@120 -- # for i in {1..20} 00:19:58.962 00:54:11 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:19:58.962 rmmod nvme_tcp 00:19:58.962 rmmod nvme_fabrics 00:19:58.962 rmmod nvme_keyring 00:19:58.962 00:54:11 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:19:58.962 00:54:11 -- nvmf/common.sh@123 -- # set -e 00:19:58.962 00:54:11 -- nvmf/common.sh@124 -- # return 0 00:19:58.962 00:54:11 -- nvmf/common.sh@477 -- # '[' -n 93606 ']' 00:19:58.962 00:54:11 -- nvmf/common.sh@478 -- # killprocess 93606 
00:19:58.962 00:54:11 -- common/autotest_common.sh@936 -- # '[' -z 93606 ']' 00:19:58.962 00:54:11 -- common/autotest_common.sh@940 -- # kill -0 93606 00:19:58.962 00:54:11 -- common/autotest_common.sh@941 -- # uname 00:19:58.962 00:54:11 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:19:58.962 00:54:11 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 93606 00:19:58.962 killing process with pid 93606 00:19:58.962 00:54:11 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:19:58.962 00:54:11 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:19:58.962 00:54:11 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 93606' 00:19:58.962 00:54:11 -- common/autotest_common.sh@955 -- # kill 93606 00:19:58.962 [2024-12-03 00:54:11.295300] app.c: 883:log_deprecation_hits: *WARNING*: rpc_nvmf_get_subsystems: deprecation 'listener.transport is deprecated in favor of trtype' scheduled for removal in v24.05 hit 1 times 00:19:58.962 00:54:11 -- common/autotest_common.sh@960 -- # wait 93606 00:19:59.221 00:54:11 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:19:59.221 00:54:11 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:19:59.221 00:54:11 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:19:59.221 00:54:11 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:19:59.221 00:54:11 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:19:59.221 00:54:11 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:59.221 00:54:11 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:59.221 00:54:11 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:59.221 00:54:11 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:19:59.221 00:19:59.221 real 0m2.740s 00:19:59.221 user 0m7.870s 00:19:59.221 sys 0m0.717s 00:19:59.221 00:54:11 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:19:59.221 00:54:11 -- common/autotest_common.sh@10 -- # set +x 00:19:59.221 ************************************ 00:19:59.221 END TEST nvmf_identify 00:19:59.222 ************************************ 00:19:59.222 00:54:11 -- nvmf/nvmf.sh@98 -- # run_test nvmf_perf /home/vagrant/spdk_repo/spdk/test/nvmf/host/perf.sh --transport=tcp 00:19:59.222 00:54:11 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:19:59.222 00:54:11 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:19:59.222 00:54:11 -- common/autotest_common.sh@10 -- # set +x 00:19:59.222 ************************************ 00:19:59.222 START TEST nvmf_perf 00:19:59.222 ************************************ 00:19:59.222 00:54:11 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/perf.sh --transport=tcp 00:19:59.222 * Looking for test storage... 
00:19:59.222 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:19:59.222 00:54:11 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:19:59.222 00:54:11 -- common/autotest_common.sh@1690 -- # lcov --version 00:19:59.222 00:54:11 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:19:59.481 00:54:11 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:19:59.481 00:54:11 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:19:59.481 00:54:11 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:19:59.481 00:54:11 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:19:59.481 00:54:11 -- scripts/common.sh@335 -- # IFS=.-: 00:19:59.481 00:54:11 -- scripts/common.sh@335 -- # read -ra ver1 00:19:59.481 00:54:11 -- scripts/common.sh@336 -- # IFS=.-: 00:19:59.481 00:54:11 -- scripts/common.sh@336 -- # read -ra ver2 00:19:59.481 00:54:11 -- scripts/common.sh@337 -- # local 'op=<' 00:19:59.481 00:54:11 -- scripts/common.sh@339 -- # ver1_l=2 00:19:59.481 00:54:11 -- scripts/common.sh@340 -- # ver2_l=1 00:19:59.481 00:54:11 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:19:59.481 00:54:11 -- scripts/common.sh@343 -- # case "$op" in 00:19:59.481 00:54:11 -- scripts/common.sh@344 -- # : 1 00:19:59.481 00:54:11 -- scripts/common.sh@363 -- # (( v = 0 )) 00:19:59.481 00:54:11 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:19:59.481 00:54:11 -- scripts/common.sh@364 -- # decimal 1 00:19:59.481 00:54:11 -- scripts/common.sh@352 -- # local d=1 00:19:59.481 00:54:11 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:59.481 00:54:11 -- scripts/common.sh@354 -- # echo 1 00:19:59.481 00:54:11 -- scripts/common.sh@364 -- # ver1[v]=1 00:19:59.481 00:54:11 -- scripts/common.sh@365 -- # decimal 2 00:19:59.481 00:54:11 -- scripts/common.sh@352 -- # local d=2 00:19:59.481 00:54:11 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:19:59.481 00:54:11 -- scripts/common.sh@354 -- # echo 2 00:19:59.481 00:54:11 -- scripts/common.sh@365 -- # ver2[v]=2 00:19:59.481 00:54:11 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:19:59.481 00:54:11 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:19:59.481 00:54:11 -- scripts/common.sh@367 -- # return 0 00:19:59.481 00:54:11 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:19:59.481 00:54:11 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:19:59.481 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:59.481 --rc genhtml_branch_coverage=1 00:19:59.481 --rc genhtml_function_coverage=1 00:19:59.481 --rc genhtml_legend=1 00:19:59.481 --rc geninfo_all_blocks=1 00:19:59.481 --rc geninfo_unexecuted_blocks=1 00:19:59.481 00:19:59.481 ' 00:19:59.481 00:54:11 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:19:59.481 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:59.481 --rc genhtml_branch_coverage=1 00:19:59.481 --rc genhtml_function_coverage=1 00:19:59.481 --rc genhtml_legend=1 00:19:59.481 --rc geninfo_all_blocks=1 00:19:59.481 --rc geninfo_unexecuted_blocks=1 00:19:59.481 00:19:59.481 ' 00:19:59.481 00:54:11 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:19:59.481 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:59.481 --rc genhtml_branch_coverage=1 00:19:59.481 --rc genhtml_function_coverage=1 00:19:59.481 --rc genhtml_legend=1 00:19:59.481 --rc geninfo_all_blocks=1 00:19:59.481 --rc geninfo_unexecuted_blocks=1 00:19:59.481 00:19:59.481 ' 00:19:59.481 
00:54:11 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:19:59.481 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:59.481 --rc genhtml_branch_coverage=1 00:19:59.481 --rc genhtml_function_coverage=1 00:19:59.481 --rc genhtml_legend=1 00:19:59.481 --rc geninfo_all_blocks=1 00:19:59.481 --rc geninfo_unexecuted_blocks=1 00:19:59.481 00:19:59.481 ' 00:19:59.482 00:54:11 -- host/perf.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:19:59.482 00:54:11 -- nvmf/common.sh@7 -- # uname -s 00:19:59.482 00:54:11 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:59.482 00:54:11 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:59.482 00:54:11 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:59.482 00:54:11 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:59.482 00:54:11 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:59.482 00:54:11 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:59.482 00:54:11 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:59.482 00:54:11 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:59.482 00:54:11 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:59.482 00:54:11 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:59.482 00:54:11 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:15939434-fa82-47b6-ae7c-b62ba203cee8 00:19:59.482 00:54:11 -- nvmf/common.sh@18 -- # NVME_HOSTID=15939434-fa82-47b6-ae7c-b62ba203cee8 00:19:59.482 00:54:11 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:59.482 00:54:11 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:59.482 00:54:11 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:19:59.482 00:54:11 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:19:59.482 00:54:11 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:59.482 00:54:11 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:59.482 00:54:11 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:59.482 00:54:11 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:59.482 00:54:11 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:59.482 00:54:11 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:59.482 00:54:11 -- paths/export.sh@5 -- # export PATH 00:19:59.482 00:54:11 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:59.482 00:54:11 -- nvmf/common.sh@46 -- # : 0 00:19:59.482 00:54:11 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:19:59.482 00:54:11 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:19:59.482 00:54:11 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:19:59.482 00:54:11 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:59.482 00:54:11 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:59.482 00:54:11 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:19:59.482 00:54:11 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:19:59.482 00:54:11 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:19:59.482 00:54:11 -- host/perf.sh@12 -- # MALLOC_BDEV_SIZE=64 00:19:59.482 00:54:11 -- host/perf.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:19:59.482 00:54:11 -- host/perf.sh@15 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:19:59.482 00:54:11 -- host/perf.sh@17 -- # nvmftestinit 00:19:59.482 00:54:11 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:19:59.482 00:54:11 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:59.482 00:54:11 -- nvmf/common.sh@436 -- # prepare_net_devs 00:19:59.482 00:54:11 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:19:59.482 00:54:11 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:19:59.482 00:54:11 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:59.482 00:54:11 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:59.482 00:54:11 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:59.482 00:54:11 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:19:59.482 00:54:11 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:19:59.482 00:54:11 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:19:59.482 00:54:11 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:19:59.482 00:54:11 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:19:59.482 00:54:11 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:19:59.482 00:54:11 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:59.482 00:54:11 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:59.482 00:54:11 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:19:59.482 00:54:11 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:19:59.482 00:54:11 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:19:59.482 00:54:11 -- nvmf/common.sh@145 -- # 
NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:19:59.482 00:54:11 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:19:59.482 00:54:11 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:59.482 00:54:11 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:19:59.482 00:54:11 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:19:59.482 00:54:11 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:19:59.482 00:54:11 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:19:59.482 00:54:11 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:19:59.482 00:54:11 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:19:59.482 Cannot find device "nvmf_tgt_br" 00:19:59.482 00:54:11 -- nvmf/common.sh@154 -- # true 00:19:59.482 00:54:11 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:19:59.482 Cannot find device "nvmf_tgt_br2" 00:19:59.482 00:54:11 -- nvmf/common.sh@155 -- # true 00:19:59.482 00:54:11 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:19:59.482 00:54:11 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:19:59.482 Cannot find device "nvmf_tgt_br" 00:19:59.482 00:54:11 -- nvmf/common.sh@157 -- # true 00:19:59.482 00:54:11 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:19:59.482 Cannot find device "nvmf_tgt_br2" 00:19:59.482 00:54:11 -- nvmf/common.sh@158 -- # true 00:19:59.482 00:54:11 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:19:59.482 00:54:11 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:19:59.482 00:54:11 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:19:59.482 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:19:59.482 00:54:11 -- nvmf/common.sh@161 -- # true 00:19:59.482 00:54:11 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:19:59.482 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:19:59.482 00:54:11 -- nvmf/common.sh@162 -- # true 00:19:59.482 00:54:11 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:19:59.741 00:54:11 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:19:59.741 00:54:12 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:19:59.741 00:54:12 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:19:59.741 00:54:12 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:19:59.741 00:54:12 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:19:59.741 00:54:12 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:19:59.741 00:54:12 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:19:59.741 00:54:12 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:19:59.741 00:54:12 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:19:59.741 00:54:12 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:19:59.741 00:54:12 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:19:59.741 00:54:12 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:19:59.741 00:54:12 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:19:59.741 00:54:12 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 
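The nvmf_veth_init steps traced here build the virtual network the rest of the test runs on: one initiator-side veth pair (nvmf_init_if/nvmf_init_br) stays in the root namespace, two target-side pairs (nvmf_tgt_if/nvmf_tgt_br and nvmf_tgt_if2/nvmf_tgt_br2) get their inner ends moved into nvmf_tgt_ns_spdk with 10.0.0.2 and 10.0.0.3, and the bridge and iptables commands that continue just below tie the *_br ends together and open TCP port 4420. A condensed sketch of the same plumbing, with the second target pair omitted for brevity:

  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br
  ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
  ip link set nvmf_init_if up && ip link set nvmf_init_br up && ip link set nvmf_tgt_br up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip link add nvmf_br type bridge && ip link set nvmf_br up
  ip link set nvmf_init_br master nvmf_br && ip link set nvmf_tgt_br master nvmf_br
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
  iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT

The pings to 10.0.0.2, 10.0.0.3 and (from inside the namespace) 10.0.0.1 further down verify the path before the target is started.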
00:19:59.741 00:54:12 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:19:59.741 00:54:12 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:19:59.741 00:54:12 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:19:59.741 00:54:12 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:19:59.741 00:54:12 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:19:59.741 00:54:12 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:19:59.741 00:54:12 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:19:59.741 00:54:12 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:19:59.741 00:54:12 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:19:59.741 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:59.741 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.062 ms 00:19:59.741 00:19:59.741 --- 10.0.0.2 ping statistics --- 00:19:59.741 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:59.741 rtt min/avg/max/mdev = 0.062/0.062/0.062/0.000 ms 00:19:59.741 00:54:12 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:19:59.741 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:19:59.741 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.047 ms 00:19:59.741 00:19:59.741 --- 10.0.0.3 ping statistics --- 00:19:59.741 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:59.742 rtt min/avg/max/mdev = 0.047/0.047/0.047/0.000 ms 00:19:59.742 00:54:12 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:19:59.742 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:19:59.742 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.023 ms 00:19:59.742 00:19:59.742 --- 10.0.0.1 ping statistics --- 00:19:59.742 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:59.742 rtt min/avg/max/mdev = 0.023/0.023/0.023/0.000 ms 00:19:59.742 00:54:12 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:59.742 00:54:12 -- nvmf/common.sh@421 -- # return 0 00:19:59.742 00:54:12 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:19:59.742 00:54:12 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:59.742 00:54:12 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:19:59.742 00:54:12 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:19:59.742 00:54:12 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:59.742 00:54:12 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:19:59.742 00:54:12 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:19:59.742 00:54:12 -- host/perf.sh@18 -- # nvmfappstart -m 0xF 00:19:59.742 00:54:12 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:19:59.742 00:54:12 -- common/autotest_common.sh@722 -- # xtrace_disable 00:19:59.742 00:54:12 -- common/autotest_common.sh@10 -- # set +x 00:19:59.742 00:54:12 -- nvmf/common.sh@469 -- # nvmfpid=93839 00:19:59.742 00:54:12 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:19:59.742 00:54:12 -- nvmf/common.sh@470 -- # waitforlisten 93839 00:19:59.742 00:54:12 -- common/autotest_common.sh@829 -- # '[' -z 93839 ']' 00:19:59.742 00:54:12 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:59.742 00:54:12 -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:59.742 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:19:59.742 00:54:12 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:59.742 00:54:12 -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:59.742 00:54:12 -- common/autotest_common.sh@10 -- # set +x 00:20:00.001 [2024-12-03 00:54:12.298349] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:20:00.001 [2024-12-03 00:54:12.298456] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:00.001 [2024-12-03 00:54:12.442598] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:20:00.260 [2024-12-03 00:54:12.532857] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:20:00.260 [2024-12-03 00:54:12.533027] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:00.260 [2024-12-03 00:54:12.533043] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:00.260 [2024-12-03 00:54:12.533055] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:00.260 [2024-12-03 00:54:12.533407] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:20:00.260 [2024-12-03 00:54:12.533544] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:20:00.260 [2024-12-03 00:54:12.534035] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:20:00.260 [2024-12-03 00:54:12.534079] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:20:00.828 00:54:13 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:00.828 00:54:13 -- common/autotest_common.sh@862 -- # return 0 00:20:00.828 00:54:13 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:20:00.828 00:54:13 -- common/autotest_common.sh@728 -- # xtrace_disable 00:20:00.828 00:54:13 -- common/autotest_common.sh@10 -- # set +x 00:20:01.087 00:54:13 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:01.087 00:54:13 -- host/perf.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_subsystem_config 00:20:01.087 00:54:13 -- host/perf.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:20:01.345 00:54:13 -- host/perf.sh@30 -- # jq -r '.[].params | select(.name=="Nvme0").traddr' 00:20:01.345 00:54:13 -- host/perf.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_get_config bdev 00:20:01.604 00:54:14 -- host/perf.sh@30 -- # local_nvme_trid=0000:00:06.0 00:20:01.604 00:54:14 -- host/perf.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:20:02.172 00:54:14 -- host/perf.sh@31 -- # bdevs=' Malloc0' 00:20:02.172 00:54:14 -- host/perf.sh@33 -- # '[' -n 0000:00:06.0 ']' 00:20:02.172 00:54:14 -- host/perf.sh@34 -- # bdevs=' Malloc0 Nvme0n1' 00:20:02.172 00:54:14 -- host/perf.sh@37 -- # '[' tcp == rdma ']' 00:20:02.172 00:54:14 -- host/perf.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:20:02.172 [2024-12-03 00:54:14.576923] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:02.172 00:54:14 -- host/perf.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:20:02.432 00:54:14 -- 
host/perf.sh@45 -- # for bdev in $bdevs 00:20:02.432 00:54:14 -- host/perf.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:20:02.694 00:54:15 -- host/perf.sh@45 -- # for bdev in $bdevs 00:20:02.694 00:54:15 -- host/perf.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:20:02.971 00:54:15 -- host/perf.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:20:02.971 [2024-12-03 00:54:15.414413] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:02.971 00:54:15 -- host/perf.sh@49 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:20:03.277 00:54:15 -- host/perf.sh@52 -- # '[' -n 0000:00:06.0 ']' 00:20:03.277 00:54:15 -- host/perf.sh@53 -- # perf_app -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:00:06.0' 00:20:03.277 00:54:15 -- host/perf.sh@21 -- # '[' 0 -eq 1 ']' 00:20:03.277 00:54:15 -- host/perf.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:00:06.0' 00:20:04.232 Initializing NVMe Controllers 00:20:04.232 Attached to NVMe Controller at 0000:00:06.0 [1b36:0010] 00:20:04.232 Associating PCIE (0000:00:06.0) NSID 1 with lcore 0 00:20:04.232 Initialization complete. Launching workers. 00:20:04.232 ======================================================== 00:20:04.232 Latency(us) 00:20:04.232 Device Information : IOPS MiB/s Average min max 00:20:04.232 PCIE (0000:00:06.0) NSID 1 from core 0: 21156.20 82.64 1512.83 374.04 8461.69 00:20:04.232 ======================================================== 00:20:04.232 Total : 21156.20 82.64 1512.83 374.04 8461.69 00:20:04.232 00:20:04.232 00:54:16 -- host/perf.sh@56 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 1 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:20:05.610 Initializing NVMe Controllers 00:20:05.610 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:20:05.610 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:20:05.610 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:20:05.610 Initialization complete. Launching workers. 
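At this point the target side is fully assembled over JSON-RPC: the TCP transport was created, Nvme0n1 was registered through gen_nvme.sh/load_subsystem_config, Malloc0 through bdev_malloc_create, and both were attached as namespaces of cnode1 with data and discovery listeners on 10.0.0.2:4420. Condensed, with rpc.py standing in for the full scripts/rpc.py path used in the trace:

  rpc.py nvmf_create_transport -t tcp -o
  rpc.py bdev_malloc_create 64 512        # returns the bdev name, Malloc0 here
  rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1
  rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420

The PCIe table above is the local baseline for the backing device; the table below is the first fabric run (-q 1 -o 4096 -t 1 against 10.0.0.2:4420), with NSID 1 the Malloc0 ramdisk added first and NSID 2 the NVMe-backed namespace.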
00:20:05.610 ======================================================== 00:20:05.610 Latency(us) 00:20:05.610 Device Information : IOPS MiB/s Average min max 00:20:05.610 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 3580.67 13.99 279.04 99.92 7103.90 00:20:05.610 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 124.50 0.49 8095.54 4894.44 12028.77 00:20:05.610 ======================================================== 00:20:05.610 Total : 3705.17 14.47 541.69 99.92 12028.77 00:20:05.610 00:20:05.610 00:54:18 -- host/perf.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 32 -o 4096 -w randrw -M 50 -t 1 -HI -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:20:06.989 Initializing NVMe Controllers 00:20:06.989 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:20:06.989 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:20:06.989 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:20:06.989 Initialization complete. Launching workers. 00:20:06.989 ======================================================== 00:20:06.989 Latency(us) 00:20:06.989 Device Information : IOPS MiB/s Average min max 00:20:06.989 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 10503.41 41.03 3047.34 580.66 8087.70 00:20:06.989 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 2658.99 10.39 12147.18 6557.83 23017.39 00:20:06.989 ======================================================== 00:20:06.989 Total : 13162.41 51.42 4885.63 580.66 23017.39 00:20:06.989 00:20:06.989 00:54:19 -- host/perf.sh@59 -- # [[ '' == \e\8\1\0 ]] 00:20:06.989 00:54:19 -- host/perf.sh@60 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -O 16384 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:20:09.533 Initializing NVMe Controllers 00:20:09.533 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:20:09.533 Controller IO queue size 128, less than required. 00:20:09.533 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:20:09.533 Controller IO queue size 128, less than required. 00:20:09.533 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:20:09.533 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:20:09.533 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:20:09.533 Initialization complete. Launching workers. 
00:20:09.533 ======================================================== 00:20:09.533 Latency(us) 00:20:09.533 Device Information : IOPS MiB/s Average min max 00:20:09.533 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1753.89 438.47 74234.09 51807.55 116363.39 00:20:09.533 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 596.43 149.11 220178.79 93449.85 362966.46 00:20:09.533 ======================================================== 00:20:09.533 Total : 2350.32 587.58 111269.92 51807.55 362966.46 00:20:09.533 00:20:09.533 00:54:21 -- host/perf.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 36964 -O 4096 -w randrw -M 50 -t 5 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0xf -P 4 00:20:09.793 No valid NVMe controllers or AIO or URING devices found 00:20:09.793 Initializing NVMe Controllers 00:20:09.793 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:20:09.793 Controller IO queue size 128, less than required. 00:20:09.793 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:20:09.793 WARNING: IO size 36964 (-o) is not a multiple of nsid 1 sector size 512. Removing this ns from test 00:20:09.793 Controller IO queue size 128, less than required. 00:20:09.793 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:20:09.793 WARNING: IO size 36964 (-o) is not a multiple of nsid 2 sector size 4096. Removing this ns from test 00:20:09.793 WARNING: Some requested NVMe devices were skipped 00:20:09.793 00:54:22 -- host/perf.sh@65 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' --transport-stat 00:20:12.328 Initializing NVMe Controllers 00:20:12.328 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:20:12.328 Controller IO queue size 128, less than required. 00:20:12.328 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:20:12.328 Controller IO queue size 128, less than required. 00:20:12.328 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:20:12.328 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:20:12.328 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:20:12.328 Initialization complete. Launching workers. 
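This 262144-byte run was launched with --transport-stat, so the per-namespace TCP transport counters are dumped below before its latency table. Read loosely: polls is how many times the transport poll loop ran, idle_polls how many of those found nothing to do, sock_completions and nvme_completions count socket-level events versus completed NVMe commands, and submitted_requests/queued_requests split commands sent straight away from those parked waiting for a free slot. A quick worked figure from the NSID 1 block that follows:

  idle fraction ~= idle_polls / polls = 6130 / 8736 ~= 0.70

so roughly 70% of the poll iterations were idle even at queue depth 128 with 256 KiB I/O.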
00:20:12.328 00:20:12.328 ==================== 00:20:12.328 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 statistics: 00:20:12.328 TCP transport: 00:20:12.328 polls: 8736 00:20:12.328 idle_polls: 6130 00:20:12.328 sock_completions: 2606 00:20:12.328 nvme_completions: 4874 00:20:12.328 submitted_requests: 7538 00:20:12.328 queued_requests: 1 00:20:12.328 00:20:12.328 ==================== 00:20:12.328 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 statistics: 00:20:12.328 TCP transport: 00:20:12.328 polls: 13585 00:20:12.328 idle_polls: 10494 00:20:12.328 sock_completions: 3091 00:20:12.328 nvme_completions: 5624 00:20:12.328 submitted_requests: 8670 00:20:12.328 queued_requests: 1 00:20:12.328 ======================================================== 00:20:12.328 Latency(us) 00:20:12.328 Device Information : IOPS MiB/s Average min max 00:20:12.328 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1280.60 320.15 102223.17 51900.65 173709.34 00:20:12.328 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 1467.89 366.97 87432.17 44913.17 125916.28 00:20:12.328 ======================================================== 00:20:12.328 Total : 2748.49 687.12 94323.71 44913.17 173709.34 00:20:12.328 00:20:12.328 00:54:24 -- host/perf.sh@66 -- # sync 00:20:12.328 00:54:24 -- host/perf.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:20:12.587 00:54:25 -- host/perf.sh@69 -- # '[' 1 -eq 1 ']' 00:20:12.587 00:54:25 -- host/perf.sh@71 -- # '[' -n 0000:00:06.0 ']' 00:20:12.587 00:54:25 -- host/perf.sh@72 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore Nvme0n1 lvs_0 00:20:12.845 00:54:25 -- host/perf.sh@72 -- # ls_guid=4dc22065-6051-4c58-ae03-83f59b37b793 00:20:12.845 00:54:25 -- host/perf.sh@73 -- # get_lvs_free_mb 4dc22065-6051-4c58-ae03-83f59b37b793 00:20:12.845 00:54:25 -- common/autotest_common.sh@1353 -- # local lvs_uuid=4dc22065-6051-4c58-ae03-83f59b37b793 00:20:12.845 00:54:25 -- common/autotest_common.sh@1354 -- # local lvs_info 00:20:12.845 00:54:25 -- common/autotest_common.sh@1355 -- # local fc 00:20:12.845 00:54:25 -- common/autotest_common.sh@1356 -- # local cs 00:20:12.845 00:54:25 -- common/autotest_common.sh@1357 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:20:13.102 00:54:25 -- common/autotest_common.sh@1357 -- # lvs_info='[ 00:20:13.102 { 00:20:13.102 "base_bdev": "Nvme0n1", 00:20:13.102 "block_size": 4096, 00:20:13.103 "cluster_size": 4194304, 00:20:13.103 "free_clusters": 1278, 00:20:13.103 "name": "lvs_0", 00:20:13.103 "total_data_clusters": 1278, 00:20:13.103 "uuid": "4dc22065-6051-4c58-ae03-83f59b37b793" 00:20:13.103 } 00:20:13.103 ]' 00:20:13.103 00:54:25 -- common/autotest_common.sh@1358 -- # jq '.[] | select(.uuid=="4dc22065-6051-4c58-ae03-83f59b37b793") .free_clusters' 00:20:13.361 00:54:25 -- common/autotest_common.sh@1358 -- # fc=1278 00:20:13.361 00:54:25 -- common/autotest_common.sh@1359 -- # jq '.[] | select(.uuid=="4dc22065-6051-4c58-ae03-83f59b37b793") .cluster_size' 00:20:13.361 5112 00:20:13.361 00:54:25 -- common/autotest_common.sh@1359 -- # cs=4194304 00:20:13.361 00:54:25 -- common/autotest_common.sh@1362 -- # free_mb=5112 00:20:13.361 00:54:25 -- common/autotest_common.sh@1363 -- # echo 5112 00:20:13.361 00:54:25 -- host/perf.sh@77 -- # '[' 5112 -gt 20480 ']' 00:20:13.361 00:54:25 -- host/perf.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
bdev_lvol_create -u 4dc22065-6051-4c58-ae03-83f59b37b793 lbd_0 5112 00:20:13.619 00:54:25 -- host/perf.sh@80 -- # lb_guid=26915f27-0f7f-48ef-9ba8-3f86af638f38 00:20:13.619 00:54:25 -- host/perf.sh@83 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore 26915f27-0f7f-48ef-9ba8-3f86af638f38 lvs_n_0 00:20:13.878 00:54:26 -- host/perf.sh@83 -- # ls_nested_guid=4c4be4e6-86ca-4b26-bd83-8e8ca08d9ab2 00:20:13.878 00:54:26 -- host/perf.sh@84 -- # get_lvs_free_mb 4c4be4e6-86ca-4b26-bd83-8e8ca08d9ab2 00:20:13.878 00:54:26 -- common/autotest_common.sh@1353 -- # local lvs_uuid=4c4be4e6-86ca-4b26-bd83-8e8ca08d9ab2 00:20:13.878 00:54:26 -- common/autotest_common.sh@1354 -- # local lvs_info 00:20:13.878 00:54:26 -- common/autotest_common.sh@1355 -- # local fc 00:20:13.878 00:54:26 -- common/autotest_common.sh@1356 -- # local cs 00:20:13.878 00:54:26 -- common/autotest_common.sh@1357 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:20:14.136 00:54:26 -- common/autotest_common.sh@1357 -- # lvs_info='[ 00:20:14.136 { 00:20:14.136 "base_bdev": "Nvme0n1", 00:20:14.136 "block_size": 4096, 00:20:14.136 "cluster_size": 4194304, 00:20:14.136 "free_clusters": 0, 00:20:14.136 "name": "lvs_0", 00:20:14.136 "total_data_clusters": 1278, 00:20:14.136 "uuid": "4dc22065-6051-4c58-ae03-83f59b37b793" 00:20:14.136 }, 00:20:14.136 { 00:20:14.136 "base_bdev": "26915f27-0f7f-48ef-9ba8-3f86af638f38", 00:20:14.136 "block_size": 4096, 00:20:14.136 "cluster_size": 4194304, 00:20:14.136 "free_clusters": 1276, 00:20:14.136 "name": "lvs_n_0", 00:20:14.136 "total_data_clusters": 1276, 00:20:14.136 "uuid": "4c4be4e6-86ca-4b26-bd83-8e8ca08d9ab2" 00:20:14.136 } 00:20:14.136 ]' 00:20:14.136 00:54:26 -- common/autotest_common.sh@1358 -- # jq '.[] | select(.uuid=="4c4be4e6-86ca-4b26-bd83-8e8ca08d9ab2") .free_clusters' 00:20:14.136 00:54:26 -- common/autotest_common.sh@1358 -- # fc=1276 00:20:14.136 00:54:26 -- common/autotest_common.sh@1359 -- # jq '.[] | select(.uuid=="4c4be4e6-86ca-4b26-bd83-8e8ca08d9ab2") .cluster_size' 00:20:14.136 00:54:26 -- common/autotest_common.sh@1359 -- # cs=4194304 00:20:14.136 00:54:26 -- common/autotest_common.sh@1362 -- # free_mb=5104 00:20:14.136 5104 00:20:14.136 00:54:26 -- common/autotest_common.sh@1363 -- # echo 5104 00:20:14.136 00:54:26 -- host/perf.sh@85 -- # '[' 5104 -gt 20480 ']' 00:20:14.136 00:54:26 -- host/perf.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 4c4be4e6-86ca-4b26-bd83-8e8ca08d9ab2 lbd_nest_0 5104 00:20:14.395 00:54:26 -- host/perf.sh@88 -- # lb_nested_guid=da91cdce-1629-44b2-9f20-596847abf252 00:20:14.395 00:54:26 -- host/perf.sh@89 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:20:14.654 00:54:27 -- host/perf.sh@90 -- # for bdev in $lb_nested_guid 00:20:14.654 00:54:27 -- host/perf.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 da91cdce-1629-44b2-9f20-596847abf252 00:20:14.913 00:54:27 -- host/perf.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:20:15.171 00:54:27 -- host/perf.sh@95 -- # qd_depth=("1" "32" "128") 00:20:15.171 00:54:27 -- host/perf.sh@96 -- # io_size=("512" "131072") 00:20:15.171 00:54:27 -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:20:15.171 00:54:27 -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:20:15.171 00:54:27 -- host/perf.sh@99 -- # 
/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 1 -o 512 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:20:15.429 No valid NVMe controllers or AIO or URING devices found 00:20:15.430 Initializing NVMe Controllers 00:20:15.430 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:20:15.430 WARNING: controller SPDK bdev Controller (SPDK00000000000001 ) ns 1 has invalid ns size 5351931904 / block size 4096 for I/O size 512 00:20:15.430 WARNING: Some requested NVMe devices were skipped 00:20:15.430 00:54:27 -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:20:15.430 00:54:27 -- host/perf.sh@99 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 1 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:20:27.634 Initializing NVMe Controllers 00:20:27.634 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:20:27.634 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:20:27.634 Initialization complete. Launching workers. 00:20:27.634 ======================================================== 00:20:27.634 Latency(us) 00:20:27.634 Device Information : IOPS MiB/s Average min max 00:20:27.634 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 853.70 106.71 1171.00 389.45 8350.29 00:20:27.634 ======================================================== 00:20:27.634 Total : 853.70 106.71 1171.00 389.45 8350.29 00:20:27.634 00:20:27.634 00:54:38 -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:20:27.634 00:54:38 -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:20:27.634 00:54:38 -- host/perf.sh@99 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 32 -o 512 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:20:27.634 No valid NVMe controllers or AIO or URING devices found 00:20:27.634 Initializing NVMe Controllers 00:20:27.634 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:20:27.634 WARNING: controller SPDK bdev Controller (SPDK00000000000001 ) ns 1 has invalid ns size 5351931904 / block size 4096 for I/O size 512 00:20:27.634 WARNING: Some requested NVMe devices were skipped 00:20:27.634 00:54:38 -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:20:27.634 00:54:38 -- host/perf.sh@99 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 32 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:20:37.610 Initializing NVMe Controllers 00:20:37.610 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:20:37.610 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:20:37.610 Initialization complete. Launching workers. 
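While this 131072-byte pass at queue depth 32 spins up, the bare 5112 and 5104 echoed during the earlier volume setup are worth decoding: get_lvs_free_mb multiplies a store's free clusters by its cluster size, and the lvol is then created with that many MiB.

  free_mb = free_clusters x cluster_size
  lvs_0   : 1278 x 4 MiB = 5112 MiB  ->  bdev_lvol_create ... lbd_0 5112
  lvs_n_0 : 1276 x 4 MiB = 5104 MiB  ->  bdev_lvol_create ... lbd_nest_0 5104

The two-cluster drop from 1278 to 1276 is presumably the nested store's own metadata, and the net effect is that this whole sweep runs against a single ~5 GiB lvol-backed namespace rather than the raw Nvme0n1.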
00:20:37.610 ======================================================== 00:20:37.610 Latency(us) 00:20:37.610 Device Information : IOPS MiB/s Average min max 00:20:37.610 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1091.14 136.39 29378.59 8057.61 244714.53 00:20:37.610 ======================================================== 00:20:37.611 Total : 1091.14 136.39 29378.59 8057.61 244714.53 00:20:37.611 00:20:37.611 00:54:48 -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:20:37.611 00:54:48 -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:20:37.611 00:54:48 -- host/perf.sh@99 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 512 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:20:37.611 No valid NVMe controllers or AIO or URING devices found 00:20:37.611 Initializing NVMe Controllers 00:20:37.611 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:20:37.611 WARNING: controller SPDK bdev Controller (SPDK00000000000001 ) ns 1 has invalid ns size 5351931904 / block size 4096 for I/O size 512 00:20:37.611 WARNING: Some requested NVMe devices were skipped 00:20:37.611 00:54:48 -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:20:37.611 00:54:48 -- host/perf.sh@99 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:20:47.583 Initializing NVMe Controllers 00:20:47.583 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:20:47.583 Controller IO queue size 128, less than required. 00:20:47.583 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:20:47.583 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:20:47.583 Initialization complete. Launching workers. 
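The run spinning up here is the final leg of the qd_depth/io_size sweep set up at host/perf.sh@95-99. In plain shell, with spdk_nvme_perf abbreviating the full build/bin path, the loop amounts to:

  for qd in 1 32 128; do
      for o in 512 131072; do
          spdk_nvme_perf -q "$qd" -o "$o" -w randrw -M 50 -t 10 \
              -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420'
      done
  done

Every 512-byte pass is skipped with the "invalid ns size ... for I/O size 512" warning because the lvol namespace uses a 4096-byte block, so only the 131072-byte passes produce tables; the one below closes the sweep at queue depth 128.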
00:20:47.583 ======================================================== 00:20:47.583 Latency(us) 00:20:47.583 Device Information : IOPS MiB/s Average min max 00:20:47.583 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 3920.03 490.00 32724.39 11490.09 67440.53 00:20:47.583 ======================================================== 00:20:47.583 Total : 3920.03 490.00 32724.39 11490.09 67440.53 00:20:47.583 00:20:47.583 00:54:59 -- host/perf.sh@104 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:20:47.583 00:54:59 -- host/perf.sh@105 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete da91cdce-1629-44b2-9f20-596847abf252 00:20:47.583 00:54:59 -- host/perf.sh@106 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_n_0 00:20:47.583 00:55:00 -- host/perf.sh@107 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete 26915f27-0f7f-48ef-9ba8-3f86af638f38 00:20:47.842 00:55:00 -- host/perf.sh@108 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_0 00:20:48.100 00:55:00 -- host/perf.sh@112 -- # trap - SIGINT SIGTERM EXIT 00:20:48.100 00:55:00 -- host/perf.sh@114 -- # nvmftestfini 00:20:48.100 00:55:00 -- nvmf/common.sh@476 -- # nvmfcleanup 00:20:48.100 00:55:00 -- nvmf/common.sh@116 -- # sync 00:20:48.100 00:55:00 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:20:48.100 00:55:00 -- nvmf/common.sh@119 -- # set +e 00:20:48.100 00:55:00 -- nvmf/common.sh@120 -- # for i in {1..20} 00:20:48.100 00:55:00 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:20:48.100 rmmod nvme_tcp 00:20:48.100 rmmod nvme_fabrics 00:20:48.100 rmmod nvme_keyring 00:20:48.100 00:55:00 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:20:48.100 00:55:00 -- nvmf/common.sh@123 -- # set -e 00:20:48.100 00:55:00 -- nvmf/common.sh@124 -- # return 0 00:20:48.100 00:55:00 -- nvmf/common.sh@477 -- # '[' -n 93839 ']' 00:20:48.100 00:55:00 -- nvmf/common.sh@478 -- # killprocess 93839 00:20:48.100 00:55:00 -- common/autotest_common.sh@936 -- # '[' -z 93839 ']' 00:20:48.100 00:55:00 -- common/autotest_common.sh@940 -- # kill -0 93839 00:20:48.100 00:55:00 -- common/autotest_common.sh@941 -- # uname 00:20:48.100 00:55:00 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:20:48.100 00:55:00 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 93839 00:20:48.100 killing process with pid 93839 00:20:48.100 00:55:00 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:20:48.100 00:55:00 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:20:48.100 00:55:00 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 93839' 00:20:48.100 00:55:00 -- common/autotest_common.sh@955 -- # kill 93839 00:20:48.100 00:55:00 -- common/autotest_common.sh@960 -- # wait 93839 00:20:50.006 00:55:02 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:20:50.006 00:55:02 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:20:50.006 00:55:02 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:20:50.006 00:55:02 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:20:50.006 00:55:02 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:20:50.006 00:55:02 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:50.006 00:55:02 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:50.006 00:55:02 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:50.006 00:55:02 -- nvmf/common.sh@278 -- # ip 
-4 addr flush nvmf_init_if 00:20:50.006 ************************************ 00:20:50.006 END TEST nvmf_perf 00:20:50.006 ************************************ 00:20:50.006 00:20:50.006 real 0m50.573s 00:20:50.006 user 3m9.959s 00:20:50.006 sys 0m10.231s 00:20:50.006 00:55:02 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:20:50.006 00:55:02 -- common/autotest_common.sh@10 -- # set +x 00:20:50.006 00:55:02 -- nvmf/nvmf.sh@99 -- # run_test nvmf_fio_host /home/vagrant/spdk_repo/spdk/test/nvmf/host/fio.sh --transport=tcp 00:20:50.006 00:55:02 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:20:50.006 00:55:02 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:20:50.006 00:55:02 -- common/autotest_common.sh@10 -- # set +x 00:20:50.006 ************************************ 00:20:50.006 START TEST nvmf_fio_host 00:20:50.006 ************************************ 00:20:50.006 00:55:02 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/fio.sh --transport=tcp 00:20:50.006 * Looking for test storage... 00:20:50.006 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:20:50.006 00:55:02 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:20:50.006 00:55:02 -- common/autotest_common.sh@1690 -- # lcov --version 00:20:50.006 00:55:02 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:20:50.006 00:55:02 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:20:50.006 00:55:02 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:20:50.006 00:55:02 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:20:50.006 00:55:02 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:20:50.006 00:55:02 -- scripts/common.sh@335 -- # IFS=.-: 00:20:50.006 00:55:02 -- scripts/common.sh@335 -- # read -ra ver1 00:20:50.006 00:55:02 -- scripts/common.sh@336 -- # IFS=.-: 00:20:50.006 00:55:02 -- scripts/common.sh@336 -- # read -ra ver2 00:20:50.006 00:55:02 -- scripts/common.sh@337 -- # local 'op=<' 00:20:50.006 00:55:02 -- scripts/common.sh@339 -- # ver1_l=2 00:20:50.006 00:55:02 -- scripts/common.sh@340 -- # ver2_l=1 00:20:50.006 00:55:02 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:20:50.006 00:55:02 -- scripts/common.sh@343 -- # case "$op" in 00:20:50.006 00:55:02 -- scripts/common.sh@344 -- # : 1 00:20:50.006 00:55:02 -- scripts/common.sh@363 -- # (( v = 0 )) 00:20:50.006 00:55:02 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:20:50.006 00:55:02 -- scripts/common.sh@364 -- # decimal 1 00:20:50.006 00:55:02 -- scripts/common.sh@352 -- # local d=1 00:20:50.006 00:55:02 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:20:50.006 00:55:02 -- scripts/common.sh@354 -- # echo 1 00:20:50.006 00:55:02 -- scripts/common.sh@364 -- # ver1[v]=1 00:20:50.006 00:55:02 -- scripts/common.sh@365 -- # decimal 2 00:20:50.006 00:55:02 -- scripts/common.sh@352 -- # local d=2 00:20:50.006 00:55:02 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:20:50.006 00:55:02 -- scripts/common.sh@354 -- # echo 2 00:20:50.006 00:55:02 -- scripts/common.sh@365 -- # ver2[v]=2 00:20:50.006 00:55:02 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:20:50.006 00:55:02 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:20:50.006 00:55:02 -- scripts/common.sh@367 -- # return 0 00:20:50.006 00:55:02 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:20:50.006 00:55:02 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:20:50.006 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:50.006 --rc genhtml_branch_coverage=1 00:20:50.006 --rc genhtml_function_coverage=1 00:20:50.006 --rc genhtml_legend=1 00:20:50.006 --rc geninfo_all_blocks=1 00:20:50.006 --rc geninfo_unexecuted_blocks=1 00:20:50.006 00:20:50.006 ' 00:20:50.006 00:55:02 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:20:50.006 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:50.006 --rc genhtml_branch_coverage=1 00:20:50.006 --rc genhtml_function_coverage=1 00:20:50.006 --rc genhtml_legend=1 00:20:50.006 --rc geninfo_all_blocks=1 00:20:50.006 --rc geninfo_unexecuted_blocks=1 00:20:50.006 00:20:50.006 ' 00:20:50.006 00:55:02 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:20:50.006 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:50.006 --rc genhtml_branch_coverage=1 00:20:50.006 --rc genhtml_function_coverage=1 00:20:50.006 --rc genhtml_legend=1 00:20:50.006 --rc geninfo_all_blocks=1 00:20:50.006 --rc geninfo_unexecuted_blocks=1 00:20:50.006 00:20:50.006 ' 00:20:50.006 00:55:02 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:20:50.006 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:50.006 --rc genhtml_branch_coverage=1 00:20:50.006 --rc genhtml_function_coverage=1 00:20:50.006 --rc genhtml_legend=1 00:20:50.006 --rc geninfo_all_blocks=1 00:20:50.006 --rc geninfo_unexecuted_blocks=1 00:20:50.006 00:20:50.006 ' 00:20:50.006 00:55:02 -- host/fio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:20:50.006 00:55:02 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:50.006 00:55:02 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:50.006 00:55:02 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:50.006 00:55:02 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:50.006 00:55:02 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:50.007 00:55:02 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:50.007 00:55:02 -- paths/export.sh@5 -- # export PATH 00:20:50.007 00:55:02 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:50.007 00:55:02 -- host/fio.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:20:50.007 00:55:02 -- nvmf/common.sh@7 -- # uname -s 00:20:50.007 00:55:02 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:50.007 00:55:02 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:50.007 00:55:02 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:50.007 00:55:02 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:50.007 00:55:02 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:50.007 00:55:02 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:50.007 00:55:02 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:50.007 00:55:02 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:50.007 00:55:02 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:50.007 00:55:02 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:50.007 00:55:02 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:15939434-fa82-47b6-ae7c-b62ba203cee8 00:20:50.007 00:55:02 -- nvmf/common.sh@18 -- # NVME_HOSTID=15939434-fa82-47b6-ae7c-b62ba203cee8 00:20:50.007 00:55:02 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:50.007 00:55:02 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:50.007 00:55:02 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:20:50.007 00:55:02 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:20:50.007 00:55:02 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:50.007 00:55:02 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:50.007 00:55:02 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:50.007 00:55:02 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:50.007 00:55:02 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:50.007 00:55:02 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:50.007 00:55:02 -- paths/export.sh@5 -- # export PATH 00:20:50.007 00:55:02 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:50.007 00:55:02 -- nvmf/common.sh@46 -- # : 0 00:20:50.007 00:55:02 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:20:50.007 00:55:02 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:20:50.007 00:55:02 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:20:50.007 00:55:02 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:50.007 00:55:02 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:50.007 00:55:02 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:20:50.007 00:55:02 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:20:50.007 00:55:02 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:20:50.007 00:55:02 -- host/fio.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:20:50.007 00:55:02 -- host/fio.sh@14 -- # nvmftestinit 00:20:50.007 00:55:02 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:20:50.007 00:55:02 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:50.007 00:55:02 -- nvmf/common.sh@436 -- # prepare_net_devs 
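The prepare_net_devs call that starts here walks the same decision tree as before the perf test: with NET_TYPE=virt there is no physical NIC to claim, so is_hw stays no and the TCP branch falls through to nvmf_veth_init, rebuilding the namespace/veth topology from scratch for the fio host test. A rough sketch of the flow traced below (an approximation, not the literal nvmf/common.sh source):

  prepare_net_devs() {
      local -g is_hw=no
      remove_spdk_ns                      # clear any stale nvmf_tgt_ns_spdk
      if [[ $NET_TYPE == phy || $NET_TYPE == phy-fallback ]]; then
          :                               # would probe real NICs and set is_hw=yes
      fi
      if [[ $is_hw == no && $TEST_TRANSPORT == tcp ]]; then
          nvmf_veth_init                  # veth pairs, bridge, 10.0.0.x addressing
      fi
  }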
00:20:50.007 00:55:02 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:20:50.007 00:55:02 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:20:50.007 00:55:02 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:50.007 00:55:02 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:50.007 00:55:02 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:50.007 00:55:02 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:20:50.007 00:55:02 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:20:50.007 00:55:02 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:20:50.007 00:55:02 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:20:50.007 00:55:02 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:20:50.007 00:55:02 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:20:50.007 00:55:02 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:50.007 00:55:02 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:50.007 00:55:02 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:20:50.007 00:55:02 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:20:50.007 00:55:02 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:20:50.007 00:55:02 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:20:50.007 00:55:02 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:20:50.007 00:55:02 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:50.007 00:55:02 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:20:50.007 00:55:02 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:20:50.007 00:55:02 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:20:50.007 00:55:02 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:20:50.007 00:55:02 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:20:50.007 00:55:02 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:20:50.007 Cannot find device "nvmf_tgt_br" 00:20:50.007 00:55:02 -- nvmf/common.sh@154 -- # true 00:20:50.007 00:55:02 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:20:50.007 Cannot find device "nvmf_tgt_br2" 00:20:50.007 00:55:02 -- nvmf/common.sh@155 -- # true 00:20:50.007 00:55:02 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:20:50.007 00:55:02 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:20:50.007 Cannot find device "nvmf_tgt_br" 00:20:50.007 00:55:02 -- nvmf/common.sh@157 -- # true 00:20:50.007 00:55:02 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:20:50.266 Cannot find device "nvmf_tgt_br2" 00:20:50.266 00:55:02 -- nvmf/common.sh@158 -- # true 00:20:50.266 00:55:02 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:20:50.266 00:55:02 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:20:50.266 00:55:02 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:20:50.266 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:20:50.266 00:55:02 -- nvmf/common.sh@161 -- # true 00:20:50.266 00:55:02 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:20:50.266 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:20:50.266 00:55:02 -- nvmf/common.sh@162 -- # true 00:20:50.266 00:55:02 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:20:50.266 00:55:02 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:20:50.266 00:55:02 
-- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:20:50.266 00:55:02 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:20:50.266 00:55:02 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:20:50.266 00:55:02 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:20:50.266 00:55:02 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:20:50.266 00:55:02 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:20:50.266 00:55:02 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:20:50.266 00:55:02 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:20:50.266 00:55:02 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:20:50.266 00:55:02 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:20:50.266 00:55:02 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:20:50.266 00:55:02 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:20:50.266 00:55:02 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:20:50.266 00:55:02 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:20:50.266 00:55:02 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:20:50.266 00:55:02 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:20:50.266 00:55:02 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:20:50.266 00:55:02 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:20:50.266 00:55:02 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:20:50.266 00:55:02 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:20:50.266 00:55:02 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:20:50.266 00:55:02 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:20:50.266 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:50.266 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.096 ms 00:20:50.266 00:20:50.266 --- 10.0.0.2 ping statistics --- 00:20:50.266 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:50.266 rtt min/avg/max/mdev = 0.096/0.096/0.096/0.000 ms 00:20:50.266 00:55:02 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:20:50.266 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:20:50.266 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.041 ms 00:20:50.266 00:20:50.266 --- 10.0.0.3 ping statistics --- 00:20:50.266 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:50.266 rtt min/avg/max/mdev = 0.041/0.041/0.041/0.000 ms 00:20:50.266 00:55:02 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:20:50.266 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:20:50.266 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.026 ms 00:20:50.266 00:20:50.266 --- 10.0.0.1 ping statistics --- 00:20:50.266 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:50.266 rtt min/avg/max/mdev = 0.026/0.026/0.026/0.000 ms 00:20:50.266 00:55:02 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:50.266 00:55:02 -- nvmf/common.sh@421 -- # return 0 00:20:50.266 00:55:02 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:20:50.266 00:55:02 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:50.266 00:55:02 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:20:50.266 00:55:02 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:20:50.266 00:55:02 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:50.266 00:55:02 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:20:50.266 00:55:02 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:20:50.266 00:55:02 -- host/fio.sh@16 -- # [[ y != y ]] 00:20:50.266 00:55:02 -- host/fio.sh@21 -- # timing_enter start_nvmf_tgt 00:20:50.266 00:55:02 -- common/autotest_common.sh@722 -- # xtrace_disable 00:20:50.266 00:55:02 -- common/autotest_common.sh@10 -- # set +x 00:20:50.525 00:55:02 -- host/fio.sh@24 -- # nvmfpid=94819 00:20:50.525 00:55:02 -- host/fio.sh@26 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:20:50.525 00:55:02 -- host/fio.sh@23 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:20:50.525 00:55:02 -- host/fio.sh@28 -- # waitforlisten 94819 00:20:50.525 00:55:02 -- common/autotest_common.sh@829 -- # '[' -z 94819 ']' 00:20:50.525 00:55:02 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:50.525 00:55:02 -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:50.525 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:50.525 00:55:02 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:50.525 00:55:02 -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:50.525 00:55:02 -- common/autotest_common.sh@10 -- # set +x 00:20:50.525 [2024-12-03 00:55:02.839570] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:20:50.525 [2024-12-03 00:55:02.839659] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:50.525 [2024-12-03 00:55:02.979690] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:20:50.784 [2024-12-03 00:55:03.058587] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:20:50.784 [2024-12-03 00:55:03.058737] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:50.784 [2024-12-03 00:55:03.058753] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:50.784 [2024-12-03 00:55:03.058761] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
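The nvmf_veth_init sequence traced above builds the isolated NVMe/TCP test network this job runs on: the target side lives in the nvmf_tgt_ns_spdk namespace with 10.0.0.2 and 10.0.0.3, the initiator keeps 10.0.0.1 on nvmf_init_if, the three host-side veth peers are enslaved to the nvmf_br bridge, and iptables admits TCP port 4420. Condensed into a standalone sketch (run as root; iproute2 and iptables assumed, interface names exactly as in the log):

# Recreate the nvmf_veth_init topology (run as root).
ip netns add nvmf_tgt_ns_spdk

# Three veth pairs: one initiator-side, two target-side.
ip link add nvmf_init_if type veth peer name nvmf_init_br
ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2

# Move the target ends into the namespace and assign addresses.
ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2

# Bring everything up and bridge the host-side peers together.
ip link set nvmf_init_if up
ip link set nvmf_init_br up
ip link set nvmf_tgt_br up
ip link set nvmf_tgt_br2 up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up
ip link add nvmf_br type bridge
ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br  master nvmf_br
ip link set nvmf_tgt_br2 master nvmf_br

# Allow NVMe/TCP traffic in and verify reachability in both directions.
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
ping -c 1 10.0.0.2
ping -c 1 10.0.0.3
ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1

The three pings are the same reachability check the script performs before it launches nvmf_tgt inside the namespace, as recorded below.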
00:20:50.784 [2024-12-03 00:55:03.059637] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:20:50.784 [2024-12-03 00:55:03.059734] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:20:50.784 [2024-12-03 00:55:03.059837] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:20:50.784 [2024-12-03 00:55:03.059843] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:20:51.351 00:55:03 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:51.351 00:55:03 -- common/autotest_common.sh@862 -- # return 0 00:20:51.351 00:55:03 -- host/fio.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:20:51.610 [2024-12-03 00:55:04.048506] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:51.610 00:55:04 -- host/fio.sh@30 -- # timing_exit start_nvmf_tgt 00:20:51.610 00:55:04 -- common/autotest_common.sh@728 -- # xtrace_disable 00:20:51.610 00:55:04 -- common/autotest_common.sh@10 -- # set +x 00:20:51.867 00:55:04 -- host/fio.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:20:51.867 Malloc1 00:20:51.867 00:55:04 -- host/fio.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:20:52.432 00:55:04 -- host/fio.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:20:52.432 00:55:04 -- host/fio.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:20:52.700 [2024-12-03 00:55:05.157619] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:52.700 00:55:05 -- host/fio.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:20:52.961 00:55:05 -- host/fio.sh@38 -- # PLUGIN_DIR=/home/vagrant/spdk_repo/spdk/app/fio/nvme 00:20:52.961 00:55:05 -- host/fio.sh@41 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:20:52.961 00:55:05 -- common/autotest_common.sh@1349 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:20:52.961 00:55:05 -- common/autotest_common.sh@1326 -- # local fio_dir=/usr/src/fio 00:20:52.961 00:55:05 -- common/autotest_common.sh@1328 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:20:52.961 00:55:05 -- common/autotest_common.sh@1328 -- # local sanitizers 00:20:52.961 00:55:05 -- common/autotest_common.sh@1329 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:20:52.961 00:55:05 -- common/autotest_common.sh@1330 -- # shift 00:20:52.961 00:55:05 -- common/autotest_common.sh@1332 -- # local asan_lib= 00:20:52.961 00:55:05 -- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:20:52.961 00:55:05 -- common/autotest_common.sh@1334 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:20:52.961 00:55:05 -- common/autotest_common.sh@1334 -- # grep libasan 00:20:52.961 00:55:05 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:20:52.961 00:55:05 -- common/autotest_common.sh@1334 -- # asan_lib= 00:20:52.961 00:55:05 -- common/autotest_common.sh@1335 -- # [[ -n '' ]] 00:20:52.961 00:55:05 -- 
common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:20:52.961 00:55:05 -- common/autotest_common.sh@1334 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:20:52.961 00:55:05 -- common/autotest_common.sh@1334 -- # grep libclang_rt.asan 00:20:52.961 00:55:05 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:20:52.961 00:55:05 -- common/autotest_common.sh@1334 -- # asan_lib= 00:20:52.961 00:55:05 -- common/autotest_common.sh@1335 -- # [[ -n '' ]] 00:20:52.961 00:55:05 -- common/autotest_common.sh@1341 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:20:52.961 00:55:05 -- common/autotest_common.sh@1341 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:20:53.219 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:20:53.219 fio-3.35 00:20:53.219 Starting 1 thread 00:20:55.749 00:20:55.749 test: (groupid=0, jobs=1): err= 0: pid=94945: Tue Dec 3 00:55:07 2024 00:20:55.749 read: IOPS=10.2k, BW=39.8MiB/s (41.7MB/s)(79.8MiB/2006msec) 00:20:55.749 slat (nsec): min=1756, max=372125, avg=2359.33, stdev=3494.66 00:20:55.749 clat (usec): min=3536, max=11164, avg=6660.30, stdev=594.41 00:20:55.749 lat (usec): min=3580, max=11166, avg=6662.66, stdev=594.44 00:20:55.749 clat percentiles (usec): 00:20:55.749 | 1.00th=[ 5473], 5.00th=[ 5800], 10.00th=[ 5997], 20.00th=[ 6194], 00:20:55.749 | 30.00th=[ 6325], 40.00th=[ 6456], 50.00th=[ 6652], 60.00th=[ 6783], 00:20:55.749 | 70.00th=[ 6915], 80.00th=[ 7111], 90.00th=[ 7373], 95.00th=[ 7701], 00:20:55.749 | 99.00th=[ 8356], 99.50th=[ 8586], 99.90th=[ 9896], 99.95th=[10814], 00:20:55.749 | 99.99th=[11076] 00:20:55.749 bw ( KiB/s): min=39232, max=41512, per=99.96%, avg=40718.00, stdev=1036.99, samples=4 00:20:55.749 iops : min= 9808, max=10378, avg=10179.50, stdev=259.25, samples=4 00:20:55.749 write: IOPS=10.2k, BW=39.8MiB/s (41.8MB/s)(79.9MiB/2006msec); 0 zone resets 00:20:55.749 slat (nsec): min=1893, max=252035, avg=2515.37, stdev=2433.68 00:20:55.749 clat (usec): min=2543, max=11120, avg=5846.05, stdev=502.67 00:20:55.749 lat (usec): min=2557, max=11122, avg=5848.57, stdev=502.70 00:20:55.749 clat percentiles (usec): 00:20:55.749 | 1.00th=[ 4817], 5.00th=[ 5080], 10.00th=[ 5276], 20.00th=[ 5473], 00:20:55.749 | 30.00th=[ 5604], 40.00th=[ 5735], 50.00th=[ 5800], 60.00th=[ 5932], 00:20:55.749 | 70.00th=[ 6063], 80.00th=[ 6194], 90.00th=[ 6456], 95.00th=[ 6652], 00:20:55.749 | 99.00th=[ 7242], 99.50th=[ 7504], 99.90th=[ 8848], 99.95th=[10159], 00:20:55.749 | 99.99th=[11076] 00:20:55.749 bw ( KiB/s): min=39704, max=41696, per=100.00%, avg=40782.00, stdev=838.90, samples=4 00:20:55.749 iops : min= 9926, max=10424, avg=10195.50, stdev=209.73, samples=4 00:20:55.749 lat (msec) : 4=0.06%, 10=99.87%, 20=0.08% 00:20:55.749 cpu : usr=64.99%, sys=25.29%, ctx=6, majf=0, minf=5 00:20:55.749 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:20:55.749 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:55.749 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:20:55.749 issued rwts: total=20429,20448,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:55.749 latency : target=0, window=0, percentile=100.00%, depth=128 00:20:55.749 00:20:55.749 Run status group 0 (all jobs): 00:20:55.750 READ: bw=39.8MiB/s (41.7MB/s), 39.8MiB/s-39.8MiB/s (41.7MB/s-41.7MB/s), io=79.8MiB (83.7MB), 
run=2006-2006msec 00:20:55.750 WRITE: bw=39.8MiB/s (41.8MB/s), 39.8MiB/s-39.8MiB/s (41.8MB/s-41.8MB/s), io=79.9MiB (83.8MB), run=2006-2006msec 00:20:55.750 00:55:07 -- host/fio.sh@45 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:20:55.750 00:55:07 -- common/autotest_common.sh@1349 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:20:55.750 00:55:07 -- common/autotest_common.sh@1326 -- # local fio_dir=/usr/src/fio 00:20:55.750 00:55:07 -- common/autotest_common.sh@1328 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:20:55.750 00:55:07 -- common/autotest_common.sh@1328 -- # local sanitizers 00:20:55.750 00:55:07 -- common/autotest_common.sh@1329 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:20:55.750 00:55:07 -- common/autotest_common.sh@1330 -- # shift 00:20:55.750 00:55:07 -- common/autotest_common.sh@1332 -- # local asan_lib= 00:20:55.750 00:55:07 -- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:20:55.750 00:55:07 -- common/autotest_common.sh@1334 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:20:55.750 00:55:07 -- common/autotest_common.sh@1334 -- # grep libasan 00:20:55.750 00:55:07 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:20:55.750 00:55:07 -- common/autotest_common.sh@1334 -- # asan_lib= 00:20:55.750 00:55:07 -- common/autotest_common.sh@1335 -- # [[ -n '' ]] 00:20:55.750 00:55:07 -- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:20:55.750 00:55:07 -- common/autotest_common.sh@1334 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:20:55.750 00:55:07 -- common/autotest_common.sh@1334 -- # grep libclang_rt.asan 00:20:55.750 00:55:07 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:20:55.750 00:55:07 -- common/autotest_common.sh@1334 -- # asan_lib= 00:20:55.750 00:55:07 -- common/autotest_common.sh@1335 -- # [[ -n '' ]] 00:20:55.750 00:55:07 -- common/autotest_common.sh@1341 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:20:55.750 00:55:07 -- common/autotest_common.sh@1341 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:20:55.750 test: (g=0): rw=randrw, bs=(R) 16.0KiB-16.0KiB, (W) 16.0KiB-16.0KiB, (T) 16.0KiB-16.0KiB, ioengine=spdk, iodepth=128 00:20:55.750 fio-3.35 00:20:55.750 Starting 1 thread 00:20:58.326 00:20:58.326 test: (groupid=0, jobs=1): err= 0: pid=94988: Tue Dec 3 00:55:10 2024 00:20:58.326 read: IOPS=8652, BW=135MiB/s (142MB/s)(271MiB/2008msec) 00:20:58.326 slat (usec): min=2, max=101, avg= 3.45, stdev= 2.53 00:20:58.326 clat (usec): min=2124, max=17106, avg=8762.06, stdev=2144.59 00:20:58.326 lat (usec): min=2127, max=17111, avg=8765.51, stdev=2144.75 00:20:58.326 clat percentiles (usec): 00:20:58.326 | 1.00th=[ 4555], 5.00th=[ 5538], 10.00th=[ 6128], 20.00th=[ 6915], 00:20:58.326 | 30.00th=[ 7504], 40.00th=[ 8094], 50.00th=[ 8717], 60.00th=[ 9241], 00:20:58.326 | 70.00th=[ 9896], 80.00th=[10421], 90.00th=[11338], 95.00th=[12518], 00:20:58.326 | 99.00th=[14877], 99.50th=[15533], 99.90th=[16712], 99.95th=[16909], 00:20:58.326 | 99.99th=[16909] 00:20:58.326 bw ( KiB/s): min=66976, max=76704, per=52.25%, avg=72336.00, stdev=4347.02, samples=4 00:20:58.326 iops : 
min= 4186, max= 4794, avg=4521.00, stdev=271.69, samples=4 00:20:58.326 write: IOPS=5231, BW=81.7MiB/s (85.7MB/s)(147MiB/1801msec); 0 zone resets 00:20:58.326 slat (usec): min=29, max=207, avg=34.62, stdev= 8.89 00:20:58.326 clat (usec): min=2272, max=17111, avg=10375.54, stdev=1825.60 00:20:58.326 lat (usec): min=2303, max=17169, avg=10410.16, stdev=1827.38 00:20:58.326 clat percentiles (usec): 00:20:58.326 | 1.00th=[ 6652], 5.00th=[ 7898], 10.00th=[ 8291], 20.00th=[ 8848], 00:20:58.326 | 30.00th=[ 9372], 40.00th=[ 9765], 50.00th=[10159], 60.00th=[10683], 00:20:58.326 | 70.00th=[11076], 80.00th=[11731], 90.00th=[12780], 95.00th=[13698], 00:20:58.326 | 99.00th=[15533], 99.50th=[15795], 99.90th=[16581], 99.95th=[16909], 00:20:58.326 | 99.99th=[17171] 00:20:58.326 bw ( KiB/s): min=68992, max=81344, per=90.05%, avg=75376.00, stdev=5708.66, samples=4 00:20:58.326 iops : min= 4312, max= 5084, avg=4711.00, stdev=356.79, samples=4 00:20:58.326 lat (msec) : 4=0.39%, 10=63.20%, 20=36.42% 00:20:58.326 cpu : usr=67.02%, sys=21.13%, ctx=18, majf=0, minf=2 00:20:58.326 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.7%, >=64=98.6% 00:20:58.326 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:58.326 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:20:58.326 issued rwts: total=17374,9422,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:58.326 latency : target=0, window=0, percentile=100.00%, depth=128 00:20:58.326 00:20:58.326 Run status group 0 (all jobs): 00:20:58.326 READ: bw=135MiB/s (142MB/s), 135MiB/s-135MiB/s (142MB/s-142MB/s), io=271MiB (285MB), run=2008-2008msec 00:20:58.326 WRITE: bw=81.7MiB/s (85.7MB/s), 81.7MiB/s-81.7MiB/s (85.7MB/s-85.7MB/s), io=147MiB (154MB), run=1801-1801msec 00:20:58.326 00:55:10 -- host/fio.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:20:58.326 00:55:10 -- host/fio.sh@49 -- # '[' 1 -eq 1 ']' 00:20:58.326 00:55:10 -- host/fio.sh@51 -- # bdfs=($(get_nvme_bdfs)) 00:20:58.326 00:55:10 -- host/fio.sh@51 -- # get_nvme_bdfs 00:20:58.326 00:55:10 -- common/autotest_common.sh@1508 -- # bdfs=() 00:20:58.326 00:55:10 -- common/autotest_common.sh@1508 -- # local bdfs 00:20:58.326 00:55:10 -- common/autotest_common.sh@1509 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:20:58.326 00:55:10 -- common/autotest_common.sh@1509 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:20:58.326 00:55:10 -- common/autotest_common.sh@1509 -- # jq -r '.config[].params.traddr' 00:20:58.326 00:55:10 -- common/autotest_common.sh@1510 -- # (( 2 == 0 )) 00:20:58.326 00:55:10 -- common/autotest_common.sh@1514 -- # printf '%s\n' 0000:00:06.0 0000:00:07.0 00:20:58.326 00:55:10 -- host/fio.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:00:06.0 -i 10.0.0.2 00:20:58.585 Nvme0n1 00:20:58.585 00:55:10 -- host/fio.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore -c 1073741824 Nvme0n1 lvs_0 00:20:58.843 00:55:11 -- host/fio.sh@53 -- # ls_guid=f812bcdd-8d30-4daa-95bb-d3c2167aa09e 00:20:58.843 00:55:11 -- host/fio.sh@54 -- # get_lvs_free_mb f812bcdd-8d30-4daa-95bb-d3c2167aa09e 00:20:58.843 00:55:11 -- common/autotest_common.sh@1353 -- # local lvs_uuid=f812bcdd-8d30-4daa-95bb-d3c2167aa09e 00:20:58.843 00:55:11 -- common/autotest_common.sh@1354 -- # local lvs_info 00:20:58.843 00:55:11 -- common/autotest_common.sh@1355 -- # local fc 00:20:58.843 00:55:11 -- 
common/autotest_common.sh@1356 -- # local cs 00:20:58.843 00:55:11 -- common/autotest_common.sh@1357 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:20:59.100 00:55:11 -- common/autotest_common.sh@1357 -- # lvs_info='[ 00:20:59.100 { 00:20:59.101 "base_bdev": "Nvme0n1", 00:20:59.101 "block_size": 4096, 00:20:59.101 "cluster_size": 1073741824, 00:20:59.101 "free_clusters": 4, 00:20:59.101 "name": "lvs_0", 00:20:59.101 "total_data_clusters": 4, 00:20:59.101 "uuid": "f812bcdd-8d30-4daa-95bb-d3c2167aa09e" 00:20:59.101 } 00:20:59.101 ]' 00:20:59.101 00:55:11 -- common/autotest_common.sh@1358 -- # jq '.[] | select(.uuid=="f812bcdd-8d30-4daa-95bb-d3c2167aa09e") .free_clusters' 00:20:59.101 00:55:11 -- common/autotest_common.sh@1358 -- # fc=4 00:20:59.101 00:55:11 -- common/autotest_common.sh@1359 -- # jq '.[] | select(.uuid=="f812bcdd-8d30-4daa-95bb-d3c2167aa09e") .cluster_size' 00:20:59.101 4096 00:20:59.101 00:55:11 -- common/autotest_common.sh@1359 -- # cs=1073741824 00:20:59.101 00:55:11 -- common/autotest_common.sh@1362 -- # free_mb=4096 00:20:59.101 00:55:11 -- common/autotest_common.sh@1363 -- # echo 4096 00:20:59.101 00:55:11 -- host/fio.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -l lvs_0 lbd_0 4096 00:20:59.358 54ae907f-1780-4605-85bd-39f441d4aa1a 00:20:59.616 00:55:11 -- host/fio.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000001 00:20:59.874 00:55:12 -- host/fio.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 lvs_0/lbd_0 00:20:59.874 00:55:12 -- host/fio.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:21:00.132 00:55:12 -- host/fio.sh@59 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:21:00.132 00:55:12 -- common/autotest_common.sh@1349 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:21:00.132 00:55:12 -- common/autotest_common.sh@1326 -- # local fio_dir=/usr/src/fio 00:21:00.132 00:55:12 -- common/autotest_common.sh@1328 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:21:00.132 00:55:12 -- common/autotest_common.sh@1328 -- # local sanitizers 00:21:00.132 00:55:12 -- common/autotest_common.sh@1329 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:21:00.132 00:55:12 -- common/autotest_common.sh@1330 -- # shift 00:21:00.132 00:55:12 -- common/autotest_common.sh@1332 -- # local asan_lib= 00:21:00.132 00:55:12 -- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:21:00.132 00:55:12 -- common/autotest_common.sh@1334 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:21:00.132 00:55:12 -- common/autotest_common.sh@1334 -- # grep libasan 00:21:00.132 00:55:12 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:21:00.391 00:55:12 -- common/autotest_common.sh@1334 -- # asan_lib= 00:21:00.391 00:55:12 -- common/autotest_common.sh@1335 -- # [[ -n '' ]] 00:21:00.391 00:55:12 -- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:21:00.391 00:55:12 -- common/autotest_common.sh@1334 -- # grep libclang_rt.asan 00:21:00.391 00:55:12 -- common/autotest_common.sh@1334 -- # ldd 
/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:21:00.391 00:55:12 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:21:00.391 00:55:12 -- common/autotest_common.sh@1334 -- # asan_lib= 00:21:00.391 00:55:12 -- common/autotest_common.sh@1335 -- # [[ -n '' ]] 00:21:00.391 00:55:12 -- common/autotest_common.sh@1341 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:21:00.391 00:55:12 -- common/autotest_common.sh@1341 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:21:00.391 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:21:00.391 fio-3.35 00:21:00.391 Starting 1 thread 00:21:02.923 00:21:02.923 test: (groupid=0, jobs=1): err= 0: pid=95146: Tue Dec 3 00:55:15 2024 00:21:02.923 read: IOPS=6318, BW=24.7MiB/s (25.9MB/s)(49.6MiB/2009msec) 00:21:02.923 slat (nsec): min=1816, max=337535, avg=2919.25, stdev=4482.12 00:21:02.923 clat (usec): min=4768, max=18598, avg=10662.36, stdev=1022.94 00:21:02.923 lat (usec): min=4778, max=18601, avg=10665.28, stdev=1022.71 00:21:02.923 clat percentiles (usec): 00:21:02.923 | 1.00th=[ 8455], 5.00th=[ 9110], 10.00th=[ 9503], 20.00th=[ 9896], 00:21:02.923 | 30.00th=[10159], 40.00th=[10421], 50.00th=[10683], 60.00th=[10814], 00:21:02.923 | 70.00th=[11076], 80.00th=[11469], 90.00th=[11994], 95.00th=[12387], 00:21:02.923 | 99.00th=[13173], 99.50th=[13566], 99.90th=[16712], 99.95th=[17957], 00:21:02.923 | 99.99th=[18482] 00:21:02.923 bw ( KiB/s): min=23960, max=26008, per=99.97%, avg=25266.00, stdev=897.27, samples=4 00:21:02.923 iops : min= 5990, max= 6502, avg=6316.50, stdev=224.32, samples=4 00:21:02.923 write: IOPS=6314, BW=24.7MiB/s (25.9MB/s)(49.6MiB/2009msec); 0 zone resets 00:21:02.923 slat (nsec): min=1903, max=231062, avg=3033.40, stdev=3442.15 00:21:02.923 clat (usec): min=2425, max=18061, avg=9470.93, stdev=900.95 00:21:02.923 lat (usec): min=2437, max=18063, avg=9473.96, stdev=900.79 00:21:02.923 clat percentiles (usec): 00:21:02.923 | 1.00th=[ 7439], 5.00th=[ 8160], 10.00th=[ 8455], 20.00th=[ 8848], 00:21:02.923 | 30.00th=[ 8979], 40.00th=[ 9241], 50.00th=[ 9503], 60.00th=[ 9634], 00:21:02.923 | 70.00th=[ 9896], 80.00th=[10159], 90.00th=[10552], 95.00th=[10814], 00:21:02.923 | 99.00th=[11469], 99.50th=[11731], 99.90th=[16057], 99.95th=[17171], 00:21:02.923 | 99.99th=[17957] 00:21:02.923 bw ( KiB/s): min=24984, max=25472, per=99.93%, avg=25238.00, stdev=214.40, samples=4 00:21:02.923 iops : min= 6246, max= 6368, avg=6309.50, stdev=53.60, samples=4 00:21:02.923 lat (msec) : 4=0.04%, 10=49.89%, 20=50.07% 00:21:02.923 cpu : usr=69.17%, sys=23.61%, ctx=6, majf=0, minf=5 00:21:02.923 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:21:02.923 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:02.923 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:21:02.923 issued rwts: total=12694,12685,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:02.923 latency : target=0, window=0, percentile=100.00%, depth=128 00:21:02.923 00:21:02.923 Run status group 0 (all jobs): 00:21:02.923 READ: bw=24.7MiB/s (25.9MB/s), 24.7MiB/s-24.7MiB/s (25.9MB/s-25.9MB/s), io=49.6MiB (52.0MB), run=2009-2009msec 00:21:02.923 WRITE: bw=24.7MiB/s (25.9MB/s), 24.7MiB/s-24.7MiB/s (25.9MB/s-25.9MB/s), io=49.6MiB (52.0MB), run=2009-2009msec 00:21:02.923 00:55:15 -- host/fio.sh@61 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:21:02.923 00:55:15 -- host/fio.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore --clear-method none lvs_0/lbd_0 lvs_n_0 00:21:03.182 00:55:15 -- host/fio.sh@64 -- # ls_nested_guid=f04fa0d8-1a5b-4170-b081-f662300119fe 00:21:03.182 00:55:15 -- host/fio.sh@65 -- # get_lvs_free_mb f04fa0d8-1a5b-4170-b081-f662300119fe 00:21:03.182 00:55:15 -- common/autotest_common.sh@1353 -- # local lvs_uuid=f04fa0d8-1a5b-4170-b081-f662300119fe 00:21:03.182 00:55:15 -- common/autotest_common.sh@1354 -- # local lvs_info 00:21:03.182 00:55:15 -- common/autotest_common.sh@1355 -- # local fc 00:21:03.182 00:55:15 -- common/autotest_common.sh@1356 -- # local cs 00:21:03.182 00:55:15 -- common/autotest_common.sh@1357 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:21:03.439 00:55:15 -- common/autotest_common.sh@1357 -- # lvs_info='[ 00:21:03.439 { 00:21:03.439 "base_bdev": "Nvme0n1", 00:21:03.439 "block_size": 4096, 00:21:03.439 "cluster_size": 1073741824, 00:21:03.439 "free_clusters": 0, 00:21:03.439 "name": "lvs_0", 00:21:03.439 "total_data_clusters": 4, 00:21:03.439 "uuid": "f812bcdd-8d30-4daa-95bb-d3c2167aa09e" 00:21:03.439 }, 00:21:03.439 { 00:21:03.439 "base_bdev": "54ae907f-1780-4605-85bd-39f441d4aa1a", 00:21:03.439 "block_size": 4096, 00:21:03.439 "cluster_size": 4194304, 00:21:03.439 "free_clusters": 1022, 00:21:03.439 "name": "lvs_n_0", 00:21:03.439 "total_data_clusters": 1022, 00:21:03.439 "uuid": "f04fa0d8-1a5b-4170-b081-f662300119fe" 00:21:03.439 } 00:21:03.439 ]' 00:21:03.439 00:55:15 -- common/autotest_common.sh@1358 -- # jq '.[] | select(.uuid=="f04fa0d8-1a5b-4170-b081-f662300119fe") .free_clusters' 00:21:03.698 00:55:15 -- common/autotest_common.sh@1358 -- # fc=1022 00:21:03.698 00:55:15 -- common/autotest_common.sh@1359 -- # jq '.[] | select(.uuid=="f04fa0d8-1a5b-4170-b081-f662300119fe") .cluster_size' 00:21:03.698 4088 00:21:03.698 00:55:16 -- common/autotest_common.sh@1359 -- # cs=4194304 00:21:03.698 00:55:16 -- common/autotest_common.sh@1362 -- # free_mb=4088 00:21:03.698 00:55:16 -- common/autotest_common.sh@1363 -- # echo 4088 00:21:03.698 00:55:16 -- host/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -l lvs_n_0 lbd_nest_0 4088 00:21:03.956 63b8f26e-a3fb-4d38-80ff-d36e2f13c178 00:21:03.956 00:55:16 -- host/fio.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000001 00:21:04.214 00:55:16 -- host/fio.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 lvs_n_0/lbd_nest_0 00:21:04.474 00:55:16 -- host/fio.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:21:04.474 00:55:16 -- host/fio.sh@70 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:21:04.474 00:55:16 -- common/autotest_common.sh@1349 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:21:04.474 00:55:16 -- common/autotest_common.sh@1326 -- # local fio_dir=/usr/src/fio 00:21:04.474 00:55:16 -- common/autotest_common.sh@1328 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:21:04.474 
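get_lvs_free_mb, traced twice in this run, simply reads bdev_lvol_get_lvstores, picks the store by UUID with jq, and multiplies free_clusters by cluster_size: 4 x 1 GiB = 4096 MiB for lvs_0 and 1022 x 4 MiB = 4088 MiB for the nested lvs_n_0, which is why lbd_0 and lbd_nest_0 are created with exactly those sizes. A rough standalone equivalent of the helper (rpc.py and jq assumed on PATH; the script and variable names are illustrative):

#!/usr/bin/env bash
# get_lvs_free_mb sketch: free_clusters * cluster_size for one lvstore, in MiB.
set -euo pipefail
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
lvs_uuid=$1                      # e.g. f04fa0d8-1a5b-4170-b081-f662300119fe

lvs_info=$("$rpc" bdev_lvol_get_lvstores)
fc=$(jq --arg u "$lvs_uuid" '.[] | select(.uuid==$u) .free_clusters' <<<"$lvs_info")
cs=$(jq --arg u "$lvs_uuid" '.[] | select(.uuid==$u) .cluster_size'  <<<"$lvs_info")
echo $(( fc * cs / 1024 / 1024 ))   # 1022 * 4194304 bytes -> 4088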
00:55:16 -- common/autotest_common.sh@1328 -- # local sanitizers 00:21:04.474 00:55:16 -- common/autotest_common.sh@1329 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:21:04.474 00:55:16 -- common/autotest_common.sh@1330 -- # shift 00:21:04.474 00:55:16 -- common/autotest_common.sh@1332 -- # local asan_lib= 00:21:04.474 00:55:16 -- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:21:04.474 00:55:16 -- common/autotest_common.sh@1334 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:21:04.474 00:55:16 -- common/autotest_common.sh@1334 -- # grep libasan 00:21:04.474 00:55:16 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:21:04.734 00:55:16 -- common/autotest_common.sh@1334 -- # asan_lib= 00:21:04.734 00:55:16 -- common/autotest_common.sh@1335 -- # [[ -n '' ]] 00:21:04.734 00:55:16 -- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:21:04.734 00:55:16 -- common/autotest_common.sh@1334 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:21:04.734 00:55:16 -- common/autotest_common.sh@1334 -- # grep libclang_rt.asan 00:21:04.734 00:55:16 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:21:04.734 00:55:17 -- common/autotest_common.sh@1334 -- # asan_lib= 00:21:04.734 00:55:17 -- common/autotest_common.sh@1335 -- # [[ -n '' ]] 00:21:04.734 00:55:17 -- common/autotest_common.sh@1341 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:21:04.734 00:55:17 -- common/autotest_common.sh@1341 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:21:04.734 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:21:04.734 fio-3.35 00:21:04.734 Starting 1 thread 00:21:07.269 00:21:07.269 test: (groupid=0, jobs=1): err= 0: pid=95266: Tue Dec 3 00:55:19 2024 00:21:07.269 read: IOPS=5829, BW=22.8MiB/s (23.9MB/s)(45.8MiB/2010msec) 00:21:07.269 slat (nsec): min=1743, max=346126, avg=2916.43, stdev=4690.57 00:21:07.269 clat (usec): min=4779, max=19444, avg=11711.01, stdev=1121.23 00:21:07.269 lat (usec): min=4788, max=19447, avg=11713.93, stdev=1121.09 00:21:07.269 clat percentiles (usec): 00:21:07.269 | 1.00th=[ 9372], 5.00th=[10028], 10.00th=[10421], 20.00th=[10814], 00:21:07.269 | 30.00th=[11076], 40.00th=[11338], 50.00th=[11731], 60.00th=[11994], 00:21:07.269 | 70.00th=[12256], 80.00th=[12649], 90.00th=[13042], 95.00th=[13566], 00:21:07.269 | 99.00th=[14353], 99.50th=[14877], 99.90th=[17957], 99.95th=[19268], 00:21:07.269 | 99.99th=[19530] 00:21:07.269 bw ( KiB/s): min=22312, max=23784, per=99.93%, avg=23302.00, stdev=672.90, samples=4 00:21:07.269 iops : min= 5578, max= 5946, avg=5825.50, stdev=168.23, samples=4 00:21:07.269 write: IOPS=5817, BW=22.7MiB/s (23.8MB/s)(45.7MiB/2010msec); 0 zone resets 00:21:07.269 slat (nsec): min=1849, max=285892, avg=3031.89, stdev=3783.93 00:21:07.269 clat (usec): min=2525, max=19178, avg=10201.35, stdev=960.53 00:21:07.269 lat (usec): min=2537, max=19181, avg=10204.38, stdev=960.38 00:21:07.269 clat percentiles (usec): 00:21:07.269 | 1.00th=[ 8029], 5.00th=[ 8717], 10.00th=[ 9110], 20.00th=[ 9503], 00:21:07.269 | 30.00th=[ 9765], 40.00th=[10028], 50.00th=[10159], 60.00th=[10421], 00:21:07.269 | 70.00th=[10683], 80.00th=[10945], 90.00th=[11338], 95.00th=[11600], 00:21:07.269 | 99.00th=[12387], 99.50th=[12649], 99.90th=[16581], 99.95th=[17695], 00:21:07.269 | 99.99th=[17957] 
00:21:07.269 bw ( KiB/s): min=23120, max=23408, per=99.98%, avg=23268.00, stdev=134.58, samples=4 00:21:07.269 iops : min= 5780, max= 5852, avg=5817.00, stdev=33.65, samples=4 00:21:07.269 lat (msec) : 4=0.04%, 10=22.64%, 20=77.32% 00:21:07.269 cpu : usr=72.57%, sys=20.66%, ctx=4, majf=0, minf=5 00:21:07.269 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.7% 00:21:07.269 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:07.269 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:21:07.269 issued rwts: total=11718,11694,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:07.269 latency : target=0, window=0, percentile=100.00%, depth=128 00:21:07.269 00:21:07.269 Run status group 0 (all jobs): 00:21:07.269 READ: bw=22.8MiB/s (23.9MB/s), 22.8MiB/s-22.8MiB/s (23.9MB/s-23.9MB/s), io=45.8MiB (48.0MB), run=2010-2010msec 00:21:07.269 WRITE: bw=22.7MiB/s (23.8MB/s), 22.7MiB/s-22.7MiB/s (23.8MB/s-23.8MB/s), io=45.7MiB (47.9MB), run=2010-2010msec 00:21:07.269 00:55:19 -- host/fio.sh@72 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:21:07.269 00:55:19 -- host/fio.sh@74 -- # sync 00:21:07.269 00:55:19 -- host/fio.sh@76 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 120 bdev_lvol_delete lvs_n_0/lbd_nest_0 00:21:07.528 00:55:20 -- host/fio.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_n_0 00:21:07.787 00:55:20 -- host/fio.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete lvs_0/lbd_0 00:21:08.046 00:55:20 -- host/fio.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_0 00:21:08.304 00:55:20 -- host/fio.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_detach_controller Nvme0 00:21:09.242 00:55:21 -- host/fio.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:21:09.242 00:55:21 -- host/fio.sh@85 -- # rm -f ./local-test-0-verify.state 00:21:09.242 00:55:21 -- host/fio.sh@86 -- # nvmftestfini 00:21:09.242 00:55:21 -- nvmf/common.sh@476 -- # nvmfcleanup 00:21:09.242 00:55:21 -- nvmf/common.sh@116 -- # sync 00:21:09.242 00:55:21 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:21:09.242 00:55:21 -- nvmf/common.sh@119 -- # set +e 00:21:09.242 00:55:21 -- nvmf/common.sh@120 -- # for i in {1..20} 00:21:09.242 00:55:21 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:21:09.242 rmmod nvme_tcp 00:21:09.242 rmmod nvme_fabrics 00:21:09.242 rmmod nvme_keyring 00:21:09.242 00:55:21 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:21:09.242 00:55:21 -- nvmf/common.sh@123 -- # set -e 00:21:09.242 00:55:21 -- nvmf/common.sh@124 -- # return 0 00:21:09.242 00:55:21 -- nvmf/common.sh@477 -- # '[' -n 94819 ']' 00:21:09.242 00:55:21 -- nvmf/common.sh@478 -- # killprocess 94819 00:21:09.242 00:55:21 -- common/autotest_common.sh@936 -- # '[' -z 94819 ']' 00:21:09.242 00:55:21 -- common/autotest_common.sh@940 -- # kill -0 94819 00:21:09.242 00:55:21 -- common/autotest_common.sh@941 -- # uname 00:21:09.242 00:55:21 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:21:09.242 00:55:21 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 94819 00:21:09.242 killing process with pid 94819 00:21:09.242 00:55:21 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:21:09.242 00:55:21 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:21:09.242 00:55:21 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 94819' 00:21:09.242 00:55:21 -- 
common/autotest_common.sh@955 -- # kill 94819 00:21:09.242 00:55:21 -- common/autotest_common.sh@960 -- # wait 94819 00:21:09.810 00:55:22 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:21:09.810 00:55:22 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:21:09.810 00:55:22 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:21:09.810 00:55:22 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:21:09.810 00:55:22 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:21:09.810 00:55:22 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:09.810 00:55:22 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:09.810 00:55:22 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:09.810 00:55:22 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:21:09.810 ************************************ 00:21:09.810 END TEST nvmf_fio_host 00:21:09.810 ************************************ 00:21:09.810 00:21:09.810 real 0m19.797s 00:21:09.810 user 1m26.024s 00:21:09.810 sys 0m4.508s 00:21:09.810 00:55:22 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:21:09.810 00:55:22 -- common/autotest_common.sh@10 -- # set +x 00:21:09.810 00:55:22 -- nvmf/nvmf.sh@100 -- # run_test nvmf_failover /home/vagrant/spdk_repo/spdk/test/nvmf/host/failover.sh --transport=tcp 00:21:09.810 00:55:22 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:21:09.810 00:55:22 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:21:09.810 00:55:22 -- common/autotest_common.sh@10 -- # set +x 00:21:09.810 ************************************ 00:21:09.810 START TEST nvmf_failover 00:21:09.810 ************************************ 00:21:09.810 00:55:22 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/failover.sh --transport=tcp 00:21:09.810 * Looking for test storage... 00:21:09.810 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:21:09.810 00:55:22 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:21:09.810 00:55:22 -- common/autotest_common.sh@1690 -- # lcov --version 00:21:09.810 00:55:22 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:21:09.810 00:55:22 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:21:09.810 00:55:22 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:21:09.810 00:55:22 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:21:09.810 00:55:22 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:21:09.810 00:55:22 -- scripts/common.sh@335 -- # IFS=.-: 00:21:09.810 00:55:22 -- scripts/common.sh@335 -- # read -ra ver1 00:21:09.810 00:55:22 -- scripts/common.sh@336 -- # IFS=.-: 00:21:09.810 00:55:22 -- scripts/common.sh@336 -- # read -ra ver2 00:21:09.810 00:55:22 -- scripts/common.sh@337 -- # local 'op=<' 00:21:09.810 00:55:22 -- scripts/common.sh@339 -- # ver1_l=2 00:21:09.810 00:55:22 -- scripts/common.sh@340 -- # ver2_l=1 00:21:09.810 00:55:22 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:21:09.810 00:55:22 -- scripts/common.sh@343 -- # case "$op" in 00:21:09.810 00:55:22 -- scripts/common.sh@344 -- # : 1 00:21:09.810 00:55:22 -- scripts/common.sh@363 -- # (( v = 0 )) 00:21:09.810 00:55:22 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:21:09.810 00:55:22 -- scripts/common.sh@364 -- # decimal 1 00:21:09.810 00:55:22 -- scripts/common.sh@352 -- # local d=1 00:21:09.810 00:55:22 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:21:09.810 00:55:22 -- scripts/common.sh@354 -- # echo 1 00:21:09.810 00:55:22 -- scripts/common.sh@364 -- # ver1[v]=1 00:21:09.810 00:55:22 -- scripts/common.sh@365 -- # decimal 2 00:21:09.810 00:55:22 -- scripts/common.sh@352 -- # local d=2 00:21:09.810 00:55:22 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:21:09.810 00:55:22 -- scripts/common.sh@354 -- # echo 2 00:21:09.810 00:55:22 -- scripts/common.sh@365 -- # ver2[v]=2 00:21:09.810 00:55:22 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:21:09.810 00:55:22 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:21:09.810 00:55:22 -- scripts/common.sh@367 -- # return 0 00:21:09.810 00:55:22 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:21:09.810 00:55:22 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:21:09.810 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:09.810 --rc genhtml_branch_coverage=1 00:21:09.810 --rc genhtml_function_coverage=1 00:21:09.810 --rc genhtml_legend=1 00:21:09.810 --rc geninfo_all_blocks=1 00:21:09.810 --rc geninfo_unexecuted_blocks=1 00:21:09.810 00:21:09.810 ' 00:21:09.810 00:55:22 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:21:09.810 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:09.810 --rc genhtml_branch_coverage=1 00:21:09.810 --rc genhtml_function_coverage=1 00:21:09.810 --rc genhtml_legend=1 00:21:09.810 --rc geninfo_all_blocks=1 00:21:09.810 --rc geninfo_unexecuted_blocks=1 00:21:09.810 00:21:09.810 ' 00:21:09.810 00:55:22 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:21:09.810 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:09.810 --rc genhtml_branch_coverage=1 00:21:09.810 --rc genhtml_function_coverage=1 00:21:09.810 --rc genhtml_legend=1 00:21:09.810 --rc geninfo_all_blocks=1 00:21:09.811 --rc geninfo_unexecuted_blocks=1 00:21:09.811 00:21:09.811 ' 00:21:09.811 00:55:22 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:21:09.811 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:09.811 --rc genhtml_branch_coverage=1 00:21:09.811 --rc genhtml_function_coverage=1 00:21:09.811 --rc genhtml_legend=1 00:21:09.811 --rc geninfo_all_blocks=1 00:21:09.811 --rc geninfo_unexecuted_blocks=1 00:21:09.811 00:21:09.811 ' 00:21:09.811 00:55:22 -- host/failover.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:21:09.811 00:55:22 -- nvmf/common.sh@7 -- # uname -s 00:21:09.811 00:55:22 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:09.811 00:55:22 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:09.811 00:55:22 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:09.811 00:55:22 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:09.811 00:55:22 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:09.811 00:55:22 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:09.811 00:55:22 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:09.811 00:55:22 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:09.811 00:55:22 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:09.811 00:55:22 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:09.811 00:55:22 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:15939434-fa82-47b6-ae7c-b62ba203cee8 00:21:09.811 
00:55:22 -- nvmf/common.sh@18 -- # NVME_HOSTID=15939434-fa82-47b6-ae7c-b62ba203cee8 00:21:09.811 00:55:22 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:09.811 00:55:22 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:09.811 00:55:22 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:21:09.811 00:55:22 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:21:09.811 00:55:22 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:09.811 00:55:22 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:09.811 00:55:22 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:09.811 00:55:22 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:09.811 00:55:22 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:09.811 00:55:22 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:09.811 00:55:22 -- paths/export.sh@5 -- # export PATH 00:21:09.811 00:55:22 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:09.811 00:55:22 -- nvmf/common.sh@46 -- # : 0 00:21:09.811 00:55:22 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:21:09.811 00:55:22 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:21:09.811 00:55:22 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:21:09.811 00:55:22 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:09.811 00:55:22 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:09.811 00:55:22 -- nvmf/common.sh@32 -- # '[' -n '' ']' 
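failover.sh opens the same way fio.sh did: nvmf/common.sh is sourced, `nvme gen-hostnqn` mints a fresh host NQN, and the matching host ID (the UUID part of that NQN, 15939434-... here) is packed into the NVME_HOST array for later `nvme connect` calls. A small sketch of that pattern; the suffix-stripping and the connect line are illustrative assumptions, not lifted from this part of the log (in this test the initiator side is bdevperf rather than the kernel initiator):

# Mint a host identity the way nvmf/common.sh does.
NVME_HOSTNQN=$(nvme gen-hostnqn)          # nqn.2014-08.org.nvmexpress:uuid:<uuid>
NVME_HOSTID=${NVME_HOSTNQN##*:}           # assumed derivation: keep the trailing UUID
NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")

# Illustrative use with the kernel initiator against the test target:
nvme connect -t tcp -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:cnode1 "${NVME_HOST[@]}"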
00:21:09.811 00:55:22 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:21:09.811 00:55:22 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:21:09.811 00:55:22 -- host/failover.sh@11 -- # MALLOC_BDEV_SIZE=64 00:21:09.811 00:55:22 -- host/failover.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:21:09.811 00:55:22 -- host/failover.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:21:09.811 00:55:22 -- host/failover.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:21:09.811 00:55:22 -- host/failover.sh@18 -- # nvmftestinit 00:21:09.811 00:55:22 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:21:09.811 00:55:22 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:09.811 00:55:22 -- nvmf/common.sh@436 -- # prepare_net_devs 00:21:09.811 00:55:22 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:21:09.811 00:55:22 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:21:09.811 00:55:22 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:09.811 00:55:22 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:09.811 00:55:22 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:09.811 00:55:22 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:21:09.811 00:55:22 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:21:09.811 00:55:22 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:21:09.811 00:55:22 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:21:09.811 00:55:22 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:21:09.811 00:55:22 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:21:09.811 00:55:22 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:09.811 00:55:22 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:09.811 00:55:22 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:21:09.811 00:55:22 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:21:09.811 00:55:22 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:21:09.811 00:55:22 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:21:09.811 00:55:22 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:21:09.811 00:55:22 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:09.811 00:55:22 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:21:09.811 00:55:22 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:21:09.811 00:55:22 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:21:09.811 00:55:22 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:21:09.811 00:55:22 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:21:09.811 00:55:22 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:21:09.811 Cannot find device "nvmf_tgt_br" 00:21:09.811 00:55:22 -- nvmf/common.sh@154 -- # true 00:21:09.811 00:55:22 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:21:10.070 Cannot find device "nvmf_tgt_br2" 00:21:10.070 00:55:22 -- nvmf/common.sh@155 -- # true 00:21:10.070 00:55:22 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:21:10.070 00:55:22 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:21:10.070 Cannot find device "nvmf_tgt_br" 00:21:10.070 00:55:22 -- nvmf/common.sh@157 -- # true 00:21:10.070 00:55:22 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:21:10.070 Cannot find device "nvmf_tgt_br2" 00:21:10.070 00:55:22 -- nvmf/common.sh@158 -- # true 00:21:10.070 00:55:22 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:21:10.070 00:55:22 -- nvmf/common.sh@160 
-- # ip link delete nvmf_init_if 00:21:10.070 00:55:22 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:21:10.070 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:21:10.070 00:55:22 -- nvmf/common.sh@161 -- # true 00:21:10.070 00:55:22 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:21:10.070 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:21:10.070 00:55:22 -- nvmf/common.sh@162 -- # true 00:21:10.070 00:55:22 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:21:10.070 00:55:22 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:21:10.070 00:55:22 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:21:10.070 00:55:22 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:21:10.070 00:55:22 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:21:10.070 00:55:22 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:21:10.070 00:55:22 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:21:10.070 00:55:22 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:21:10.070 00:55:22 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:21:10.070 00:55:22 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:21:10.070 00:55:22 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:21:10.070 00:55:22 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:21:10.070 00:55:22 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:21:10.070 00:55:22 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:21:10.070 00:55:22 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:21:10.070 00:55:22 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:21:10.070 00:55:22 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:21:10.070 00:55:22 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:21:10.070 00:55:22 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:21:10.070 00:55:22 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:21:10.070 00:55:22 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:21:10.070 00:55:22 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:21:10.070 00:55:22 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:21:10.070 00:55:22 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:21:10.070 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:10.070 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.097 ms 00:21:10.070 00:21:10.070 --- 10.0.0.2 ping statistics --- 00:21:10.070 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:10.070 rtt min/avg/max/mdev = 0.097/0.097/0.097/0.000 ms 00:21:10.070 00:55:22 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:21:10.070 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:21:10.070 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.040 ms 00:21:10.070 00:21:10.070 --- 10.0.0.3 ping statistics --- 00:21:10.070 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:10.070 rtt min/avg/max/mdev = 0.040/0.040/0.040/0.000 ms 00:21:10.070 00:55:22 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:21:10.070 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:21:10.070 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.041 ms 00:21:10.070 00:21:10.070 --- 10.0.0.1 ping statistics --- 00:21:10.070 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:10.070 rtt min/avg/max/mdev = 0.041/0.041/0.041/0.000 ms 00:21:10.070 00:55:22 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:10.071 00:55:22 -- nvmf/common.sh@421 -- # return 0 00:21:10.071 00:55:22 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:21:10.071 00:55:22 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:10.071 00:55:22 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:21:10.071 00:55:22 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:21:10.071 00:55:22 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:10.071 00:55:22 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:21:10.071 00:55:22 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:21:10.330 00:55:22 -- host/failover.sh@20 -- # nvmfappstart -m 0xE 00:21:10.330 00:55:22 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:21:10.330 00:55:22 -- common/autotest_common.sh@722 -- # xtrace_disable 00:21:10.330 00:55:22 -- common/autotest_common.sh@10 -- # set +x 00:21:10.330 00:55:22 -- nvmf/common.sh@469 -- # nvmfpid=95553 00:21:10.330 00:55:22 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:21:10.330 00:55:22 -- nvmf/common.sh@470 -- # waitforlisten 95553 00:21:10.330 00:55:22 -- common/autotest_common.sh@829 -- # '[' -z 95553 ']' 00:21:10.330 00:55:22 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:10.330 00:55:22 -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:10.330 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:10.330 00:55:22 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:10.330 00:55:22 -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:10.330 00:55:22 -- common/autotest_common.sh@10 -- # set +x 00:21:10.330 [2024-12-03 00:55:22.663125] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:21:10.330 [2024-12-03 00:55:22.663385] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:10.330 [2024-12-03 00:55:22.806746] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:21:10.588 [2024-12-03 00:55:22.900588] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:21:10.588 [2024-12-03 00:55:22.900748] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:10.588 [2024-12-03 00:55:22.900764] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:21:10.588 [2024-12-03 00:55:22.900776] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:21:10.588 [2024-12-03 00:55:22.900927] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2
00:21:10.588 [2024-12-03 00:55:22.901921] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3
00:21:10.589 [2024-12-03 00:55:22.901972] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1
00:21:11.521 00:55:23 -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:21:11.521 00:55:23 -- common/autotest_common.sh@862 -- # return 0
00:21:11.521 00:55:23 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt
00:21:11.521 00:55:23 -- common/autotest_common.sh@728 -- # xtrace_disable
00:21:11.521 00:55:23 -- common/autotest_common.sh@10 -- # set +x
00:21:11.521 00:55:23 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:21:11.521 00:55:23 -- host/failover.sh@22 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
00:21:11.521 [2024-12-03 00:55:23.978561] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:21:11.521 00:55:23 -- host/failover.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
00:21:12.088 Malloc0
00:21:12.088 00:55:24 -- host/failover.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
00:21:12.088 00:55:24 -- host/failover.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
00:21:12.346 00:55:24 -- host/failover.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:21:12.604 [2024-12-03 00:55:24.979222] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:21:12.604 00:55:24 -- host/failover.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
00:21:12.866 [2024-12-03 00:55:25.279505] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 ***
00:21:12.866 00:55:25 -- host/failover.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422
00:21:13.130 [2024-12-03 00:55:25.539858] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 ***
00:21:13.130 00:55:25 -- host/failover.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f
00:21:13.130 00:55:25 -- host/failover.sh@31 -- # bdevperf_pid=95664
00:21:13.130 00:55:25 -- host/failover.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; cat $testdir/try.txt; rm -f $testdir/try.txt; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT
00:21:13.130 00:55:25 -- host/failover.sh@34 -- # waitforlisten 95664 /var/tmp/bdevperf.sock
00:21:13.130 00:55:25 -- common/autotest_common.sh@829 -- # '[' -z 95664 ']'
00:21:13.130 00:55:25 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock
00:21:13.130 00:55:25 -- common/autotest_common.sh@834 -- # local max_retries=100
00:21:13.130 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...
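In short, the target-side setup traced above amounts to the following sequence (a condensed restatement of commands already logged, not an additional run; script paths are shortened to rpc.py/nvmf_tgt, and the link-up and iptables steps are omitted):

  # one veth pair stays on the host (initiator side); the device ends of the other two pairs
  # are moved into the target's netns, and all bridge-side ends are enslaved to nvmf_br so
  # 10.0.0.1 (host) can reach 10.0.0.2 and 10.0.0.3 (target namespace)
  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br
  ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
  ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
  ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
  ip link add nvmf_br type bridge
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br master nvmf_br
  ip link set nvmf_tgt_br2 master nvmf_br
  # the target runs inside the namespace; one malloc bdev, one subsystem, three TCP listeners
  ip netns exec nvmf_tgt_ns_spdk nvmf_tgt -i 0 -e 0xFFFF -m 0xE &
  rpc.py nvmf_create_transport -t tcp -o -u 8192
  rpc.py bdev_malloc_create 64 512 -b Malloc0
  rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
  rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422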
00:21:13.130 00:55:25 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...'
00:21:13.130 00:55:25 -- common/autotest_common.sh@838 -- # xtrace_disable
00:21:13.130 00:55:25 -- common/autotest_common.sh@10 -- # set +x
00:21:14.063 00:55:26 -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:21:14.063 00:55:26 -- common/autotest_common.sh@862 -- # return 0
00:21:14.063 00:55:26 -- host/failover.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
00:21:14.321 NVMe0n1
00:21:14.321 00:55:26 -- host/failover.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
00:21:14.888
00:21:14.888 00:55:27 -- host/failover.sh@38 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests
00:21:14.888 00:55:27 -- host/failover.sh@39 -- # run_test_pid=95712
00:21:14.888 00:55:27 -- host/failover.sh@41 -- # sleep 1
00:21:15.825 00:55:28 -- host/failover.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:21:16.085 [2024-12-03 00:55:28.422027] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cd8c90 is same with the state(5) to be set
[... the tcp.c:1576 recv-state line above repeats back-to-back, dozens of occurrences with advancing timestamps; duplicate lines omitted ...]
00:21:16.086 00:55:28 -- host/failover.sh@45 -- # sleep 3
00:21:19.372 00:55:31 -- host/failover.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
00:21:19.372
00:21:19.372 00:55:31 -- host/failover.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
00:21:19.632 [2024-12-03 00:55:31.966251] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cda380 is same with the state(5) to be set
[... the tcp.c:1576 recv-state line above repeats back-to-back, dozens of occurrences with advancing timestamps; duplicate lines omitted ...]
00:21:19.633 00:55:31 -- host/failover.sh@50 -- # sleep 3
00:21:22.920 00:55:34 -- host/failover.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:21:22.920 [2024-12-03 00:55:35.241883] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:21:22.920 00:55:35 -- host/failover.sh@55 -- # sleep 1
00:21:23.854 00:55:36 -- host/failover.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422
00:21:24.113 [2024-12-03 00:55:36.523482] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cdaa60 is same with the state(5) to be set
[... the tcp.c:1576 recv-state line above repeats back-to-back, dozens of occurrences with advancing timestamps; duplicate lines omitted ...]
00:21:24.114 00:55:36 -- host/failover.sh@59 -- # wait 95712
00:21:30.687 0
00:21:30.687 00:55:42 -- host/failover.sh@61 -- # killprocess 95664
00:21:30.687 00:55:42 -- common/autotest_common.sh@936 -- # '[' -z 95664 ']'
00:21:30.687 00:55:42 -- common/autotest_common.sh@940 -- # kill -0 95664
00:21:30.687 00:55:42 -- common/autotest_common.sh@941 -- # uname
00:21:30.687 00:55:42 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']'
00:21:30.687 00:55:42 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 95664
00:21:30.687 killing process with pid 95664
00:21:30.687 00:55:42 -- common/autotest_common.sh@942 -- # process_name=reactor_0
00:21:30.687 00:55:42 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']'
00:21:30.687 00:55:42 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 95664'
00:21:30.687 00:55:42 -- common/autotest_common.sh@955 -- # kill 95664
00:21:30.687 00:55:42 -- common/autotest_common.sh@960 -- # wait 95664
00:21:30.687 00:55:42 -- host/failover.sh@63 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt
00:21:30.687 [2024-12-03 00:55:25.596570] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
00:21:30.687 [2024-12-03 00:55:25.596688] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid95664 ]
00:21:30.687 [2024-12-03 00:55:25.730286] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:21:30.687 [2024-12-03 00:55:25.795385] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:21:30.687 Running I/O for 15 seconds...
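The host-side failover exercise that produced the try.txt dump below reduces to the rpc.py calls already traced above (paths shortened; the comments describe the intended effect as implied by host/failover.sh, not extra commands that were run):

  # attach the same subsystem twice so bdevperf's NVMe0 bdev starts with two TCP paths
  rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
  rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
  bdevperf.py -s /var/tmp/bdevperf.sock perform_tests &                                          # 15 s verify workload (run_test_pid)
  sleep 1
  rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420    # drop path 1 while I/O is running
  sleep 3
  rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1  # add path 3
  rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421    # drop path 2
  sleep 3
  rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420       # restore path 1
  sleep 1
  rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422    # drop path 3
  wait $run_test_pid

Each nvmf_subsystem_remove_listener tears down the active queue pair; that is what produces the tcp.c:1576 recv-state bursts condensed above and the ABORTED - SQ DELETION completions that fill the remainder of the dump, with the verify workload expected to continue on whichever listener path remains.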
00:21:30.687 [2024-12-03 00:55:28.423071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:7816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.687 [2024-12-03 00:55:28.423114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.687 [2024-12-03 00:55:28.423136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:7840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.687 [2024-12-03 00:55:28.423152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.687 [2024-12-03 00:55:28.423167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.687 [2024-12-03 00:55:28.423182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.687 [2024-12-03 00:55:28.423196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:7864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.687 [2024-12-03 00:55:28.423209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.687 [2024-12-03 00:55:28.423222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:7880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.687 [2024-12-03 00:55:28.423235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.687 [2024-12-03 00:55:28.423249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:7888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.687 [2024-12-03 00:55:28.423261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.687 [2024-12-03 00:55:28.423275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:7896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.687 [2024-12-03 00:55:28.423288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.687 [2024-12-03 00:55:28.423301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:7904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.687 [2024-12-03 00:55:28.423314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.687 [2024-12-03 00:55:28.423328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:7920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.687 [2024-12-03 00:55:28.423340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.687 [2024-12-03 00:55:28.423353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:7928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.687 [2024-12-03 00:55:28.423366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.687 [2024-12-03 00:55:28.423381] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:7936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.687 [2024-12-03 00:55:28.423393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.687 [2024-12-03 00:55:28.423483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:7952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:30.687 [2024-12-03 00:55:28.423502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.687 [2024-12-03 00:55:28.423516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:7960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:30.688 [2024-12-03 00:55:28.423529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.688 [2024-12-03 00:55:28.423543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:7968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.688 [2024-12-03 00:55:28.423556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.688 [2024-12-03 00:55:28.423572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:7976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:30.688 [2024-12-03 00:55:28.423585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.688 [2024-12-03 00:55:28.423600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:7280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.688 [2024-12-03 00:55:28.423613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.688 [2024-12-03 00:55:28.423627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:7288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.688 [2024-12-03 00:55:28.423645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.688 [2024-12-03 00:55:28.423659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:7296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.688 [2024-12-03 00:55:28.423672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.688 [2024-12-03 00:55:28.423685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:7304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.688 [2024-12-03 00:55:28.423698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.688 [2024-12-03 00:55:28.423712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:7336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.688 [2024-12-03 00:55:28.423725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.688 [2024-12-03 00:55:28.423739] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:105 nsid:1 lba:7344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.688 [2024-12-03 00:55:28.423751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.688 [2024-12-03 00:55:28.423766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:7352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.688 [2024-12-03 00:55:28.423780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.688 [2024-12-03 00:55:28.423798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:7368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.688 [2024-12-03 00:55:28.423811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.688 [2024-12-03 00:55:28.423825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:7984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:30.688 [2024-12-03 00:55:28.423859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.688 [2024-12-03 00:55:28.423874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:7992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.688 [2024-12-03 00:55:28.423887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.688 [2024-12-03 00:55:28.423900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:8000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:30.688 [2024-12-03 00:55:28.423912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.688 [2024-12-03 00:55:28.423925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:8008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:30.688 [2024-12-03 00:55:28.423939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.688 [2024-12-03 00:55:28.423952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:8016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:30.688 [2024-12-03 00:55:28.423964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.688 [2024-12-03 00:55:28.423977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:8024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:30.688 [2024-12-03 00:55:28.423990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.688 [2024-12-03 00:55:28.424004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:8032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:30.688 [2024-12-03 00:55:28.424016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.688 [2024-12-03 00:55:28.424030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:8040 len:8 SGL 
DATA BLOCK OFFSET 0x0 len:0x1000 00:21:30.688 [2024-12-03 00:55:28.424042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.688 [2024-12-03 00:55:28.424055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:7392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.688 [2024-12-03 00:55:28.424074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.688 [2024-12-03 00:55:28.424088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:7400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.688 [2024-12-03 00:55:28.424106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.688 [2024-12-03 00:55:28.424120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:7408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.688 [2024-12-03 00:55:28.424133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.688 [2024-12-03 00:55:28.424147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:7416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.688 [2024-12-03 00:55:28.424159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.688 [2024-12-03 00:55:28.424173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:7448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.688 [2024-12-03 00:55:28.424187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.688 [2024-12-03 00:55:28.424201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:7456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.688 [2024-12-03 00:55:28.424219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.688 [2024-12-03 00:55:28.424233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:7488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.688 [2024-12-03 00:55:28.424263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.688 [2024-12-03 00:55:28.424278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:7496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.688 [2024-12-03 00:55:28.424291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.688 [2024-12-03 00:55:28.424306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:8048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.688 [2024-12-03 00:55:28.424318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.688 [2024-12-03 00:55:28.424333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:8056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:30.688 
[2024-12-03 00:55:28.424346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.688 [2024-12-03 00:55:28.424360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:8064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.688 [2024-12-03 00:55:28.424373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.689 [2024-12-03 00:55:28.424388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:8072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.689 [2024-12-03 00:55:28.424402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.689 [2024-12-03 00:55:28.424417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:8080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:30.689 [2024-12-03 00:55:28.424431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.689 [2024-12-03 00:55:28.424461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:8088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.689 [2024-12-03 00:55:28.424476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.689 [2024-12-03 00:55:28.424500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:8096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.689 [2024-12-03 00:55:28.424514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.689 [2024-12-03 00:55:28.424528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:8104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.689 [2024-12-03 00:55:28.424542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.689 [2024-12-03 00:55:28.424556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:8112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:30.689 [2024-12-03 00:55:28.424568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.689 [2024-12-03 00:55:28.424583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:8120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.689 [2024-12-03 00:55:28.424602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.689 [2024-12-03 00:55:28.424624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:8128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:30.689 [2024-12-03 00:55:28.424638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.689 [2024-12-03 00:55:28.424653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:8136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:30.689 [2024-12-03 00:55:28.424666] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.689 [2024-12-03 00:55:28.424680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:8144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:30.689 [2024-12-03 00:55:28.424693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.689 [2024-12-03 00:55:28.424707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:30.689 [2024-12-03 00:55:28.424720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.689 [2024-12-03 00:55:28.424735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:7520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.689 [2024-12-03 00:55:28.424748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.689 [2024-12-03 00:55:28.424762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:7528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.689 [2024-12-03 00:55:28.424774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.689 [2024-12-03 00:55:28.424789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:7544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.689 [2024-12-03 00:55:28.424802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.689 [2024-12-03 00:55:28.424816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:7576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.689 [2024-12-03 00:55:28.424829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.689 [2024-12-03 00:55:28.424843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:7584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.689 [2024-12-03 00:55:28.424856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.689 [2024-12-03 00:55:28.424870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:7592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.689 [2024-12-03 00:55:28.424883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.689 [2024-12-03 00:55:28.424897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:7616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.689 [2024-12-03 00:55:28.424909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.689 [2024-12-03 00:55:28.424924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:7648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.689 [2024-12-03 00:55:28.424937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.689 [2024-12-03 00:55:28.424951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:7656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.689 [2024-12-03 00:55:28.424969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.689 [2024-12-03 00:55:28.424984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:7696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.689 [2024-12-03 00:55:28.425002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.689 [2024-12-03 00:55:28.425016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:7720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.689 [2024-12-03 00:55:28.425029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.689 [2024-12-03 00:55:28.425043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:7736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.689 [2024-12-03 00:55:28.425061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.689 [2024-12-03 00:55:28.425075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:7744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.689 [2024-12-03 00:55:28.425088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.689 [2024-12-03 00:55:28.425102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:7760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.689 [2024-12-03 00:55:28.425115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.689 [2024-12-03 00:55:28.425129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:7768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.689 [2024-12-03 00:55:28.425142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.689 [2024-12-03 00:55:28.425156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:7784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.689 [2024-12-03 00:55:28.425169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.689 [2024-12-03 00:55:28.425182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:8160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.689 [2024-12-03 00:55:28.425195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.689 [2024-12-03 00:55:28.425209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:8168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.689 [2024-12-03 00:55:28.425221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 
m:0 dnr:0 00:21:30.689 [2024-12-03 00:55:28.425236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:8176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.689 [2024-12-03 00:55:28.425249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.690 [2024-12-03 00:55:28.425263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:8184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:30.690 [2024-12-03 00:55:28.425275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.690 [2024-12-03 00:55:28.425289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:8192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:30.690 [2024-12-03 00:55:28.425302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.690 [2024-12-03 00:55:28.425316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:8200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.690 [2024-12-03 00:55:28.425334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.690 [2024-12-03 00:55:28.425348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:8208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:30.690 [2024-12-03 00:55:28.425362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.690 [2024-12-03 00:55:28.425376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:8216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.690 [2024-12-03 00:55:28.425389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.690 [2024-12-03 00:55:28.425402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:8224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:30.690 [2024-12-03 00:55:28.425424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.690 [2024-12-03 00:55:28.425441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:8232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:30.690 [2024-12-03 00:55:28.425460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.690 [2024-12-03 00:55:28.425475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:8240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.690 [2024-12-03 00:55:28.425488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.690 [2024-12-03 00:55:28.425502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:8248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.690 [2024-12-03 00:55:28.425521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.690 [2024-12-03 00:55:28.425535] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:8256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:30.690 [2024-12-03 00:55:28.425548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.690 [2024-12-03 00:55:28.425562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:8264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:30.690 [2024-12-03 00:55:28.425575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.690 [2024-12-03 00:55:28.425589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:8272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.690 [2024-12-03 00:55:28.425602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.690 [2024-12-03 00:55:28.425616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:8280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.690 [2024-12-03 00:55:28.425629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.690 [2024-12-03 00:55:28.425643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:8288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:30.690 [2024-12-03 00:55:28.425656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.690 [2024-12-03 00:55:28.425670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:8296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:30.690 [2024-12-03 00:55:28.425683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.690 [2024-12-03 00:55:28.425704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:8304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.690 [2024-12-03 00:55:28.425717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.690 [2024-12-03 00:55:28.425731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:8312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.690 [2024-12-03 00:55:28.425743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.690 [2024-12-03 00:55:28.425757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:8320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.690 [2024-12-03 00:55:28.425770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.690 [2024-12-03 00:55:28.425783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:8328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.690 [2024-12-03 00:55:28.425796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.690 [2024-12-03 00:55:28.425811] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:8336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:30.690 [2024-12-03 00:55:28.425823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.690 [2024-12-03 00:55:28.425837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:8344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.690 [2024-12-03 00:55:28.425849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.690 [2024-12-03 00:55:28.425863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:8352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:30.690 [2024-12-03 00:55:28.425876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.690 [2024-12-03 00:55:28.425889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:8360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:30.690 [2024-12-03 00:55:28.425907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.690 [2024-12-03 00:55:28.425922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:8368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.690 [2024-12-03 00:55:28.425935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.690 [2024-12-03 00:55:28.425949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:8376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.690 [2024-12-03 00:55:28.425967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.690 [2024-12-03 00:55:28.425982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:8384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:30.690 [2024-12-03 00:55:28.425994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.690 [2024-12-03 00:55:28.426009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:30.690 [2024-12-03 00:55:28.426021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.690 [2024-12-03 00:55:28.426035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:8400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.690 [2024-12-03 00:55:28.426054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.690 [2024-12-03 00:55:28.426069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:8408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:30.690 [2024-12-03 00:55:28.426081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.690 [2024-12-03 00:55:28.426095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:8416 len:8 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.690 [2024-12-03 00:55:28.426108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.690 [2024-12-03 00:55:28.426123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:8424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:30.690 [2024-12-03 00:55:28.426152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.690 [2024-12-03 00:55:28.426167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:8432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.690 [2024-12-03 00:55:28.426180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.691 [2024-12-03 00:55:28.426194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:8440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.691 [2024-12-03 00:55:28.426207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.691 [2024-12-03 00:55:28.426221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:8448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:30.691 [2024-12-03 00:55:28.426234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.691 [2024-12-03 00:55:28.426248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:8456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:30.691 [2024-12-03 00:55:28.426261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.691 [2024-12-03 00:55:28.426274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:8464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:30.691 [2024-12-03 00:55:28.426288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.691 [2024-12-03 00:55:28.426301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:8472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.691 [2024-12-03 00:55:28.426314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.691 [2024-12-03 00:55:28.426328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:8480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.691 [2024-12-03 00:55:28.426341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.691 [2024-12-03 00:55:28.426355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:8488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.691 [2024-12-03 00:55:28.426374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.691 [2024-12-03 00:55:28.426388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:8496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:30.691 
[2024-12-03 00:55:28.426401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.691 [2024-12-03 00:55:28.426425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:8504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:30.691 [2024-12-03 00:55:28.426454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.691 [2024-12-03 00:55:28.426469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:8512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:30.691 [2024-12-03 00:55:28.426483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.691 [2024-12-03 00:55:28.426497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:8520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:30.691 [2024-12-03 00:55:28.426510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.691 [2024-12-03 00:55:28.426524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:8528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.691 [2024-12-03 00:55:28.426537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.691 [2024-12-03 00:55:28.426551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:8536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.691 [2024-12-03 00:55:28.426564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.691 [2024-12-03 00:55:28.426578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:8544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.691 [2024-12-03 00:55:28.426591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.691 [2024-12-03 00:55:28.426605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:8552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.691 [2024-12-03 00:55:28.426618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.691 [2024-12-03 00:55:28.426632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:8560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.691 [2024-12-03 00:55:28.426645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.691 [2024-12-03 00:55:28.426659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:7800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.691 [2024-12-03 00:55:28.426671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.691 [2024-12-03 00:55:28.426685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:7808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.691 [2024-12-03 00:55:28.426698] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.691 [2024-12-03 00:55:28.426711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:7824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.691 [2024-12-03 00:55:28.426724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.691 [2024-12-03 00:55:28.426738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:7832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.691 [2024-12-03 00:55:28.426751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.691 [2024-12-03 00:55:28.426765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:7848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.691 [2024-12-03 00:55:28.426778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.691 [2024-12-03 00:55:28.426797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:7872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.691 [2024-12-03 00:55:28.426810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.691 [2024-12-03 00:55:28.426825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:7912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.691 [2024-12-03 00:55:28.426844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.691 [2024-12-03 00:55:28.426858] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x74b130 is same with the state(5) to be set 00:21:30.691 [2024-12-03 00:55:28.426873] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:30.691 [2024-12-03 00:55:28.426884] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:30.691 [2024-12-03 00:55:28.426900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:7944 len:8 PRP1 0x0 PRP2 0x0 00:21:30.691 [2024-12-03 00:55:28.426913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.691 [2024-12-03 00:55:28.426967] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x74b130 was disconnected and freed. reset controller. 
00:21:30.691 [2024-12-03 00:55:28.427002] bdev_nvme.c:1843:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:21:30.691 [2024-12-03 00:55:28.427064] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:21:30.691 [2024-12-03 00:55:28.427085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.691 [2024-12-03 00:55:28.427099] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:21:30.691 [2024-12-03 00:55:28.427111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.691 [2024-12-03 00:55:28.427124] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:21:30.691 [2024-12-03 00:55:28.427137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.691 [2024-12-03 00:55:28.427149] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:21:30.691 [2024-12-03 00:55:28.427162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.691 [2024-12-03 00:55:28.427174] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:21:30.692 [2024-12-03 00:55:28.429251] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:30.692 [2024-12-03 00:55:28.429286] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6c6cb0 (9): Bad file descriptor 00:21:30.692 [2024-12-03 00:55:28.451197] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:21:30.692 [2024-12-03 00:55:31.966848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:42784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.692 [2024-12-03 00:55:31.966907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.692 [2024-12-03 00:55:31.966932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:42792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.692 [2024-12-03 00:55:31.966947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.692 [2024-12-03 00:55:31.966987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:42816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.692 [2024-12-03 00:55:31.967001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.692 [2024-12-03 00:55:31.967014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:42832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.692 [2024-12-03 00:55:31.967026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.692 [2024-12-03 00:55:31.967040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:42840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.692 [2024-12-03 00:55:31.967053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.692 [2024-12-03 00:55:31.967066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:42848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.692 [2024-12-03 00:55:31.967078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.692 [2024-12-03 00:55:31.967091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:42856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.692 [2024-12-03 00:55:31.967103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.692 [2024-12-03 00:55:31.967117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:42864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.692 [2024-12-03 00:55:31.967129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.692 [2024-12-03 00:55:31.967143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:42880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.692 [2024-12-03 00:55:31.967156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.692 [2024-12-03 00:55:31.967169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:42912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.692 [2024-12-03 00:55:31.967181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.692 [2024-12-03 00:55:31.967195] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:42936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.692 [2024-12-03 00:55:31.967207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.692 [2024-12-03 00:55:31.967220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:42952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.692 [2024-12-03 00:55:31.967232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.692 [2024-12-03 00:55:31.967245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:42968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.692 [2024-12-03 00:55:31.967257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.692 [2024-12-03 00:55:31.967271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:42976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.692 [2024-12-03 00:55:31.967283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.692 [2024-12-03 00:55:31.967296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:42984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.692 [2024-12-03 00:55:31.967315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.692 [2024-12-03 00:55:31.967329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:43000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.692 [2024-12-03 00:55:31.967342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.692 [2024-12-03 00:55:31.967355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:43032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:30.692 [2024-12-03 00:55:31.967370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.692 [2024-12-03 00:55:31.967384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:43040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.692 [2024-12-03 00:55:31.967397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.692 [2024-12-03 00:55:31.967428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:42416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.692 [2024-12-03 00:55:31.967445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.692 [2024-12-03 00:55:31.967464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:42496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.692 [2024-12-03 00:55:31.967477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.692 [2024-12-03 00:55:31.967491] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:42504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.692 [2024-12-03 00:55:31.967504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.692 [2024-12-03 00:55:31.967518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:42528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.692 [2024-12-03 00:55:31.967531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.692 [2024-12-03 00:55:31.967545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:42544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.692 [2024-12-03 00:55:31.967557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.692 [2024-12-03 00:55:31.967571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:42552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.692 [2024-12-03 00:55:31.967583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.692 [2024-12-03 00:55:31.967597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:42568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.692 [2024-12-03 00:55:31.967609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.692 [2024-12-03 00:55:31.967623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:42584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.692 [2024-12-03 00:55:31.967635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.692 [2024-12-03 00:55:31.967649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:43048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:30.692 [2024-12-03 00:55:31.967661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.692 [2024-12-03 00:55:31.967682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:43056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.692 [2024-12-03 00:55:31.967695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.692 [2024-12-03 00:55:31.967709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:43064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:30.692 [2024-12-03 00:55:31.967722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.692 [2024-12-03 00:55:31.967736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:43072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.692 [2024-12-03 00:55:31.967749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.693 [2024-12-03 00:55:31.967762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:67 nsid:1 lba:43080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.693 [2024-12-03 00:55:31.967774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.693 [2024-12-03 00:55:31.967798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:43088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.693 [2024-12-03 00:55:31.967811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.693 [2024-12-03 00:55:31.967824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:43096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.693 [2024-12-03 00:55:31.967851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.693 [2024-12-03 00:55:31.967865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:43104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.693 [2024-12-03 00:55:31.967878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.693 [2024-12-03 00:55:31.967891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:43112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:30.693 [2024-12-03 00:55:31.967904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.693 [2024-12-03 00:55:31.967918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:43120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.693 [2024-12-03 00:55:31.967930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.693 [2024-12-03 00:55:31.967944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:43128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.693 [2024-12-03 00:55:31.967956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.693 [2024-12-03 00:55:31.967970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:43136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:30.693 [2024-12-03 00:55:31.967982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.693 [2024-12-03 00:55:31.967995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:43144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.693 [2024-12-03 00:55:31.968008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.693 [2024-12-03 00:55:31.968021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:43152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.693 [2024-12-03 00:55:31.968034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.693 [2024-12-03 00:55:31.968053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:43160 len:8 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.693 [2024-12-03 00:55:31.968067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.693 [2024-12-03 00:55:31.968080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:43168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.693 [2024-12-03 00:55:31.968093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.693 [2024-12-03 00:55:31.968107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:43176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:30.693 [2024-12-03 00:55:31.968119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.693 [2024-12-03 00:55:31.968133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:43184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:30.693 [2024-12-03 00:55:31.968146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.693 [2024-12-03 00:55:31.968159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:43192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:30.693 [2024-12-03 00:55:31.968171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.693 [2024-12-03 00:55:31.968184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:43200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.693 [2024-12-03 00:55:31.968197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.693 [2024-12-03 00:55:31.968210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:43208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:30.693 [2024-12-03 00:55:31.968222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.693 [2024-12-03 00:55:31.968236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:43216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:30.693 [2024-12-03 00:55:31.968248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.693 [2024-12-03 00:55:31.968262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:42616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.693 [2024-12-03 00:55:31.968280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.693 [2024-12-03 00:55:31.968294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:42648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.693 [2024-12-03 00:55:31.968307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.693 [2024-12-03 00:55:31.968320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:42656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.693 
[2024-12-03 00:55:31.968332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.693 [2024-12-03 00:55:31.968346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:42672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.693 [2024-12-03 00:55:31.968359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.693 [2024-12-03 00:55:31.968372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:42720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.693 [2024-12-03 00:55:31.968390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.693 [2024-12-03 00:55:31.968404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:42728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.693 [2024-12-03 00:55:31.968427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.693 [2024-12-03 00:55:31.968442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:42744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.694 [2024-12-03 00:55:31.968459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.694 [2024-12-03 00:55:31.968472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:42760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.694 [2024-12-03 00:55:31.968484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.694 [2024-12-03 00:55:31.968498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:43224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:30.694 [2024-12-03 00:55:31.968512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.694 [2024-12-03 00:55:31.968526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:43232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.694 [2024-12-03 00:55:31.968538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.694 [2024-12-03 00:55:31.968551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:43240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:30.694 [2024-12-03 00:55:31.968564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.694 [2024-12-03 00:55:31.968577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:43248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:30.694 [2024-12-03 00:55:31.968591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.694 [2024-12-03 00:55:31.968605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:43256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.694 [2024-12-03 00:55:31.968617] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.694 [2024-12-03 00:55:31.968631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:43264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:30.694 [2024-12-03 00:55:31.968643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.694 [2024-12-03 00:55:31.968657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:43272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.694 [2024-12-03 00:55:31.968677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.694 [2024-12-03 00:55:31.968692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:43280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:30.694 [2024-12-03 00:55:31.968704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.694 [2024-12-03 00:55:31.968718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:43288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:30.694 [2024-12-03 00:55:31.968735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.694 [2024-12-03 00:55:31.968756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:43296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.694 [2024-12-03 00:55:31.968768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.694 [2024-12-03 00:55:31.968782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:43304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:30.694 [2024-12-03 00:55:31.968802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.694 [2024-12-03 00:55:31.968827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:43312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.694 [2024-12-03 00:55:31.968840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.694 [2024-12-03 00:55:31.968854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:43320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.694 [2024-12-03 00:55:31.968866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.694 [2024-12-03 00:55:31.968879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:43328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.694 [2024-12-03 00:55:31.968891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.694 [2024-12-03 00:55:31.968905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:43336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.694 [2024-12-03 00:55:31.968917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.694 [2024-12-03 00:55:31.968931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:43344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.694 [2024-12-03 00:55:31.968943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.694 [2024-12-03 00:55:31.968956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:43352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:30.694 [2024-12-03 00:55:31.968968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.694 [2024-12-03 00:55:31.968982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:43360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.694 [2024-12-03 00:55:31.968995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.694 [2024-12-03 00:55:31.969009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:43368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.694 [2024-12-03 00:55:31.969021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.694 [2024-12-03 00:55:31.969034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:43376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:30.694 [2024-12-03 00:55:31.969046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.694 [2024-12-03 00:55:31.969059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:43384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.694 [2024-12-03 00:55:31.969071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.694 [2024-12-03 00:55:31.969084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:43392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.694 [2024-12-03 00:55:31.969103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.694 [2024-12-03 00:55:31.969118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:43400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:30.694 [2024-12-03 00:55:31.969136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.694 [2024-12-03 00:55:31.969150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:43408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:30.694 [2024-12-03 00:55:31.969162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.694 [2024-12-03 00:55:31.969176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:43416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.694 [2024-12-03 00:55:31.969194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.694 [2024-12-03 00:55:31.969208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:43424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.694 [2024-12-03 00:55:31.969220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.694 [2024-12-03 00:55:31.969233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:43432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.694 [2024-12-03 00:55:31.969246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.694 [2024-12-03 00:55:31.969259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:43440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:30.694 [2024-12-03 00:55:31.969272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.694 [2024-12-03 00:55:31.969285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:42776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.694 [2024-12-03 00:55:31.969298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.694 [2024-12-03 00:55:31.969311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:42800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.695 [2024-12-03 00:55:31.969323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.695 [2024-12-03 00:55:31.969336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:42808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.695 [2024-12-03 00:55:31.969349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.695 [2024-12-03 00:55:31.969362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:42824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.695 [2024-12-03 00:55:31.969374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.695 [2024-12-03 00:55:31.969388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:42872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.695 [2024-12-03 00:55:31.969400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.695 [2024-12-03 00:55:31.969422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:42888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.695 [2024-12-03 00:55:31.969437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.695 [2024-12-03 00:55:31.969469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:42896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.695 [2024-12-03 00:55:31.969482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:21:30.695 [2024-12-03 00:55:31.969496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:42904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.695 [2024-12-03 00:55:31.969508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.695 [2024-12-03 00:55:31.969522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:43448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:30.695 [2024-12-03 00:55:31.969533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.695 [2024-12-03 00:55:31.969547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:43456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.695 [2024-12-03 00:55:31.969559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.695 [2024-12-03 00:55:31.969573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:43464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.695 [2024-12-03 00:55:31.969591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.695 [2024-12-03 00:55:31.969605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:43472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:30.695 [2024-12-03 00:55:31.969617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.695 [2024-12-03 00:55:31.969631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:43480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.695 [2024-12-03 00:55:31.969648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.695 [2024-12-03 00:55:31.969662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:43488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:30.695 [2024-12-03 00:55:31.969675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.695 [2024-12-03 00:55:31.969688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:43496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:30.695 [2024-12-03 00:55:31.969701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.695 [2024-12-03 00:55:31.969714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:43504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:30.695 [2024-12-03 00:55:31.969726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.695 [2024-12-03 00:55:31.969741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:43512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.695 [2024-12-03 00:55:31.969754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.695 [2024-12-03 00:55:31.969767] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:43520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:30.695 [2024-12-03 00:55:31.969781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.695 [2024-12-03 00:55:31.969794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:43528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:30.695 [2024-12-03 00:55:31.969807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.695 [2024-12-03 00:55:31.969826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:43536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.695 [2024-12-03 00:55:31.969839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.695 [2024-12-03 00:55:31.969857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:43544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.695 [2024-12-03 00:55:31.969869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.695 [2024-12-03 00:55:31.969882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:43552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:30.695 [2024-12-03 00:55:31.969895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.695 [2024-12-03 00:55:31.969908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:43560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.695 [2024-12-03 00:55:31.969920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.695 [2024-12-03 00:55:31.969934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:43568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.695 [2024-12-03 00:55:31.969946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.695 [2024-12-03 00:55:31.969959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:43576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:30.695 [2024-12-03 00:55:31.969971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.695 [2024-12-03 00:55:31.969985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:43584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:30.695 [2024-12-03 00:55:31.969997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.695 [2024-12-03 00:55:31.970010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:43592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:30.695 [2024-12-03 00:55:31.970028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.695 [2024-12-03 00:55:31.970042] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:43600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:30.695 [2024-12-03 00:55:31.970055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.695 [2024-12-03 00:55:31.970069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:43608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.695 [2024-12-03 00:55:31.970087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.695 [2024-12-03 00:55:31.970101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:43616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.695 [2024-12-03 00:55:31.970114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.695 [2024-12-03 00:55:31.970128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:43624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:30.695 [2024-12-03 00:55:31.970161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.695 [2024-12-03 00:55:31.970175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:43632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:30.695 [2024-12-03 00:55:31.970197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.695 [2024-12-03 00:55:31.970211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:43640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.695 [2024-12-03 00:55:31.970224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.695 [2024-12-03 00:55:31.970238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:43648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:30.695 [2024-12-03 00:55:31.970250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.696 [2024-12-03 00:55:31.970263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:43656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.696 [2024-12-03 00:55:31.970275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.696 [2024-12-03 00:55:31.970289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:43664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.696 [2024-12-03 00:55:31.970301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.696 [2024-12-03 00:55:31.970315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:42920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.696 [2024-12-03 00:55:31.970327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.696 [2024-12-03 00:55:31.970341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:42928 
len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.696 [2024-12-03 00:55:31.970353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.696 [2024-12-03 00:55:31.970367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:42944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.696 [2024-12-03 00:55:31.970379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.696 [2024-12-03 00:55:31.970392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:42960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.696 [2024-12-03 00:55:31.970404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.696 [2024-12-03 00:55:31.970430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:42992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.696 [2024-12-03 00:55:31.970443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.696 [2024-12-03 00:55:31.970458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:43008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.696 [2024-12-03 00:55:31.970470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.696 [2024-12-03 00:55:31.970484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:43016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.696 [2024-12-03 00:55:31.970503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.696 [2024-12-03 00:55:31.970517] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x725b10 is same with the state(5) to be set 00:21:30.696 [2024-12-03 00:55:31.970533] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:30.696 [2024-12-03 00:55:31.970544] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:30.696 [2024-12-03 00:55:31.970584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:43024 len:8 PRP1 0x0 PRP2 0x0 00:21:30.696 [2024-12-03 00:55:31.970601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.696 [2024-12-03 00:55:31.970664] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x725b10 was disconnected and freed. reset controller. 
00:21:30.696 [2024-12-03 00:55:31.970681] bdev_nvme.c:1843:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4421 to 10.0.0.2:4422 
00:21:30.696 [2024-12-03 00:55:31.970731] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 
00:21:30.696 [2024-12-03 00:55:31.970771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:21:30.696 [2024-12-03 00:55:31.970785] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 
00:21:30.696 [2024-12-03 00:55:31.970796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:21:30.696 [2024-12-03 00:55:31.970809] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 
00:21:30.696 [2024-12-03 00:55:31.970821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:21:30.696 [2024-12-03 00:55:31.970834] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 
00:21:30.696 [2024-12-03 00:55:31.970846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:21:30.696 [2024-12-03 00:55:31.970858] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:21:30.696 [2024-12-03 00:55:31.972800] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 
00:21:30.696 [2024-12-03 00:55:31.972837] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6c6cb0 (9): Bad file descriptor 
00:21:30.696 [2024-12-03 00:55:31.991553] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
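The block above is one complete path-failover cycle as bdev_nvme logs it: every outstanding I/O and queued admin ASYNC EVENT REQUEST on the dying queue pair is completed with ABORTED - SQ DELETION, the TCP qpair is disconnected and freed, the transport ID fails over from 10.0.0.2:4421 to 10.0.0.2:4422, and the controller reset completes. A minimal post-processing sketch (hypothetical, not part of the SPDK autotest scripts; the capture path is an assumption) for pulling those events back out of a saved copy of this console output:

  # Summarize the failover cycles recorded in a captured bdevperf console log.
  LOG=${1:-bdevperf_console.log}          # assumed capture path
  # Each "Start failover from A to B" notice marks one path switch.
  grep -o 'Start failover from [0-9.]*:[0-9]* to [0-9.]*:[0-9]*' "$LOG" | sort | uniq -c
  # Each completed cycle ends with this notice (the same string failover.sh greps for later).
  grep -c 'Resetting controller successful' "$LOG"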
00:21:30.696 [2024-12-03 00:55:36.524184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:72208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.696 [2024-12-03 00:55:36.524244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.696 [2024-12-03 00:55:36.524270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:71712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.696 [2024-12-03 00:55:36.524285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.696 [2024-12-03 00:55:36.524299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:71736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.696 [2024-12-03 00:55:36.524312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.696 [2024-12-03 00:55:36.524326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:71744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.696 [2024-12-03 00:55:36.524338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.696 [2024-12-03 00:55:36.524351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:71760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.696 [2024-12-03 00:55:36.524364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.696 [2024-12-03 00:55:36.524403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:71768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.696 [2024-12-03 00:55:36.524431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.696 [2024-12-03 00:55:36.524446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:71800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.696 [2024-12-03 00:55:36.524462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.696 [2024-12-03 00:55:36.524475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:71824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.696 [2024-12-03 00:55:36.524487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.696 [2024-12-03 00:55:36.524500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:71832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.696 [2024-12-03 00:55:36.524513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.696 [2024-12-03 00:55:36.524526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:72256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.696 [2024-12-03 00:55:36.524538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.696 [2024-12-03 00:55:36.524551] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:72264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.696 [2024-12-03 00:55:36.524562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.696 [2024-12-03 00:55:36.524576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:72280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:30.696 [2024-12-03 00:55:36.524588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.696 [2024-12-03 00:55:36.524601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:72288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:30.696 [2024-12-03 00:55:36.524613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.696 [2024-12-03 00:55:36.524626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:72296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.696 [2024-12-03 00:55:36.524638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.697 [2024-12-03 00:55:36.524651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:72304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.697 [2024-12-03 00:55:36.524663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.697 [2024-12-03 00:55:36.524676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:72312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.697 [2024-12-03 00:55:36.524688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.697 [2024-12-03 00:55:36.524701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:72320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:30.697 [2024-12-03 00:55:36.524717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.697 [2024-12-03 00:55:36.524731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:72328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.697 [2024-12-03 00:55:36.524751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.697 [2024-12-03 00:55:36.524772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:72336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.697 [2024-12-03 00:55:36.524784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.697 [2024-12-03 00:55:36.524800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:72344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:30.697 [2024-12-03 00:55:36.524815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.697 [2024-12-03 00:55:36.524840] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:72352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:30.697 [2024-12-03 00:55:36.524861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.697 [2024-12-03 00:55:36.524875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:72360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.697 [2024-12-03 00:55:36.524888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.697 [2024-12-03 00:55:36.524902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:72368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.697 [2024-12-03 00:55:36.524914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.697 [2024-12-03 00:55:36.524928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:72376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.697 [2024-12-03 00:55:36.524940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.697 [2024-12-03 00:55:36.524954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:72384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.697 [2024-12-03 00:55:36.524966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.697 [2024-12-03 00:55:36.524979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:72392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:30.697 [2024-12-03 00:55:36.524992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.697 [2024-12-03 00:55:36.525005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:72400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:30.697 [2024-12-03 00:55:36.525018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.697 [2024-12-03 00:55:36.525032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:72408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:30.697 [2024-12-03 00:55:36.525044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.697 [2024-12-03 00:55:36.525058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:72416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.697 [2024-12-03 00:55:36.525070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.697 [2024-12-03 00:55:36.525083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:72424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:30.697 [2024-12-03 00:55:36.525096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.697 [2024-12-03 00:55:36.525109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:72432 
len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.697 [2024-12-03 00:55:36.525128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.697 [2024-12-03 00:55:36.525142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:72440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:30.697 [2024-12-03 00:55:36.525155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.697 [2024-12-03 00:55:36.525169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:72448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:30.697 [2024-12-03 00:55:36.525182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.697 [2024-12-03 00:55:36.525195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:72456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.697 [2024-12-03 00:55:36.525208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.697 [2024-12-03 00:55:36.525221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:72464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:30.697 [2024-12-03 00:55:36.525234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.697 [2024-12-03 00:55:36.525248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:72472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:30.697 [2024-12-03 00:55:36.525261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.697 [2024-12-03 00:55:36.525274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:71840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.697 [2024-12-03 00:55:36.525287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.697 [2024-12-03 00:55:36.525302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:71848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.697 [2024-12-03 00:55:36.525314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.697 [2024-12-03 00:55:36.525328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:71856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.697 [2024-12-03 00:55:36.525340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.697 [2024-12-03 00:55:36.525354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:71864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.697 [2024-12-03 00:55:36.525367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.697 [2024-12-03 00:55:36.525380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:71872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:21:30.697 [2024-12-03 00:55:36.525392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.697 [2024-12-03 00:55:36.525406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:71880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.697 [2024-12-03 00:55:36.525437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.697 [2024-12-03 00:55:36.525463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:71888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.697 [2024-12-03 00:55:36.525476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.697 [2024-12-03 00:55:36.525498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:71896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.697 [2024-12-03 00:55:36.525512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.697 [2024-12-03 00:55:36.525526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:72480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.697 [2024-12-03 00:55:36.525538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.697 [2024-12-03 00:55:36.525551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:72488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.697 [2024-12-03 00:55:36.525564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.698 [2024-12-03 00:55:36.525578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:72496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.698 [2024-12-03 00:55:36.525591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.698 [2024-12-03 00:55:36.525604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:72504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.698 [2024-12-03 00:55:36.525616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.698 [2024-12-03 00:55:36.525629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:72512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.698 [2024-12-03 00:55:36.525642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.698 [2024-12-03 00:55:36.525656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:72520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.698 [2024-12-03 00:55:36.525669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.698 [2024-12-03 00:55:36.525682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:72528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.698 [2024-12-03 00:55:36.525695] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.698 [2024-12-03 00:55:36.525709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:72536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:30.698 [2024-12-03 00:55:36.525722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.698 [2024-12-03 00:55:36.525735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:71912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.698 [2024-12-03 00:55:36.525748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.698 [2024-12-03 00:55:36.525762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:71928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.698 [2024-12-03 00:55:36.525775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.698 [2024-12-03 00:55:36.525789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:71968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.698 [2024-12-03 00:55:36.525801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.698 [2024-12-03 00:55:36.525814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:71976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.698 [2024-12-03 00:55:36.525843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.698 [2024-12-03 00:55:36.525858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:71984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.698 [2024-12-03 00:55:36.525871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.698 [2024-12-03 00:55:36.525884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:72000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.698 [2024-12-03 00:55:36.525896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.698 [2024-12-03 00:55:36.525910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:72016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.698 [2024-12-03 00:55:36.525924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.698 [2024-12-03 00:55:36.525937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:72024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.698 [2024-12-03 00:55:36.525949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.698 [2024-12-03 00:55:36.525963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:72544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.698 [2024-12-03 00:55:36.525975] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.698 [2024-12-03 00:55:36.525989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:72048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.698 [2024-12-03 00:55:36.526001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.698 [2024-12-03 00:55:36.526014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:72080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.698 [2024-12-03 00:55:36.526027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.698 [2024-12-03 00:55:36.526040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:72096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.698 [2024-12-03 00:55:36.526053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.698 [2024-12-03 00:55:36.526066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:72112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.698 [2024-12-03 00:55:36.526079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.698 [2024-12-03 00:55:36.526093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:72128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.698 [2024-12-03 00:55:36.526106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.698 [2024-12-03 00:55:36.526119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:72152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.698 [2024-12-03 00:55:36.526140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.698 [2024-12-03 00:55:36.526157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:72160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.698 [2024-12-03 00:55:36.526169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.698 [2024-12-03 00:55:36.526189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:72176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.698 [2024-12-03 00:55:36.526204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.698 [2024-12-03 00:55:36.526218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:72552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:30.698 [2024-12-03 00:55:36.526230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.698 [2024-12-03 00:55:36.526245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:72560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.699 [2024-12-03 00:55:36.526257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.699 [2024-12-03 00:55:36.526271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:72568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.699 [2024-12-03 00:55:36.526283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.699 [2024-12-03 00:55:36.526296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:72576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:30.699 [2024-12-03 00:55:36.526309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.699 [2024-12-03 00:55:36.526323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:72584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.699 [2024-12-03 00:55:36.526335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.699 [2024-12-03 00:55:36.526349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:72592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:30.699 [2024-12-03 00:55:36.526361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.699 [2024-12-03 00:55:36.526374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:72600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:30.699 [2024-12-03 00:55:36.526387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.699 [2024-12-03 00:55:36.526400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:72608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.699 [2024-12-03 00:55:36.526422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.699 [2024-12-03 00:55:36.526438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:72616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.699 [2024-12-03 00:55:36.526452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.699 [2024-12-03 00:55:36.526474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:72624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:30.699 [2024-12-03 00:55:36.526486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.699 [2024-12-03 00:55:36.526500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:72632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:30.699 [2024-12-03 00:55:36.526512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.699 [2024-12-03 00:55:36.526526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:72640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:30.699 [2024-12-03 00:55:36.526538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:21:30.699 [2024-12-03 00:55:36.526558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:72648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:30.699 [2024-12-03 00:55:36.526571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.699 [2024-12-03 00:55:36.526585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:72656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.699 [2024-12-03 00:55:36.526598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.699 [2024-12-03 00:55:36.526612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:72664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.699 [2024-12-03 00:55:36.526625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.699 [2024-12-03 00:55:36.526638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:72672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:30.699 [2024-12-03 00:55:36.526651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.699 [2024-12-03 00:55:36.526665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:72680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.699 [2024-12-03 00:55:36.526677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.699 [2024-12-03 00:55:36.526691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:72688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.699 [2024-12-03 00:55:36.526703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.699 [2024-12-03 00:55:36.526716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:72696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:30.699 [2024-12-03 00:55:36.526729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.699 [2024-12-03 00:55:36.526752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:72704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:30.699 [2024-12-03 00:55:36.526764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.699 [2024-12-03 00:55:36.526778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:72712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.699 [2024-12-03 00:55:36.526790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.699 [2024-12-03 00:55:36.526803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:72720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.699 [2024-12-03 00:55:36.526816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.699 
[2024-12-03 00:55:36.526829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:72728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:30.699 [2024-12-03 00:55:36.526841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.699 [2024-12-03 00:55:36.526855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:72736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:30.699 [2024-12-03 00:55:36.526868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.699 [2024-12-03 00:55:36.526881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:72744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:30.699 [2024-12-03 00:55:36.526907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.699 [2024-12-03 00:55:36.526921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:72752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.699 [2024-12-03 00:55:36.526934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.699 [2024-12-03 00:55:36.526949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:72760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.699 [2024-12-03 00:55:36.526962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.699 [2024-12-03 00:55:36.526975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:72768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:30.699 [2024-12-03 00:55:36.526987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.699 [2024-12-03 00:55:36.527002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:72776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.699 [2024-12-03 00:55:36.527014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.699 [2024-12-03 00:55:36.527028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:72784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.699 [2024-12-03 00:55:36.527040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.699 [2024-12-03 00:55:36.527053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:72792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.699 [2024-12-03 00:55:36.527065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.699 [2024-12-03 00:55:36.527079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:72800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.699 [2024-12-03 00:55:36.527091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.700 [2024-12-03 00:55:36.527105] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:72808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:30.700 [2024-12-03 00:55:36.527117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.700 [2024-12-03 00:55:36.527131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:72816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.700 [2024-12-03 00:55:36.527143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.700 [2024-12-03 00:55:36.527157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:72824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.700 [2024-12-03 00:55:36.527169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.700 [2024-12-03 00:55:36.527184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:72832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.700 [2024-12-03 00:55:36.527197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.700 [2024-12-03 00:55:36.527210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:72840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.700 [2024-12-03 00:55:36.527222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.700 [2024-12-03 00:55:36.527241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:72848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.700 [2024-12-03 00:55:36.527255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.700 [2024-12-03 00:55:36.527268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:72856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:30.700 [2024-12-03 00:55:36.527280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.700 [2024-12-03 00:55:36.527294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:72864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.700 [2024-12-03 00:55:36.527306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.700 [2024-12-03 00:55:36.527320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:72872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:30.700 [2024-12-03 00:55:36.527333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.700 [2024-12-03 00:55:36.527347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:72880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.700 [2024-12-03 00:55:36.527366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.700 [2024-12-03 00:55:36.527380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:104 nsid:1 lba:72888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.700 [2024-12-03 00:55:36.527393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.700 [2024-12-03 00:55:36.527406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:72896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:30.700 [2024-12-03 00:55:36.527430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.700 [2024-12-03 00:55:36.527444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:72904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.700 [2024-12-03 00:55:36.527457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.700 [2024-12-03 00:55:36.527471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:72912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:30.700 [2024-12-03 00:55:36.527484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.700 [2024-12-03 00:55:36.527498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:72920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:30.700 [2024-12-03 00:55:36.527510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.700 [2024-12-03 00:55:36.527523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:72928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:30.700 [2024-12-03 00:55:36.527536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.700 [2024-12-03 00:55:36.527549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:72936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.700 [2024-12-03 00:55:36.527562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.700 [2024-12-03 00:55:36.527575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:72944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:30.700 [2024-12-03 00:55:36.527593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.700 [2024-12-03 00:55:36.527608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:72952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.700 [2024-12-03 00:55:36.527620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.700 [2024-12-03 00:55:36.527634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:72192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.700 [2024-12-03 00:55:36.527646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.700 [2024-12-03 00:55:36.527660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:72200 len:8 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.700 [2024-12-03 00:55:36.527672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.700 [2024-12-03 00:55:36.527685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:72216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.700 [2024-12-03 00:55:36.527697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.700 [2024-12-03 00:55:36.527711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:72224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.700 [2024-12-03 00:55:36.527722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.700 [2024-12-03 00:55:36.527736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:72232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.700 [2024-12-03 00:55:36.527748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.700 [2024-12-03 00:55:36.527761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:72240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.700 [2024-12-03 00:55:36.527774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.700 [2024-12-03 00:55:36.527789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:72248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.700 [2024-12-03 00:55:36.527807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.700 [2024-12-03 00:55:36.527821] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7cd060 is same with the state(5) to be set 00:21:30.700 [2024-12-03 00:55:36.527836] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:30.700 [2024-12-03 00:55:36.527847] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:30.700 [2024-12-03 00:55:36.527857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:72272 len:8 PRP1 0x0 PRP2 0x0 00:21:30.700 [2024-12-03 00:55:36.527870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.700 [2024-12-03 00:55:36.527933] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x7cd060 was disconnected and freed. reset controller. 
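The wall of ABORTED - SQ DELETION (00/08) completions above is the bdev_nvme layer draining a TCP qpair: when the qpair (0x7cd060 here) is disconnected for failover, every queued READ/WRITE on that submission queue is completed manually with that status before the controller is reset. When triaging a capture like this, a quick way to gauge how much I/O was aborted and how many failovers ran to completion is a plain grep over the saved console output; the file name below is only a placeholder, the test itself writes its bdevperf output to try.txt as shown further down.

  # Rough triage of a saved capture (file name is hypothetical).
  LOG=bdevperf_console.log
  grep -c 'ABORTED - SQ DELETION'            "$LOG"   # aborted queued commands
  grep -c 'bdev_nvme_failover_trid'          "$LOG"   # failover attempts
  grep -c 'Resetting controller successful'  "$LOG"   # failovers that completed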
00:21:30.700 [2024-12-03 00:55:36.527951] bdev_nvme.c:1843:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4422 to 10.0.0.2:4420 00:21:30.700 [2024-12-03 00:55:36.528003] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:21:30.700 [2024-12-03 00:55:36.528022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.700 [2024-12-03 00:55:36.528035] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:21:30.701 [2024-12-03 00:55:36.528056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.701 [2024-12-03 00:55:36.528070] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:21:30.701 [2024-12-03 00:55:36.528082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.701 [2024-12-03 00:55:36.528095] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:21:30.701 [2024-12-03 00:55:36.528107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.701 [2024-12-03 00:55:36.528119] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:21:30.701 [2024-12-03 00:55:36.528151] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6c6cb0 (9): Bad file descriptor 00:21:30.701 [2024-12-03 00:55:36.530271] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:30.701 [2024-12-03 00:55:36.554076] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:21:30.701 00:21:30.701 Latency(us) 00:21:30.701 [2024-12-03T00:55:43.216Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:30.701 [2024-12-03T00:55:43.216Z] Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:21:30.701 Verification LBA range: start 0x0 length 0x4000 00:21:30.701 NVMe0n1 : 15.01 14933.25 58.33 271.15 0.00 8404.13 603.23 13941.29 00:21:30.701 [2024-12-03T00:55:43.216Z] =================================================================================================================== 00:21:30.701 [2024-12-03T00:55:43.216Z] Total : 14933.25 58.33 271.15 0.00 8404.13 603.23 13941.29 00:21:30.701 Received shutdown signal, test time was about 15.000000 seconds 00:21:30.701 00:21:30.701 Latency(us) 00:21:30.701 [2024-12-03T00:55:43.216Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:30.701 [2024-12-03T00:55:43.216Z] =================================================================================================================== 00:21:30.701 [2024-12-03T00:55:43.216Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:21:30.701 00:55:42 -- host/failover.sh@65 -- # grep -c 'Resetting controller successful' 00:21:30.701 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:21:30.701 00:55:42 -- host/failover.sh@65 -- # count=3 00:21:30.701 00:55:42 -- host/failover.sh@67 -- # (( count != 3 )) 00:21:30.701 00:55:42 -- host/failover.sh@73 -- # bdevperf_pid=95916 00:21:30.701 00:55:42 -- host/failover.sh@75 -- # waitforlisten 95916 /var/tmp/bdevperf.sock 00:21:30.701 00:55:42 -- common/autotest_common.sh@829 -- # '[' -z 95916 ']' 00:21:30.701 00:55:42 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:30.701 00:55:42 -- host/failover.sh@72 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f 00:21:30.701 00:55:42 -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:30.701 00:55:42 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:30.701 00:55:42 -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:30.701 00:55:42 -- common/autotest_common.sh@10 -- # set +x 00:21:31.269 00:55:43 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:31.269 00:55:43 -- common/autotest_common.sh@862 -- # return 0 00:21:31.269 00:55:43 -- host/failover.sh@76 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:21:31.529 [2024-12-03 00:55:43.805224] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:21:31.529 00:55:43 -- host/failover.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:21:31.529 [2024-12-03 00:55:44.013376] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:21:31.529 00:55:44 -- host/failover.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:21:31.802 NVMe0n1 00:21:32.083 00:55:44 -- host/failover.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:21:32.083 00:21:32.083 00:55:44 -- host/failover.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:21:32.348 00:21:32.348 00:55:44 -- host/failover.sh@82 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:21:32.348 00:55:44 -- host/failover.sh@82 -- # grep -q NVMe0 00:21:32.607 00:55:45 -- host/failover.sh@84 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:21:32.865 00:55:45 -- host/failover.sh@87 -- # sleep 3 00:21:36.151 00:55:48 -- host/failover.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:21:36.151 00:55:48 -- host/failover.sh@88 -- # grep -q NVMe0 00:21:36.151 00:55:48 -- host/failover.sh@90 -- # run_test_pid=96053 00:21:36.151 00:55:48 -- host/failover.sh@89 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:21:36.151 00:55:48 -- host/failover.sh@92 -- # wait 96053 00:21:37.551 0 00:21:37.551 00:55:49 -- host/failover.sh@94 -- # cat 
/home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:21:37.551 [2024-12-03 00:55:42.632225] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:21:37.551 [2024-12-03 00:55:42.632896] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid95916 ] 00:21:37.551 [2024-12-03 00:55:42.771165] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:37.551 [2024-12-03 00:55:42.828580] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:21:37.551 [2024-12-03 00:55:45.308353] bdev_nvme.c:1843:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:21:37.551 [2024-12-03 00:55:45.308464] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:21:37.551 [2024-12-03 00:55:45.308487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.551 [2024-12-03 00:55:45.308502] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:21:37.551 [2024-12-03 00:55:45.308514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.551 [2024-12-03 00:55:45.308528] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:21:37.551 [2024-12-03 00:55:45.308539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.551 [2024-12-03 00:55:45.308551] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:21:37.551 [2024-12-03 00:55:45.308563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.551 [2024-12-03 00:55:45.308575] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:21:37.551 [2024-12-03 00:55:45.308611] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:37.551 [2024-12-03 00:55:45.308638] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2098cb0 (9): Bad file descriptor 00:21:37.551 [2024-12-03 00:55:45.316441] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:21:37.551 Running I/O for 1 seconds... 
00:21:37.551 00:21:37.551 Latency(us) 00:21:37.551 [2024-12-03T00:55:50.066Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:37.551 [2024-12-03T00:55:50.066Z] Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:21:37.551 Verification LBA range: start 0x0 length 0x4000 00:21:37.551 NVMe0n1 : 1.01 15008.75 58.63 0.00 0.00 8494.72 1124.54 10187.87 00:21:37.551 [2024-12-03T00:55:50.066Z] =================================================================================================================== 00:21:37.551 [2024-12-03T00:55:50.066Z] Total : 15008.75 58.63 0.00 0.00 8494.72 1124.54 10187.87 00:21:37.551 00:55:49 -- host/failover.sh@95 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:21:37.551 00:55:49 -- host/failover.sh@95 -- # grep -q NVMe0 00:21:37.551 00:55:50 -- host/failover.sh@98 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:21:37.810 00:55:50 -- host/failover.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:21:37.810 00:55:50 -- host/failover.sh@99 -- # grep -q NVMe0 00:21:38.378 00:55:50 -- host/failover.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:21:38.378 00:55:50 -- host/failover.sh@101 -- # sleep 3 00:21:41.685 00:55:53 -- host/failover.sh@103 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:21:41.685 00:55:53 -- host/failover.sh@103 -- # grep -q NVMe0 00:21:41.685 00:55:54 -- host/failover.sh@108 -- # killprocess 95916 00:21:41.685 00:55:54 -- common/autotest_common.sh@936 -- # '[' -z 95916 ']' 00:21:41.685 00:55:54 -- common/autotest_common.sh@940 -- # kill -0 95916 00:21:41.685 00:55:54 -- common/autotest_common.sh@941 -- # uname 00:21:41.685 00:55:54 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:21:41.685 00:55:54 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 95916 00:21:41.685 killing process with pid 95916 00:21:41.685 00:55:54 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:21:41.685 00:55:54 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:21:41.685 00:55:54 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 95916' 00:21:41.685 00:55:54 -- common/autotest_common.sh@955 -- # kill 95916 00:21:41.685 00:55:54 -- common/autotest_common.sh@960 -- # wait 95916 00:21:41.945 00:55:54 -- host/failover.sh@110 -- # sync 00:21:41.945 00:55:54 -- host/failover.sh@111 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:21:42.204 00:55:54 -- host/failover.sh@113 -- # trap - SIGINT SIGTERM EXIT 00:21:42.204 00:55:54 -- host/failover.sh@115 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:21:42.204 00:55:54 -- host/failover.sh@116 -- # nvmftestfini 00:21:42.204 00:55:54 -- nvmf/common.sh@476 -- # nvmfcleanup 00:21:42.204 00:55:54 -- nvmf/common.sh@116 -- # sync 00:21:42.204 00:55:54 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:21:42.204 00:55:54 -- nvmf/common.sh@119 -- # set +e 00:21:42.204 00:55:54 -- nvmf/common.sh@120 -- # for i in {1..20} 00:21:42.204 00:55:54 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:21:42.204 rmmod nvme_tcp 
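Stripped of the xtrace noise, the failover exercise above is a short RPC sequence: the target gets extra listeners on ports 4421 and 4422, bdevperf attaches NVMe0 once per path so bdev_nvme has somewhere to fail over to, and the script then detaches the active path and later checks the log for 'Resetting controller successful'. The sketch below is a condensed paraphrase of those trace lines (addresses, ports and the subsystem NQN are copied from the trace, the shell variables are only shorthand); it is not a substitute for failover.sh itself.

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  sock=/var/tmp/bdevperf.sock
  nqn=nqn.2016-06.io.spdk:cnode1

  # Secondary target listeners for the extra paths.
  $rpc nvmf_subsystem_add_listener $nqn -t tcp -a 10.0.0.2 -s 4421
  $rpc nvmf_subsystem_add_listener $nqn -t tcp -a 10.0.0.2 -s 4422

  # One bdev_nvme path per listener, all under the same controller name.
  for port in 4420 4421 4422; do
      $rpc -s $sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s $port -f ipv4 -n $nqn
  done

  # Remove the active path; bdev_nvme resets and moves to the next one.
  $rpc -s $sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n $nqn
  sleep 3
  $rpc -s $sock bdev_nvme_get_controllers | grep -q NVMe0   # the controller must still be present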
00:21:42.204 rmmod nvme_fabrics 00:21:42.204 rmmod nvme_keyring 00:21:42.204 00:55:54 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:21:42.204 00:55:54 -- nvmf/common.sh@123 -- # set -e 00:21:42.204 00:55:54 -- nvmf/common.sh@124 -- # return 0 00:21:42.204 00:55:54 -- nvmf/common.sh@477 -- # '[' -n 95553 ']' 00:21:42.204 00:55:54 -- nvmf/common.sh@478 -- # killprocess 95553 00:21:42.204 00:55:54 -- common/autotest_common.sh@936 -- # '[' -z 95553 ']' 00:21:42.204 00:55:54 -- common/autotest_common.sh@940 -- # kill -0 95553 00:21:42.204 00:55:54 -- common/autotest_common.sh@941 -- # uname 00:21:42.204 00:55:54 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:21:42.204 00:55:54 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 95553 00:21:42.204 killing process with pid 95553 00:21:42.204 00:55:54 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:21:42.204 00:55:54 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:21:42.204 00:55:54 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 95553' 00:21:42.204 00:55:54 -- common/autotest_common.sh@955 -- # kill 95553 00:21:42.204 00:55:54 -- common/autotest_common.sh@960 -- # wait 95553 00:21:42.786 00:55:55 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:21:42.786 00:55:55 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:21:42.786 00:55:55 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:21:42.786 00:55:55 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:21:42.786 00:55:55 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:21:42.786 00:55:55 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:42.786 00:55:55 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:42.786 00:55:55 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:42.786 00:55:55 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:21:42.786 00:21:42.786 real 0m32.944s 00:21:42.786 user 2m7.278s 00:21:42.786 sys 0m5.053s 00:21:42.786 00:55:55 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:21:42.786 00:55:55 -- common/autotest_common.sh@10 -- # set +x 00:21:42.786 ************************************ 00:21:42.786 END TEST nvmf_failover 00:21:42.786 ************************************ 00:21:42.786 00:55:55 -- nvmf/nvmf.sh@101 -- # run_test nvmf_discovery /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:21:42.786 00:55:55 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:21:42.786 00:55:55 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:21:42.786 00:55:55 -- common/autotest_common.sh@10 -- # set +x 00:21:42.786 ************************************ 00:21:42.786 START TEST nvmf_discovery 00:21:42.786 ************************************ 00:21:42.786 00:55:55 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:21:42.786 * Looking for test storage... 
00:21:42.786 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:21:42.786 00:55:55 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:21:42.786 00:55:55 -- common/autotest_common.sh@1690 -- # lcov --version 00:21:42.786 00:55:55 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:21:42.786 00:55:55 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:21:42.786 00:55:55 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:21:42.786 00:55:55 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:21:42.786 00:55:55 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:21:42.786 00:55:55 -- scripts/common.sh@335 -- # IFS=.-: 00:21:42.786 00:55:55 -- scripts/common.sh@335 -- # read -ra ver1 00:21:42.786 00:55:55 -- scripts/common.sh@336 -- # IFS=.-: 00:21:42.786 00:55:55 -- scripts/common.sh@336 -- # read -ra ver2 00:21:42.786 00:55:55 -- scripts/common.sh@337 -- # local 'op=<' 00:21:42.786 00:55:55 -- scripts/common.sh@339 -- # ver1_l=2 00:21:42.786 00:55:55 -- scripts/common.sh@340 -- # ver2_l=1 00:21:42.786 00:55:55 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:21:42.786 00:55:55 -- scripts/common.sh@343 -- # case "$op" in 00:21:42.786 00:55:55 -- scripts/common.sh@344 -- # : 1 00:21:42.786 00:55:55 -- scripts/common.sh@363 -- # (( v = 0 )) 00:21:42.786 00:55:55 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:21:42.786 00:55:55 -- scripts/common.sh@364 -- # decimal 1 00:21:42.786 00:55:55 -- scripts/common.sh@352 -- # local d=1 00:21:42.786 00:55:55 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:21:42.786 00:55:55 -- scripts/common.sh@354 -- # echo 1 00:21:42.786 00:55:55 -- scripts/common.sh@364 -- # ver1[v]=1 00:21:42.786 00:55:55 -- scripts/common.sh@365 -- # decimal 2 00:21:42.786 00:55:55 -- scripts/common.sh@352 -- # local d=2 00:21:42.786 00:55:55 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:21:42.786 00:55:55 -- scripts/common.sh@354 -- # echo 2 00:21:42.786 00:55:55 -- scripts/common.sh@365 -- # ver2[v]=2 00:21:42.786 00:55:55 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:21:42.786 00:55:55 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:21:42.786 00:55:55 -- scripts/common.sh@367 -- # return 0 00:21:42.786 00:55:55 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:21:42.786 00:55:55 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:21:42.786 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:42.786 --rc genhtml_branch_coverage=1 00:21:42.786 --rc genhtml_function_coverage=1 00:21:42.786 --rc genhtml_legend=1 00:21:42.786 --rc geninfo_all_blocks=1 00:21:42.786 --rc geninfo_unexecuted_blocks=1 00:21:42.786 00:21:42.786 ' 00:21:42.786 00:55:55 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:21:42.786 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:42.786 --rc genhtml_branch_coverage=1 00:21:42.786 --rc genhtml_function_coverage=1 00:21:42.786 --rc genhtml_legend=1 00:21:42.786 --rc geninfo_all_blocks=1 00:21:42.786 --rc geninfo_unexecuted_blocks=1 00:21:42.786 00:21:42.786 ' 00:21:42.786 00:55:55 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:21:42.786 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:42.786 --rc genhtml_branch_coverage=1 00:21:42.786 --rc genhtml_function_coverage=1 00:21:42.786 --rc genhtml_legend=1 00:21:42.786 --rc geninfo_all_blocks=1 00:21:42.786 --rc geninfo_unexecuted_blocks=1 00:21:42.786 00:21:42.786 ' 00:21:42.786 
00:55:55 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:21:42.786 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:42.786 --rc genhtml_branch_coverage=1 00:21:42.786 --rc genhtml_function_coverage=1 00:21:42.786 --rc genhtml_legend=1 00:21:42.786 --rc geninfo_all_blocks=1 00:21:42.786 --rc geninfo_unexecuted_blocks=1 00:21:42.786 00:21:42.786 ' 00:21:42.786 00:55:55 -- host/discovery.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:21:42.786 00:55:55 -- nvmf/common.sh@7 -- # uname -s 00:21:42.786 00:55:55 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:42.786 00:55:55 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:42.786 00:55:55 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:42.786 00:55:55 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:42.786 00:55:55 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:42.786 00:55:55 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:42.786 00:55:55 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:42.786 00:55:55 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:42.786 00:55:55 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:42.786 00:55:55 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:42.786 00:55:55 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:15939434-fa82-47b6-ae7c-b62ba203cee8 00:21:42.786 00:55:55 -- nvmf/common.sh@18 -- # NVME_HOSTID=15939434-fa82-47b6-ae7c-b62ba203cee8 00:21:42.786 00:55:55 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:42.786 00:55:55 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:42.786 00:55:55 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:21:42.786 00:55:55 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:21:42.786 00:55:55 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:42.786 00:55:55 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:42.786 00:55:55 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:42.786 00:55:55 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:42.786 00:55:55 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:42.786 00:55:55 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:42.786 00:55:55 -- paths/export.sh@5 -- # export PATH 00:21:42.786 00:55:55 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:42.786 00:55:55 -- nvmf/common.sh@46 -- # : 0 00:21:42.786 00:55:55 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:21:42.786 00:55:55 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:21:42.786 00:55:55 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:21:42.786 00:55:55 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:42.786 00:55:55 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:42.786 00:55:55 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:21:42.786 00:55:55 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:21:42.786 00:55:55 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:21:42.786 00:55:55 -- host/discovery.sh@11 -- # '[' tcp == rdma ']' 00:21:42.786 00:55:55 -- host/discovery.sh@16 -- # DISCOVERY_PORT=8009 00:21:42.786 00:55:55 -- host/discovery.sh@17 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:21:42.786 00:55:55 -- host/discovery.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode 00:21:42.786 00:55:55 -- host/discovery.sh@22 -- # HOST_NQN=nqn.2021-12.io.spdk:test 00:21:42.786 00:55:55 -- host/discovery.sh@23 -- # HOST_SOCK=/tmp/host.sock 00:21:42.786 00:55:55 -- host/discovery.sh@25 -- # nvmftestinit 00:21:42.786 00:55:55 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:21:42.786 00:55:55 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:42.786 00:55:55 -- nvmf/common.sh@436 -- # prepare_net_devs 00:21:42.786 00:55:55 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:21:42.786 00:55:55 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:21:42.786 00:55:55 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:42.786 00:55:55 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:42.786 00:55:55 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:42.786 00:55:55 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:21:42.786 00:55:55 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:21:42.786 00:55:55 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:21:42.786 00:55:55 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:21:42.786 00:55:55 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:21:42.786 00:55:55 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:21:42.786 00:55:55 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:42.786 00:55:55 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:42.786 00:55:55 -- 
nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:21:42.786 00:55:55 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:21:42.786 00:55:55 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:21:42.786 00:55:55 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:21:42.786 00:55:55 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:21:42.786 00:55:55 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:42.786 00:55:55 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:21:42.786 00:55:55 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:21:42.786 00:55:55 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:21:42.786 00:55:55 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:21:42.786 00:55:55 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:21:43.044 00:55:55 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:21:43.044 Cannot find device "nvmf_tgt_br" 00:21:43.044 00:55:55 -- nvmf/common.sh@154 -- # true 00:21:43.044 00:55:55 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:21:43.044 Cannot find device "nvmf_tgt_br2" 00:21:43.044 00:55:55 -- nvmf/common.sh@155 -- # true 00:21:43.044 00:55:55 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:21:43.044 00:55:55 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:21:43.044 Cannot find device "nvmf_tgt_br" 00:21:43.044 00:55:55 -- nvmf/common.sh@157 -- # true 00:21:43.044 00:55:55 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:21:43.044 Cannot find device "nvmf_tgt_br2" 00:21:43.044 00:55:55 -- nvmf/common.sh@158 -- # true 00:21:43.044 00:55:55 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:21:43.044 00:55:55 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:21:43.044 00:55:55 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:21:43.044 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:21:43.044 00:55:55 -- nvmf/common.sh@161 -- # true 00:21:43.044 00:55:55 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:21:43.044 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:21:43.044 00:55:55 -- nvmf/common.sh@162 -- # true 00:21:43.044 00:55:55 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:21:43.044 00:55:55 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:21:43.044 00:55:55 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:21:43.044 00:55:55 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:21:43.044 00:55:55 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:21:43.044 00:55:55 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:21:43.044 00:55:55 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:21:43.044 00:55:55 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:21:43.044 00:55:55 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:21:43.044 00:55:55 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:21:43.044 00:55:55 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:21:43.044 00:55:55 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:21:43.044 00:55:55 -- 
nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:21:43.044 00:55:55 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:21:43.044 00:55:55 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:21:43.044 00:55:55 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:21:43.044 00:55:55 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:21:43.044 00:55:55 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:21:43.044 00:55:55 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:21:43.044 00:55:55 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:21:43.302 00:55:55 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:21:43.302 00:55:55 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:21:43.302 00:55:55 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:21:43.302 00:55:55 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:21:43.302 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:43.302 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.061 ms 00:21:43.302 00:21:43.302 --- 10.0.0.2 ping statistics --- 00:21:43.302 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:43.302 rtt min/avg/max/mdev = 0.061/0.061/0.061/0.000 ms 00:21:43.302 00:55:55 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:21:43.302 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:21:43.302 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.072 ms 00:21:43.302 00:21:43.302 --- 10.0.0.3 ping statistics --- 00:21:43.302 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:43.302 rtt min/avg/max/mdev = 0.072/0.072/0.072/0.000 ms 00:21:43.302 00:55:55 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:21:43.302 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:21:43.302 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.037 ms 00:21:43.302 00:21:43.302 --- 10.0.0.1 ping statistics --- 00:21:43.302 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:43.302 rtt min/avg/max/mdev = 0.037/0.037/0.037/0.000 ms 00:21:43.302 00:55:55 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:43.302 00:55:55 -- nvmf/common.sh@421 -- # return 0 00:21:43.302 00:55:55 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:21:43.302 00:55:55 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:43.302 00:55:55 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:21:43.302 00:55:55 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:21:43.303 00:55:55 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:43.303 00:55:55 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:21:43.303 00:55:55 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:21:43.303 00:55:55 -- host/discovery.sh@30 -- # nvmfappstart -m 0x2 00:21:43.303 00:55:55 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:21:43.303 00:55:55 -- common/autotest_common.sh@722 -- # xtrace_disable 00:21:43.303 00:55:55 -- common/autotest_common.sh@10 -- # set +x 00:21:43.303 00:55:55 -- nvmf/common.sh@469 -- # nvmfpid=96369 00:21:43.303 00:55:55 -- nvmf/common.sh@470 -- # waitforlisten 96369 00:21:43.303 00:55:55 -- common/autotest_common.sh@829 -- # '[' -z 96369 ']' 00:21:43.303 00:55:55 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:43.303 00:55:55 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:21:43.303 00:55:55 -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:43.303 00:55:55 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:43.303 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:43.303 00:55:55 -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:43.303 00:55:55 -- common/autotest_common.sh@10 -- # set +x 00:21:43.303 [2024-12-03 00:55:55.675567] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:21:43.303 [2024-12-03 00:55:55.675644] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:43.303 [2024-12-03 00:55:55.800690] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:43.561 [2024-12-03 00:55:55.875721] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:21:43.561 [2024-12-03 00:55:55.875877] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:43.561 [2024-12-03 00:55:55.875893] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:43.561 [2024-12-03 00:55:55.875902] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
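The "Cannot find device" and "Cannot open network namespace" messages a little further up are expected on a clean runner: nvmf_veth_init first tears down any leftover topology and then rebuilds it. Condensed from the trace, the resulting layout is a target namespace nvmf_tgt_ns_spdk holding 10.0.0.2 and 10.0.0.3, an initiator interface with 10.0.0.1 left in the root namespace, and a bridge nvmf_br joining the veth peers; the three pings above verify the data path before the target app starts. A stripped-down sketch of that setup, with the second target interface, teardown and error handling omitted:

  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br
  ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk

  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if

  ip link add nvmf_br type bridge
  ip link set nvmf_init_if up
  ip link set nvmf_init_br up
  ip link set nvmf_tgt_br up
  ip link set nvmf_br up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br master nvmf_br

  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2   # root namespace -> target namespace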
00:21:43.561 [2024-12-03 00:55:55.875933] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:21:44.129 00:55:56 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:44.129 00:55:56 -- common/autotest_common.sh@862 -- # return 0 00:21:44.129 00:55:56 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:21:44.129 00:55:56 -- common/autotest_common.sh@728 -- # xtrace_disable 00:21:44.129 00:55:56 -- common/autotest_common.sh@10 -- # set +x 00:21:44.388 00:55:56 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:44.388 00:55:56 -- host/discovery.sh@32 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:21:44.388 00:55:56 -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:44.388 00:55:56 -- common/autotest_common.sh@10 -- # set +x 00:21:44.388 [2024-12-03 00:55:56.687099] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:44.388 00:55:56 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:44.388 00:55:56 -- host/discovery.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009 00:21:44.388 00:55:56 -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:44.388 00:55:56 -- common/autotest_common.sh@10 -- # set +x 00:21:44.388 [2024-12-03 00:55:56.695286] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:21:44.388 00:55:56 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:44.388 00:55:56 -- host/discovery.sh@35 -- # rpc_cmd bdev_null_create null0 1000 512 00:21:44.388 00:55:56 -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:44.388 00:55:56 -- common/autotest_common.sh@10 -- # set +x 00:21:44.388 null0 00:21:44.388 00:55:56 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:44.388 00:55:56 -- host/discovery.sh@36 -- # rpc_cmd bdev_null_create null1 1000 512 00:21:44.388 00:55:56 -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:44.388 00:55:56 -- common/autotest_common.sh@10 -- # set +x 00:21:44.388 null1 00:21:44.388 00:55:56 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:44.388 00:55:56 -- host/discovery.sh@37 -- # rpc_cmd bdev_wait_for_examine 00:21:44.388 00:55:56 -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:44.388 00:55:56 -- common/autotest_common.sh@10 -- # set +x 00:21:44.388 00:55:56 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:44.388 00:55:56 -- host/discovery.sh@45 -- # hostpid=96419 00:21:44.388 00:55:56 -- host/discovery.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock 00:21:44.388 00:55:56 -- host/discovery.sh@46 -- # waitforlisten 96419 /tmp/host.sock 00:21:44.388 00:55:56 -- common/autotest_common.sh@829 -- # '[' -z 96419 ']' 00:21:44.388 00:55:56 -- common/autotest_common.sh@833 -- # local rpc_addr=/tmp/host.sock 00:21:44.388 00:55:56 -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:44.388 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:21:44.388 00:55:56 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:21:44.388 00:55:56 -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:44.388 00:55:56 -- common/autotest_common.sh@10 -- # set +x 00:21:44.388 [2024-12-03 00:55:56.782073] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:21:44.388 [2024-12-03 00:55:56.782173] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid96419 ] 00:21:44.648 [2024-12-03 00:55:56.926241] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:44.648 [2024-12-03 00:55:56.999291] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:21:44.648 [2024-12-03 00:55:56.999471] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:21:45.586 00:55:57 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:45.586 00:55:57 -- common/autotest_common.sh@862 -- # return 0 00:21:45.586 00:55:57 -- host/discovery.sh@48 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:21:45.586 00:55:57 -- host/discovery.sh@50 -- # rpc_cmd -s /tmp/host.sock log_set_flag bdev_nvme 00:21:45.586 00:55:57 -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:45.586 00:55:57 -- common/autotest_common.sh@10 -- # set +x 00:21:45.586 00:55:57 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:45.586 00:55:57 -- host/discovery.sh@51 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test 00:21:45.586 00:55:57 -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:45.586 00:55:57 -- common/autotest_common.sh@10 -- # set +x 00:21:45.586 00:55:57 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:45.586 00:55:57 -- host/discovery.sh@72 -- # notify_id=0 00:21:45.586 00:55:57 -- host/discovery.sh@78 -- # get_subsystem_names 00:21:45.586 00:55:57 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:21:45.586 00:55:57 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:21:45.586 00:55:57 -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:45.586 00:55:57 -- host/discovery.sh@59 -- # xargs 00:21:45.586 00:55:57 -- common/autotest_common.sh@10 -- # set +x 00:21:45.586 00:55:57 -- host/discovery.sh@59 -- # sort 00:21:45.586 00:55:57 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:45.586 00:55:57 -- host/discovery.sh@78 -- # [[ '' == '' ]] 00:21:45.586 00:55:57 -- host/discovery.sh@79 -- # get_bdev_list 00:21:45.586 00:55:57 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:21:45.586 00:55:57 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:21:45.586 00:55:57 -- host/discovery.sh@55 -- # sort 00:21:45.586 00:55:57 -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:45.586 00:55:57 -- common/autotest_common.sh@10 -- # set +x 00:21:45.586 00:55:57 -- host/discovery.sh@55 -- # xargs 00:21:45.586 00:55:57 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:45.586 00:55:57 -- host/discovery.sh@79 -- # [[ '' == '' ]] 00:21:45.586 00:55:57 -- host/discovery.sh@81 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 00:21:45.586 00:55:57 -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:45.586 00:55:57 -- common/autotest_common.sh@10 -- # set +x 00:21:45.586 00:55:57 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:45.586 00:55:57 -- host/discovery.sh@82 -- # get_subsystem_names 00:21:45.586 00:55:57 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:21:45.586 00:55:57 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:21:45.586 00:55:57 -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:21:45.586 00:55:57 -- host/discovery.sh@59 -- # sort 00:21:45.586 00:55:57 -- common/autotest_common.sh@10 -- # set +x 00:21:45.586 00:55:57 -- host/discovery.sh@59 -- # xargs 00:21:45.586 00:55:57 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:45.586 00:55:58 -- host/discovery.sh@82 -- # [[ '' == '' ]] 00:21:45.586 00:55:58 -- host/discovery.sh@83 -- # get_bdev_list 00:21:45.586 00:55:58 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:21:45.586 00:55:58 -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:45.586 00:55:58 -- common/autotest_common.sh@10 -- # set +x 00:21:45.586 00:55:58 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:21:45.587 00:55:58 -- host/discovery.sh@55 -- # sort 00:21:45.587 00:55:58 -- host/discovery.sh@55 -- # xargs 00:21:45.587 00:55:58 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:45.587 00:55:58 -- host/discovery.sh@83 -- # [[ '' == '' ]] 00:21:45.587 00:55:58 -- host/discovery.sh@85 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 00:21:45.587 00:55:58 -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:45.587 00:55:58 -- common/autotest_common.sh@10 -- # set +x 00:21:45.587 00:55:58 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:45.587 00:55:58 -- host/discovery.sh@86 -- # get_subsystem_names 00:21:45.587 00:55:58 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:21:45.587 00:55:58 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:21:45.587 00:55:58 -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:45.587 00:55:58 -- common/autotest_common.sh@10 -- # set +x 00:21:45.587 00:55:58 -- host/discovery.sh@59 -- # xargs 00:21:45.587 00:55:58 -- host/discovery.sh@59 -- # sort 00:21:45.587 00:55:58 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:45.846 00:55:58 -- host/discovery.sh@86 -- # [[ '' == '' ]] 00:21:45.846 00:55:58 -- host/discovery.sh@87 -- # get_bdev_list 00:21:45.846 00:55:58 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:21:45.846 00:55:58 -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:45.846 00:55:58 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:21:45.846 00:55:58 -- common/autotest_common.sh@10 -- # set +x 00:21:45.846 00:55:58 -- host/discovery.sh@55 -- # sort 00:21:45.846 00:55:58 -- host/discovery.sh@55 -- # xargs 00:21:45.846 00:55:58 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:45.846 00:55:58 -- host/discovery.sh@87 -- # [[ '' == '' ]] 00:21:45.846 00:55:58 -- host/discovery.sh@91 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:21:45.846 00:55:58 -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:45.846 00:55:58 -- common/autotest_common.sh@10 -- # set +x 00:21:45.846 [2024-12-03 00:55:58.171613] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:45.846 00:55:58 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:45.846 00:55:58 -- host/discovery.sh@92 -- # get_subsystem_names 00:21:45.846 00:55:58 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:21:45.846 00:55:58 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:21:45.846 00:55:58 -- host/discovery.sh@59 -- # sort 00:21:45.846 00:55:58 -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:45.846 00:55:58 -- common/autotest_common.sh@10 -- # set +x 00:21:45.846 00:55:58 -- host/discovery.sh@59 -- # xargs 
00:21:45.846 00:55:58 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:45.846 00:55:58 -- host/discovery.sh@92 -- # [[ '' == '' ]] 00:21:45.846 00:55:58 -- host/discovery.sh@93 -- # get_bdev_list 00:21:45.846 00:55:58 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:21:45.846 00:55:58 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:21:45.846 00:55:58 -- host/discovery.sh@55 -- # xargs 00:21:45.846 00:55:58 -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:45.846 00:55:58 -- common/autotest_common.sh@10 -- # set +x 00:21:45.846 00:55:58 -- host/discovery.sh@55 -- # sort 00:21:45.846 00:55:58 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:45.846 00:55:58 -- host/discovery.sh@93 -- # [[ '' == '' ]] 00:21:45.846 00:55:58 -- host/discovery.sh@94 -- # get_notification_count 00:21:45.846 00:55:58 -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:21:45.846 00:55:58 -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:45.846 00:55:58 -- common/autotest_common.sh@10 -- # set +x 00:21:45.846 00:55:58 -- host/discovery.sh@74 -- # jq '. | length' 00:21:45.846 00:55:58 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:45.847 00:55:58 -- host/discovery.sh@74 -- # notification_count=0 00:21:45.847 00:55:58 -- host/discovery.sh@75 -- # notify_id=0 00:21:45.847 00:55:58 -- host/discovery.sh@95 -- # [[ 0 == 0 ]] 00:21:45.847 00:55:58 -- host/discovery.sh@99 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test 00:21:45.847 00:55:58 -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:45.847 00:55:58 -- common/autotest_common.sh@10 -- # set +x 00:21:45.847 00:55:58 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:45.847 00:55:58 -- host/discovery.sh@100 -- # sleep 1 00:21:46.414 [2024-12-03 00:55:58.840189] bdev_nvme.c:6759:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:21:46.414 [2024-12-03 00:55:58.840218] bdev_nvme.c:6839:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:21:46.414 [2024-12-03 00:55:58.840235] bdev_nvme.c:6722:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:21:46.414 [2024-12-03 00:55:58.926299] bdev_nvme.c:6688:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:21:46.673 [2024-12-03 00:55:58.982094] bdev_nvme.c:6578:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:21:46.673 [2024-12-03 00:55:58.982119] bdev_nvme.c:6537:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:21:46.933 00:55:59 -- host/discovery.sh@101 -- # get_subsystem_names 00:21:46.933 00:55:59 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:21:46.933 00:55:59 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:21:46.933 00:55:59 -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:46.933 00:55:59 -- common/autotest_common.sh@10 -- # set +x 00:21:46.933 00:55:59 -- host/discovery.sh@59 -- # sort 00:21:46.933 00:55:59 -- host/discovery.sh@59 -- # xargs 00:21:46.933 00:55:59 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:46.933 00:55:59 -- host/discovery.sh@101 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:46.933 00:55:59 -- host/discovery.sh@102 -- # get_bdev_list 00:21:46.933 00:55:59 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock 
bdev_get_bdevs 00:21:46.933 00:55:59 -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:46.933 00:55:59 -- common/autotest_common.sh@10 -- # set +x 00:21:46.933 00:55:59 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:21:46.933 00:55:59 -- host/discovery.sh@55 -- # sort 00:21:46.933 00:55:59 -- host/discovery.sh@55 -- # xargs 00:21:46.933 00:55:59 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:47.193 00:55:59 -- host/discovery.sh@102 -- # [[ nvme0n1 == \n\v\m\e\0\n\1 ]] 00:21:47.193 00:55:59 -- host/discovery.sh@103 -- # get_subsystem_paths nvme0 00:21:47.193 00:55:59 -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:21:47.193 00:55:59 -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:21:47.193 00:55:59 -- host/discovery.sh@63 -- # sort -n 00:21:47.193 00:55:59 -- host/discovery.sh@63 -- # xargs 00:21:47.193 00:55:59 -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:47.193 00:55:59 -- common/autotest_common.sh@10 -- # set +x 00:21:47.193 00:55:59 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:47.193 00:55:59 -- host/discovery.sh@103 -- # [[ 4420 == \4\4\2\0 ]] 00:21:47.193 00:55:59 -- host/discovery.sh@104 -- # get_notification_count 00:21:47.193 00:55:59 -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:21:47.193 00:55:59 -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:47.193 00:55:59 -- common/autotest_common.sh@10 -- # set +x 00:21:47.193 00:55:59 -- host/discovery.sh@74 -- # jq '. | length' 00:21:47.193 00:55:59 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:47.193 00:55:59 -- host/discovery.sh@74 -- # notification_count=1 00:21:47.193 00:55:59 -- host/discovery.sh@75 -- # notify_id=1 00:21:47.193 00:55:59 -- host/discovery.sh@105 -- # [[ 1 == 1 ]] 00:21:47.193 00:55:59 -- host/discovery.sh@108 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null1 00:21:47.193 00:55:59 -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:47.193 00:55:59 -- common/autotest_common.sh@10 -- # set +x 00:21:47.193 00:55:59 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:47.193 00:55:59 -- host/discovery.sh@109 -- # sleep 1 00:21:48.130 00:56:00 -- host/discovery.sh@110 -- # get_bdev_list 00:21:48.130 00:56:00 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:21:48.130 00:56:00 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:21:48.130 00:56:00 -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:48.130 00:56:00 -- common/autotest_common.sh@10 -- # set +x 00:21:48.130 00:56:00 -- host/discovery.sh@55 -- # sort 00:21:48.130 00:56:00 -- host/discovery.sh@55 -- # xargs 00:21:48.130 00:56:00 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:48.390 00:56:00 -- host/discovery.sh@110 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:21:48.390 00:56:00 -- host/discovery.sh@111 -- # get_notification_count 00:21:48.390 00:56:00 -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 1 00:21:48.390 00:56:00 -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:48.390 00:56:00 -- common/autotest_common.sh@10 -- # set +x 00:21:48.390 00:56:00 -- host/discovery.sh@74 -- # jq '. 
| length' 00:21:48.390 00:56:00 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:48.390 00:56:00 -- host/discovery.sh@74 -- # notification_count=1 00:21:48.390 00:56:00 -- host/discovery.sh@75 -- # notify_id=2 00:21:48.390 00:56:00 -- host/discovery.sh@112 -- # [[ 1 == 1 ]] 00:21:48.390 00:56:00 -- host/discovery.sh@116 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 00:21:48.390 00:56:00 -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:48.390 00:56:00 -- common/autotest_common.sh@10 -- # set +x 00:21:48.390 [2024-12-03 00:56:00.728845] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:21:48.390 [2024-12-03 00:56:00.729856] bdev_nvme.c:6741:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:21:48.390 [2024-12-03 00:56:00.729887] bdev_nvme.c:6722:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:21:48.390 00:56:00 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:48.390 00:56:00 -- host/discovery.sh@117 -- # sleep 1 00:21:48.390 [2024-12-03 00:56:00.815897] bdev_nvme.c:6683:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new path for nvme0 00:21:48.390 [2024-12-03 00:56:00.873081] bdev_nvme.c:6578:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:21:48.390 [2024-12-03 00:56:00.873103] bdev_nvme.c:6537:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:21:48.390 [2024-12-03 00:56:00.873109] bdev_nvme.c:6537:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:21:49.324 00:56:01 -- host/discovery.sh@118 -- # get_subsystem_names 00:21:49.324 00:56:01 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:21:49.324 00:56:01 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:21:49.324 00:56:01 -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:49.324 00:56:01 -- common/autotest_common.sh@10 -- # set +x 00:21:49.324 00:56:01 -- host/discovery.sh@59 -- # sort 00:21:49.324 00:56:01 -- host/discovery.sh@59 -- # xargs 00:21:49.324 00:56:01 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:49.324 00:56:01 -- host/discovery.sh@118 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:49.324 00:56:01 -- host/discovery.sh@119 -- # get_bdev_list 00:21:49.324 00:56:01 -- host/discovery.sh@55 -- # sort 00:21:49.324 00:56:01 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:21:49.324 00:56:01 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:21:49.324 00:56:01 -- host/discovery.sh@55 -- # xargs 00:21:49.324 00:56:01 -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:49.324 00:56:01 -- common/autotest_common.sh@10 -- # set +x 00:21:49.324 00:56:01 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:49.582 00:56:01 -- host/discovery.sh@119 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:21:49.582 00:56:01 -- host/discovery.sh@120 -- # get_subsystem_paths nvme0 00:21:49.582 00:56:01 -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:21:49.582 00:56:01 -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:49.582 00:56:01 -- common/autotest_common.sh@10 -- # set +x 00:21:49.582 00:56:01 -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:21:49.582 00:56:01 -- host/discovery.sh@63 
-- # sort -n 00:21:49.582 00:56:01 -- host/discovery.sh@63 -- # xargs 00:21:49.582 00:56:01 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:49.582 00:56:01 -- host/discovery.sh@120 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]] 00:21:49.582 00:56:01 -- host/discovery.sh@121 -- # get_notification_count 00:21:49.582 00:56:01 -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:21:49.582 00:56:01 -- host/discovery.sh@74 -- # jq '. | length' 00:21:49.582 00:56:01 -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:49.582 00:56:01 -- common/autotest_common.sh@10 -- # set +x 00:21:49.582 00:56:01 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:49.582 00:56:01 -- host/discovery.sh@74 -- # notification_count=0 00:21:49.582 00:56:01 -- host/discovery.sh@75 -- # notify_id=2 00:21:49.582 00:56:01 -- host/discovery.sh@122 -- # [[ 0 == 0 ]] 00:21:49.582 00:56:01 -- host/discovery.sh@126 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:21:49.582 00:56:01 -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:49.583 00:56:01 -- common/autotest_common.sh@10 -- # set +x 00:21:49.583 [2024-12-03 00:56:01.966117] bdev_nvme.c:6741:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:21:49.583 [2024-12-03 00:56:01.966163] bdev_nvme.c:6722:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:21:49.583 00:56:01 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:49.583 [2024-12-03 00:56:01.969925] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:21:49.583 [2024-12-03 00:56:01.969956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.583 [2024-12-03 00:56:01.969975] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:21:49.583 [2024-12-03 00:56:01.969984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.583 [2024-12-03 00:56:01.969992] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:21:49.583 [2024-12-03 00:56:01.970000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.583 [2024-12-03 00:56:01.970009] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:21:49.583 [2024-12-03 00:56:01.970016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.583 [2024-12-03 00:56:01.970024] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ecf570 is same with the state(5) to be set 00:21:49.583 00:56:01 -- host/discovery.sh@127 -- # sleep 1 00:21:49.583 [2024-12-03 00:56:01.979893] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ecf570 (9): Bad file descriptor 00:21:49.583 [2024-12-03 00:56:01.989909] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:21:49.583 [2024-12-03 00:56:01.989993] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 
00:21:49.583 [2024-12-03 00:56:01.990032] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:49.583 [2024-12-03 00:56:01.990047] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ecf570 with addr=10.0.0.2, port=4420 00:21:49.583 [2024-12-03 00:56:01.990057] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ecf570 is same with the state(5) to be set 00:21:49.583 [2024-12-03 00:56:01.990071] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ecf570 (9): Bad file descriptor 00:21:49.583 [2024-12-03 00:56:01.990083] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:21:49.583 [2024-12-03 00:56:01.990091] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:21:49.583 [2024-12-03 00:56:01.990100] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:21:49.583 [2024-12-03 00:56:01.990114] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:21:49.583 [2024-12-03 00:56:01.999954] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:21:49.583 [2024-12-03 00:56:02.000022] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:49.583 [2024-12-03 00:56:02.000060] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:49.583 [2024-12-03 00:56:02.000074] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ecf570 with addr=10.0.0.2, port=4420 00:21:49.583 [2024-12-03 00:56:02.000083] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ecf570 is same with the state(5) to be set 00:21:49.583 [2024-12-03 00:56:02.000096] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ecf570 (9): Bad file descriptor 00:21:49.583 [2024-12-03 00:56:02.000108] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:21:49.583 [2024-12-03 00:56:02.000116] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:21:49.583 [2024-12-03 00:56:02.000124] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:21:49.583 [2024-12-03 00:56:02.000136] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:21:49.583 [2024-12-03 00:56:02.009998] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:21:49.583 [2024-12-03 00:56:02.010075] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:49.583 [2024-12-03 00:56:02.010114] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:49.583 [2024-12-03 00:56:02.010129] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ecf570 with addr=10.0.0.2, port=4420 00:21:49.583 [2024-12-03 00:56:02.010138] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ecf570 is same with the state(5) to be set 00:21:49.583 [2024-12-03 00:56:02.010168] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ecf570 (9): Bad file descriptor 00:21:49.583 [2024-12-03 00:56:02.010227] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:21:49.583 [2024-12-03 00:56:02.010239] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:21:49.583 [2024-12-03 00:56:02.010247] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:21:49.583 [2024-12-03 00:56:02.010261] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:21:49.583 [2024-12-03 00:56:02.020046] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:21:49.583 [2024-12-03 00:56:02.020122] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:49.583 [2024-12-03 00:56:02.020162] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:49.583 [2024-12-03 00:56:02.020177] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ecf570 with addr=10.0.0.2, port=4420 00:21:49.583 [2024-12-03 00:56:02.020186] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ecf570 is same with the state(5) to be set 00:21:49.583 [2024-12-03 00:56:02.020199] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ecf570 (9): Bad file descriptor 00:21:49.583 [2024-12-03 00:56:02.020221] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:21:49.583 [2024-12-03 00:56:02.020230] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:21:49.583 [2024-12-03 00:56:02.020238] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:21:49.583 [2024-12-03 00:56:02.020250] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:21:49.583 [2024-12-03 00:56:02.030091] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:21:49.583 [2024-12-03 00:56:02.030166] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:49.583 [2024-12-03 00:56:02.030206] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:49.583 [2024-12-03 00:56:02.030220] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ecf570 with addr=10.0.0.2, port=4420 00:21:49.583 [2024-12-03 00:56:02.030229] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ecf570 is same with the state(5) to be set 00:21:49.583 [2024-12-03 00:56:02.030243] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ecf570 (9): Bad file descriptor 00:21:49.583 [2024-12-03 00:56:02.030263] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:21:49.583 [2024-12-03 00:56:02.030272] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:21:49.583 [2024-12-03 00:56:02.030280] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:21:49.583 [2024-12-03 00:56:02.030293] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:21:49.583 [2024-12-03 00:56:02.040133] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:21:49.583 [2024-12-03 00:56:02.040198] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:49.583 [2024-12-03 00:56:02.040235] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:49.583 [2024-12-03 00:56:02.040248] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ecf570 with addr=10.0.0.2, port=4420 00:21:49.583 [2024-12-03 00:56:02.040258] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ecf570 is same with the state(5) to be set 00:21:49.583 [2024-12-03 00:56:02.040270] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ecf570 (9): Bad file descriptor 00:21:49.583 [2024-12-03 00:56:02.040291] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:21:49.583 [2024-12-03 00:56:02.040300] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:21:49.583 [2024-12-03 00:56:02.040308] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:21:49.583 [2024-12-03 00:56:02.040319] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:21:49.583 [2024-12-03 00:56:02.050174] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:21:49.583 [2024-12-03 00:56:02.050239] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:49.583 [2024-12-03 00:56:02.050277] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:49.583 [2024-12-03 00:56:02.050290] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ecf570 with addr=10.0.0.2, port=4420 00:21:49.583 [2024-12-03 00:56:02.050300] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ecf570 is same with the state(5) to be set 00:21:49.583 [2024-12-03 00:56:02.050312] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ecf570 (9): Bad file descriptor 00:21:49.583 [2024-12-03 00:56:02.050332] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:21:49.583 [2024-12-03 00:56:02.050342] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:21:49.583 [2024-12-03 00:56:02.050349] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:21:49.583 [2024-12-03 00:56:02.050361] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:21:49.583 [2024-12-03 00:56:02.052332] bdev_nvme.c:6546:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 not found 00:21:49.583 [2024-12-03 00:56:02.052357] bdev_nvme.c:6537:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:21:50.517 00:56:02 -- host/discovery.sh@128 -- # get_subsystem_names 00:21:50.517 00:56:02 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:21:50.517 00:56:02 -- host/discovery.sh@59 -- # sort 00:21:50.517 00:56:02 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:21:50.517 00:56:02 -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:50.517 00:56:02 -- host/discovery.sh@59 -- # xargs 00:21:50.517 00:56:02 -- common/autotest_common.sh@10 -- # set +x 00:21:50.517 00:56:02 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:50.517 00:56:03 -- host/discovery.sh@128 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:50.774 00:56:03 -- host/discovery.sh@129 -- # get_bdev_list 00:21:50.774 00:56:03 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:21:50.774 00:56:03 -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:50.774 00:56:03 -- common/autotest_common.sh@10 -- # set +x 00:21:50.774 00:56:03 -- host/discovery.sh@55 -- # sort 00:21:50.774 00:56:03 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:21:50.774 00:56:03 -- host/discovery.sh@55 -- # xargs 00:21:50.774 00:56:03 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:50.774 00:56:03 -- host/discovery.sh@129 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:21:50.774 00:56:03 -- host/discovery.sh@130 -- # get_subsystem_paths nvme0 00:21:50.774 00:56:03 -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:21:50.774 00:56:03 -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:21:50.774 00:56:03 -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:50.774 00:56:03 -- common/autotest_common.sh@10 -- # set +x 00:21:50.774 00:56:03 -- 
host/discovery.sh@63 -- # sort -n 00:21:50.774 00:56:03 -- host/discovery.sh@63 -- # xargs 00:21:50.774 00:56:03 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:50.774 00:56:03 -- host/discovery.sh@130 -- # [[ 4421 == \4\4\2\1 ]] 00:21:50.774 00:56:03 -- host/discovery.sh@131 -- # get_notification_count 00:21:50.774 00:56:03 -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:21:50.774 00:56:03 -- host/discovery.sh@74 -- # jq '. | length' 00:21:50.774 00:56:03 -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:50.774 00:56:03 -- common/autotest_common.sh@10 -- # set +x 00:21:50.774 00:56:03 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:50.774 00:56:03 -- host/discovery.sh@74 -- # notification_count=0 00:21:50.774 00:56:03 -- host/discovery.sh@75 -- # notify_id=2 00:21:50.774 00:56:03 -- host/discovery.sh@132 -- # [[ 0 == 0 ]] 00:21:50.774 00:56:03 -- host/discovery.sh@134 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_discovery -b nvme 00:21:50.774 00:56:03 -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:50.774 00:56:03 -- common/autotest_common.sh@10 -- # set +x 00:21:50.774 00:56:03 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:50.774 00:56:03 -- host/discovery.sh@135 -- # sleep 1 00:21:51.710 00:56:04 -- host/discovery.sh@136 -- # get_subsystem_names 00:21:51.710 00:56:04 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:21:51.710 00:56:04 -- host/discovery.sh@59 -- # sort 00:21:51.710 00:56:04 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:21:51.710 00:56:04 -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:51.710 00:56:04 -- host/discovery.sh@59 -- # xargs 00:21:51.710 00:56:04 -- common/autotest_common.sh@10 -- # set +x 00:21:51.710 00:56:04 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:51.969 00:56:04 -- host/discovery.sh@136 -- # [[ '' == '' ]] 00:21:51.969 00:56:04 -- host/discovery.sh@137 -- # get_bdev_list 00:21:51.969 00:56:04 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:21:51.969 00:56:04 -- host/discovery.sh@55 -- # xargs 00:21:51.969 00:56:04 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:21:51.969 00:56:04 -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:51.969 00:56:04 -- host/discovery.sh@55 -- # sort 00:21:51.969 00:56:04 -- common/autotest_common.sh@10 -- # set +x 00:21:51.969 00:56:04 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:51.969 00:56:04 -- host/discovery.sh@137 -- # [[ '' == '' ]] 00:21:51.969 00:56:04 -- host/discovery.sh@138 -- # get_notification_count 00:21:51.969 00:56:04 -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:21:51.969 00:56:04 -- host/discovery.sh@74 -- # jq '. 
| length' 00:21:51.969 00:56:04 -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:51.969 00:56:04 -- common/autotest_common.sh@10 -- # set +x 00:21:51.969 00:56:04 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:51.969 00:56:04 -- host/discovery.sh@74 -- # notification_count=2 00:21:51.969 00:56:04 -- host/discovery.sh@75 -- # notify_id=4 00:21:51.969 00:56:04 -- host/discovery.sh@139 -- # [[ 2 == 2 ]] 00:21:51.969 00:56:04 -- host/discovery.sh@142 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:21:51.969 00:56:04 -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:51.969 00:56:04 -- common/autotest_common.sh@10 -- # set +x 00:21:52.902 [2024-12-03 00:56:05.381015] bdev_nvme.c:6759:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:21:52.902 [2024-12-03 00:56:05.381036] bdev_nvme.c:6839:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:21:52.902 [2024-12-03 00:56:05.381051] bdev_nvme.c:6722:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:21:53.159 [2024-12-03 00:56:05.467115] bdev_nvme.c:6688:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new subsystem nvme0 00:21:53.159 [2024-12-03 00:56:05.525814] bdev_nvme.c:6578:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:21:53.159 [2024-12-03 00:56:05.525847] bdev_nvme.c:6537:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:21:53.159 00:56:05 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:53.159 00:56:05 -- host/discovery.sh@144 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:21:53.159 00:56:05 -- common/autotest_common.sh@650 -- # local es=0 00:21:53.159 00:56:05 -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:21:53.159 00:56:05 -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:21:53.159 00:56:05 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:21:53.159 00:56:05 -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:21:53.159 00:56:05 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:21:53.159 00:56:05 -- common/autotest_common.sh@653 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:21:53.159 00:56:05 -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:53.159 00:56:05 -- common/autotest_common.sh@10 -- # set +x 00:21:53.159 2024/12/03 00:56:05 error on JSON-RPC call, method: bdev_nvme_start_discovery, params: map[adrfam:ipv4 hostnqn:nqn.2021-12.io.spdk:test name:nvme traddr:10.0.0.2 trsvcid:8009 trtype:tcp wait_for_attach:%!s(bool=true)], err: error received for bdev_nvme_start_discovery method, err: Code=-17 Msg=File exists 00:21:53.159 request: 00:21:53.159 { 00:21:53.159 "method": "bdev_nvme_start_discovery", 00:21:53.159 "params": { 00:21:53.159 "name": "nvme", 00:21:53.159 "trtype": "tcp", 00:21:53.159 "traddr": "10.0.0.2", 00:21:53.159 "hostnqn": "nqn.2021-12.io.spdk:test", 00:21:53.159 "adrfam": "ipv4", 00:21:53.159 "trsvcid": "8009", 00:21:53.159 "wait_for_attach": true 00:21:53.159 } 
00:21:53.159 } 00:21:53.159 Got JSON-RPC error response 00:21:53.159 GoRPCClient: error on JSON-RPC call 00:21:53.159 00:56:05 -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:21:53.159 00:56:05 -- common/autotest_common.sh@653 -- # es=1 00:21:53.159 00:56:05 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:21:53.159 00:56:05 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:21:53.159 00:56:05 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:21:53.159 00:56:05 -- host/discovery.sh@146 -- # get_discovery_ctrlrs 00:21:53.159 00:56:05 -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:21:53.159 00:56:05 -- host/discovery.sh@67 -- # jq -r '.[].name' 00:21:53.159 00:56:05 -- host/discovery.sh@67 -- # sort 00:21:53.159 00:56:05 -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:53.159 00:56:05 -- host/discovery.sh@67 -- # xargs 00:21:53.159 00:56:05 -- common/autotest_common.sh@10 -- # set +x 00:21:53.159 00:56:05 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:53.159 00:56:05 -- host/discovery.sh@146 -- # [[ nvme == \n\v\m\e ]] 00:21:53.159 00:56:05 -- host/discovery.sh@147 -- # get_bdev_list 00:21:53.159 00:56:05 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:21:53.159 00:56:05 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:21:53.159 00:56:05 -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:53.159 00:56:05 -- common/autotest_common.sh@10 -- # set +x 00:21:53.159 00:56:05 -- host/discovery.sh@55 -- # xargs 00:21:53.159 00:56:05 -- host/discovery.sh@55 -- # sort 00:21:53.160 00:56:05 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:53.160 00:56:05 -- host/discovery.sh@147 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:21:53.160 00:56:05 -- host/discovery.sh@150 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:21:53.160 00:56:05 -- common/autotest_common.sh@650 -- # local es=0 00:21:53.160 00:56:05 -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:21:53.160 00:56:05 -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:21:53.160 00:56:05 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:21:53.160 00:56:05 -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:21:53.160 00:56:05 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:21:53.160 00:56:05 -- common/autotest_common.sh@653 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:21:53.160 00:56:05 -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:53.160 00:56:05 -- common/autotest_common.sh@10 -- # set +x 00:21:53.417 2024/12/03 00:56:05 error on JSON-RPC call, method: bdev_nvme_start_discovery, params: map[adrfam:ipv4 hostnqn:nqn.2021-12.io.spdk:test name:nvme_second traddr:10.0.0.2 trsvcid:8009 trtype:tcp wait_for_attach:%!s(bool=true)], err: error received for bdev_nvme_start_discovery method, err: Code=-17 Msg=File exists 00:21:53.417 request: 00:21:53.417 { 00:21:53.417 "method": "bdev_nvme_start_discovery", 00:21:53.417 "params": { 00:21:53.417 "name": "nvme_second", 00:21:53.417 "trtype": "tcp", 00:21:53.417 "traddr": "10.0.0.2", 00:21:53.417 "hostnqn": "nqn.2021-12.io.spdk:test", 00:21:53.417 "adrfam": "ipv4", 00:21:53.417 
"trsvcid": "8009", 00:21:53.417 "wait_for_attach": true 00:21:53.417 } 00:21:53.417 } 00:21:53.417 Got JSON-RPC error response 00:21:53.417 GoRPCClient: error on JSON-RPC call 00:21:53.417 00:56:05 -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:21:53.417 00:56:05 -- common/autotest_common.sh@653 -- # es=1 00:21:53.417 00:56:05 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:21:53.417 00:56:05 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:21:53.417 00:56:05 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:21:53.417 00:56:05 -- host/discovery.sh@152 -- # get_discovery_ctrlrs 00:21:53.417 00:56:05 -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:21:53.417 00:56:05 -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:53.417 00:56:05 -- common/autotest_common.sh@10 -- # set +x 00:21:53.417 00:56:05 -- host/discovery.sh@67 -- # jq -r '.[].name' 00:21:53.417 00:56:05 -- host/discovery.sh@67 -- # sort 00:21:53.417 00:56:05 -- host/discovery.sh@67 -- # xargs 00:21:53.417 00:56:05 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:53.417 00:56:05 -- host/discovery.sh@152 -- # [[ nvme == \n\v\m\e ]] 00:21:53.417 00:56:05 -- host/discovery.sh@153 -- # get_bdev_list 00:21:53.417 00:56:05 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:21:53.417 00:56:05 -- host/discovery.sh@55 -- # xargs 00:21:53.417 00:56:05 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:21:53.417 00:56:05 -- host/discovery.sh@55 -- # sort 00:21:53.417 00:56:05 -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:53.417 00:56:05 -- common/autotest_common.sh@10 -- # set +x 00:21:53.417 00:56:05 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:53.417 00:56:05 -- host/discovery.sh@153 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:21:53.417 00:56:05 -- host/discovery.sh@156 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:21:53.417 00:56:05 -- common/autotest_common.sh@650 -- # local es=0 00:21:53.417 00:56:05 -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:21:53.417 00:56:05 -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:21:53.417 00:56:05 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:21:53.417 00:56:05 -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:21:53.417 00:56:05 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:21:53.417 00:56:05 -- common/autotest_common.sh@653 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:21:53.417 00:56:05 -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:53.417 00:56:05 -- common/autotest_common.sh@10 -- # set +x 00:21:54.353 [2024-12-03 00:56:06.800332] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:54.353 [2024-12-03 00:56:06.800396] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:54.353 [2024-12-03 00:56:06.800424] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f6af80 with addr=10.0.0.2, port=8010 00:21:54.354 [2024-12-03 00:56:06.800439] nvme_tcp.c:2596:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:21:54.354 [2024-12-03 00:56:06.800447] 
nvme.c: 821:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:21:54.354 [2024-12-03 00:56:06.800455] bdev_nvme.c:6821:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:21:55.289 [2024-12-03 00:56:07.800311] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.289 [2024-12-03 00:56:07.800379] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.289 [2024-12-03 00:56:07.800400] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f43ca0 with addr=10.0.0.2, port=8010 00:21:55.289 [2024-12-03 00:56:07.800422] nvme_tcp.c:2596:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:21:55.289 [2024-12-03 00:56:07.800432] nvme.c: 821:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:21:55.289 [2024-12-03 00:56:07.800439] bdev_nvme.c:6821:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:21:56.668 [2024-12-03 00:56:08.800249] bdev_nvme.c:6802:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] timed out while attaching discovery ctrlr 00:21:56.668 2024/12/03 00:56:08 error on JSON-RPC call, method: bdev_nvme_start_discovery, params: map[adrfam:ipv4 attach_timeout_ms:3000 hostnqn:nqn.2021-12.io.spdk:test name:nvme_second traddr:10.0.0.2 trsvcid:8010 trtype:tcp], err: error received for bdev_nvme_start_discovery method, err: Code=-110 Msg=Connection timed out 00:21:56.668 request: 00:21:56.668 { 00:21:56.668 "method": "bdev_nvme_start_discovery", 00:21:56.668 "params": { 00:21:56.668 "name": "nvme_second", 00:21:56.668 "trtype": "tcp", 00:21:56.668 "traddr": "10.0.0.2", 00:21:56.668 "hostnqn": "nqn.2021-12.io.spdk:test", 00:21:56.668 "adrfam": "ipv4", 00:21:56.668 "trsvcid": "8010", 00:21:56.668 "attach_timeout_ms": 3000 00:21:56.668 } 00:21:56.668 } 00:21:56.668 Got JSON-RPC error response 00:21:56.668 GoRPCClient: error on JSON-RPC call 00:21:56.668 00:56:08 -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:21:56.668 00:56:08 -- common/autotest_common.sh@653 -- # es=1 00:21:56.668 00:56:08 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:21:56.668 00:56:08 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:21:56.668 00:56:08 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:21:56.668 00:56:08 -- host/discovery.sh@158 -- # get_discovery_ctrlrs 00:21:56.668 00:56:08 -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:21:56.668 00:56:08 -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:56.668 00:56:08 -- common/autotest_common.sh@10 -- # set +x 00:21:56.668 00:56:08 -- host/discovery.sh@67 -- # jq -r '.[].name' 00:21:56.668 00:56:08 -- host/discovery.sh@67 -- # xargs 00:21:56.668 00:56:08 -- host/discovery.sh@67 -- # sort 00:21:56.668 00:56:08 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:56.668 00:56:08 -- host/discovery.sh@158 -- # [[ nvme == \n\v\m\e ]] 00:21:56.668 00:56:08 -- host/discovery.sh@160 -- # trap - SIGINT SIGTERM EXIT 00:21:56.668 00:56:08 -- host/discovery.sh@162 -- # kill 96419 00:21:56.668 00:56:08 -- host/discovery.sh@163 -- # nvmftestfini 00:21:56.668 00:56:08 -- nvmf/common.sh@476 -- # nvmfcleanup 00:21:56.668 00:56:08 -- nvmf/common.sh@116 -- # sync 00:21:56.668 00:56:08 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:21:56.668 00:56:08 -- nvmf/common.sh@119 -- # set +e 00:21:56.668 00:56:08 -- nvmf/common.sh@120 -- # for i in {1..20} 00:21:56.668 00:56:08 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 
00:21:56.668 rmmod nvme_tcp 00:21:56.668 rmmod nvme_fabrics 00:21:56.668 rmmod nvme_keyring 00:21:56.668 00:56:08 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:21:56.668 00:56:08 -- nvmf/common.sh@123 -- # set -e 00:21:56.668 00:56:08 -- nvmf/common.sh@124 -- # return 0 00:21:56.668 00:56:08 -- nvmf/common.sh@477 -- # '[' -n 96369 ']' 00:21:56.668 00:56:08 -- nvmf/common.sh@478 -- # killprocess 96369 00:21:56.668 00:56:08 -- common/autotest_common.sh@936 -- # '[' -z 96369 ']' 00:21:56.669 00:56:08 -- common/autotest_common.sh@940 -- # kill -0 96369 00:21:56.669 00:56:08 -- common/autotest_common.sh@941 -- # uname 00:21:56.669 00:56:08 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:21:56.669 00:56:08 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 96369 00:21:56.669 00:56:09 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:21:56.669 00:56:09 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:21:56.669 00:56:09 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 96369' 00:21:56.669 killing process with pid 96369 00:21:56.669 00:56:09 -- common/autotest_common.sh@955 -- # kill 96369 00:21:56.669 00:56:09 -- common/autotest_common.sh@960 -- # wait 96369 00:21:56.929 00:56:09 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:21:56.929 00:56:09 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:21:56.929 00:56:09 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:21:56.929 00:56:09 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:21:56.929 00:56:09 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:21:56.929 00:56:09 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:56.929 00:56:09 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:56.929 00:56:09 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:56.929 00:56:09 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:21:56.929 00:21:56.929 real 0m14.120s 00:21:56.929 user 0m27.673s 00:21:56.929 sys 0m1.732s 00:21:56.929 00:56:09 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:21:56.929 ************************************ 00:21:56.929 END TEST nvmf_discovery 00:21:56.929 00:56:09 -- common/autotest_common.sh@10 -- # set +x 00:21:56.929 ************************************ 00:21:56.929 00:56:09 -- nvmf/nvmf.sh@102 -- # run_test nvmf_discovery_remove_ifc /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:21:56.929 00:56:09 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:21:56.929 00:56:09 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:21:56.929 00:56:09 -- common/autotest_common.sh@10 -- # set +x 00:21:56.929 ************************************ 00:21:56.929 START TEST nvmf_discovery_remove_ifc 00:21:56.929 ************************************ 00:21:56.929 00:56:09 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:21:56.929 * Looking for test storage... 
00:21:56.929 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:21:56.929 00:56:09 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:21:56.929 00:56:09 -- common/autotest_common.sh@1690 -- # lcov --version 00:21:56.929 00:56:09 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:21:56.929 00:56:09 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:21:56.929 00:56:09 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:21:56.929 00:56:09 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:21:56.929 00:56:09 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:21:56.929 00:56:09 -- scripts/common.sh@335 -- # IFS=.-: 00:21:56.929 00:56:09 -- scripts/common.sh@335 -- # read -ra ver1 00:21:56.929 00:56:09 -- scripts/common.sh@336 -- # IFS=.-: 00:21:56.929 00:56:09 -- scripts/common.sh@336 -- # read -ra ver2 00:21:56.929 00:56:09 -- scripts/common.sh@337 -- # local 'op=<' 00:21:56.929 00:56:09 -- scripts/common.sh@339 -- # ver1_l=2 00:21:56.929 00:56:09 -- scripts/common.sh@340 -- # ver2_l=1 00:21:56.929 00:56:09 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:21:56.929 00:56:09 -- scripts/common.sh@343 -- # case "$op" in 00:21:56.929 00:56:09 -- scripts/common.sh@344 -- # : 1 00:21:56.929 00:56:09 -- scripts/common.sh@363 -- # (( v = 0 )) 00:21:56.929 00:56:09 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:21:56.929 00:56:09 -- scripts/common.sh@364 -- # decimal 1 00:21:56.929 00:56:09 -- scripts/common.sh@352 -- # local d=1 00:21:56.929 00:56:09 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:21:56.929 00:56:09 -- scripts/common.sh@354 -- # echo 1 00:21:56.929 00:56:09 -- scripts/common.sh@364 -- # ver1[v]=1 00:21:56.929 00:56:09 -- scripts/common.sh@365 -- # decimal 2 00:21:56.929 00:56:09 -- scripts/common.sh@352 -- # local d=2 00:21:56.929 00:56:09 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:21:56.929 00:56:09 -- scripts/common.sh@354 -- # echo 2 00:21:56.929 00:56:09 -- scripts/common.sh@365 -- # ver2[v]=2 00:21:56.929 00:56:09 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:21:56.929 00:56:09 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:21:56.929 00:56:09 -- scripts/common.sh@367 -- # return 0 00:21:56.929 00:56:09 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:21:56.929 00:56:09 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:21:56.929 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:56.929 --rc genhtml_branch_coverage=1 00:21:56.929 --rc genhtml_function_coverage=1 00:21:56.929 --rc genhtml_legend=1 00:21:56.929 --rc geninfo_all_blocks=1 00:21:56.929 --rc geninfo_unexecuted_blocks=1 00:21:56.929 00:21:56.929 ' 00:21:56.929 00:56:09 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:21:56.929 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:56.929 --rc genhtml_branch_coverage=1 00:21:56.929 --rc genhtml_function_coverage=1 00:21:56.929 --rc genhtml_legend=1 00:21:56.929 --rc geninfo_all_blocks=1 00:21:56.929 --rc geninfo_unexecuted_blocks=1 00:21:56.929 00:21:56.929 ' 00:21:56.929 00:56:09 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:21:56.929 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:56.929 --rc genhtml_branch_coverage=1 00:21:56.929 --rc genhtml_function_coverage=1 00:21:56.929 --rc genhtml_legend=1 00:21:56.929 --rc geninfo_all_blocks=1 00:21:56.929 --rc geninfo_unexecuted_blocks=1 00:21:56.929 00:21:56.929 ' 00:21:56.929 
00:56:09 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:21:56.929 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:56.929 --rc genhtml_branch_coverage=1 00:21:56.929 --rc genhtml_function_coverage=1 00:21:56.929 --rc genhtml_legend=1 00:21:56.929 --rc geninfo_all_blocks=1 00:21:56.929 --rc geninfo_unexecuted_blocks=1 00:21:56.929 00:21:56.929 ' 00:21:56.929 00:56:09 -- host/discovery_remove_ifc.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:21:56.929 00:56:09 -- nvmf/common.sh@7 -- # uname -s 00:21:56.929 00:56:09 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:56.929 00:56:09 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:56.929 00:56:09 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:56.929 00:56:09 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:56.929 00:56:09 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:56.929 00:56:09 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:56.929 00:56:09 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:56.929 00:56:09 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:56.929 00:56:09 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:56.929 00:56:09 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:56.929 00:56:09 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:15939434-fa82-47b6-ae7c-b62ba203cee8 00:21:56.929 00:56:09 -- nvmf/common.sh@18 -- # NVME_HOSTID=15939434-fa82-47b6-ae7c-b62ba203cee8 00:21:56.929 00:56:09 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:56.929 00:56:09 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:56.929 00:56:09 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:21:56.929 00:56:09 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:21:56.929 00:56:09 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:56.929 00:56:09 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:56.929 00:56:09 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:56.929 00:56:09 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:56.929 00:56:09 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:56.929 00:56:09 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:56.929 00:56:09 -- paths/export.sh@5 -- # export PATH 00:21:56.929 00:56:09 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:56.929 00:56:09 -- nvmf/common.sh@46 -- # : 0 00:21:56.929 00:56:09 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:21:56.929 00:56:09 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:21:56.929 00:56:09 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:21:56.929 00:56:09 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:56.930 00:56:09 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:56.930 00:56:09 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:21:56.930 00:56:09 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:21:56.930 00:56:09 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:21:56.930 00:56:09 -- host/discovery_remove_ifc.sh@14 -- # '[' tcp == rdma ']' 00:21:56.930 00:56:09 -- host/discovery_remove_ifc.sh@19 -- # discovery_port=8009 00:21:56.930 00:56:09 -- host/discovery_remove_ifc.sh@20 -- # discovery_nqn=nqn.2014-08.org.nvmexpress.discovery 00:21:56.930 00:56:09 -- host/discovery_remove_ifc.sh@23 -- # nqn=nqn.2016-06.io.spdk:cnode 00:21:56.930 00:56:09 -- host/discovery_remove_ifc.sh@25 -- # host_nqn=nqn.2021-12.io.spdk:test 00:21:56.930 00:56:09 -- host/discovery_remove_ifc.sh@26 -- # host_sock=/tmp/host.sock 00:21:56.930 00:56:09 -- host/discovery_remove_ifc.sh@39 -- # nvmftestinit 00:21:56.930 00:56:09 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:21:56.930 00:56:09 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:56.930 00:56:09 -- nvmf/common.sh@436 -- # prepare_net_devs 00:21:56.930 00:56:09 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:21:56.930 00:56:09 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:21:56.930 00:56:09 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:56.930 00:56:09 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:56.930 00:56:09 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:56.930 00:56:09 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:21:56.930 00:56:09 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:21:56.930 00:56:09 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:21:57.189 00:56:09 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:21:57.189 00:56:09 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:21:57.189 00:56:09 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:21:57.189 00:56:09 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:57.189 00:56:09 -- 
nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:57.189 00:56:09 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:21:57.189 00:56:09 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:21:57.189 00:56:09 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:21:57.189 00:56:09 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:21:57.189 00:56:09 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:21:57.189 00:56:09 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:57.189 00:56:09 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:21:57.189 00:56:09 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:21:57.189 00:56:09 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:21:57.189 00:56:09 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:21:57.189 00:56:09 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:21:57.189 00:56:09 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:21:57.189 Cannot find device "nvmf_tgt_br" 00:21:57.189 00:56:09 -- nvmf/common.sh@154 -- # true 00:21:57.189 00:56:09 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:21:57.189 Cannot find device "nvmf_tgt_br2" 00:21:57.189 00:56:09 -- nvmf/common.sh@155 -- # true 00:21:57.189 00:56:09 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:21:57.189 00:56:09 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:21:57.189 Cannot find device "nvmf_tgt_br" 00:21:57.189 00:56:09 -- nvmf/common.sh@157 -- # true 00:21:57.189 00:56:09 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:21:57.189 Cannot find device "nvmf_tgt_br2" 00:21:57.189 00:56:09 -- nvmf/common.sh@158 -- # true 00:21:57.189 00:56:09 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:21:57.189 00:56:09 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:21:57.189 00:56:09 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:21:57.189 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:21:57.189 00:56:09 -- nvmf/common.sh@161 -- # true 00:21:57.189 00:56:09 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:21:57.189 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:21:57.189 00:56:09 -- nvmf/common.sh@162 -- # true 00:21:57.189 00:56:09 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:21:57.189 00:56:09 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:21:57.189 00:56:09 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:21:57.189 00:56:09 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:21:57.189 00:56:09 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:21:57.189 00:56:09 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:21:57.189 00:56:09 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:21:57.189 00:56:09 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:21:57.189 00:56:09 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:21:57.189 00:56:09 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:21:57.189 00:56:09 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:21:57.189 00:56:09 -- 
nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:21:57.189 00:56:09 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:21:57.189 00:56:09 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:21:57.189 00:56:09 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:21:57.189 00:56:09 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:21:57.189 00:56:09 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:21:57.189 00:56:09 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:21:57.189 00:56:09 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:21:57.189 00:56:09 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:21:57.448 00:56:09 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:21:57.448 00:56:09 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:21:57.448 00:56:09 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:21:57.448 00:56:09 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:21:57.448 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:57.448 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.096 ms 00:21:57.448 00:21:57.448 --- 10.0.0.2 ping statistics --- 00:21:57.448 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:57.448 rtt min/avg/max/mdev = 0.096/0.096/0.096/0.000 ms 00:21:57.448 00:56:09 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:21:57.448 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:21:57.448 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.076 ms 00:21:57.448 00:21:57.448 --- 10.0.0.3 ping statistics --- 00:21:57.448 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:57.448 rtt min/avg/max/mdev = 0.076/0.076/0.076/0.000 ms 00:21:57.448 00:56:09 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:21:57.448 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:21:57.448 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.031 ms 00:21:57.449 00:21:57.449 --- 10.0.0.1 ping statistics --- 00:21:57.449 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:57.449 rtt min/avg/max/mdev = 0.031/0.031/0.031/0.000 ms 00:21:57.449 00:56:09 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:57.449 00:56:09 -- nvmf/common.sh@421 -- # return 0 00:21:57.449 00:56:09 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:21:57.449 00:56:09 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:57.449 00:56:09 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:21:57.449 00:56:09 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:21:57.449 00:56:09 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:57.449 00:56:09 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:21:57.449 00:56:09 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:21:57.449 00:56:09 -- host/discovery_remove_ifc.sh@40 -- # nvmfappstart -m 0x2 00:21:57.449 00:56:09 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:21:57.449 00:56:09 -- common/autotest_common.sh@722 -- # xtrace_disable 00:21:57.449 00:56:09 -- common/autotest_common.sh@10 -- # set +x 00:21:57.449 00:56:09 -- nvmf/common.sh@469 -- # nvmfpid=96934 00:21:57.449 00:56:09 -- nvmf/common.sh@470 -- # waitforlisten 96934 00:21:57.449 00:56:09 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:21:57.449 00:56:09 -- common/autotest_common.sh@829 -- # '[' -z 96934 ']' 00:21:57.449 00:56:09 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:57.449 00:56:09 -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:57.449 00:56:09 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:57.449 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:57.449 00:56:09 -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:57.449 00:56:09 -- common/autotest_common.sh@10 -- # set +x 00:21:57.449 [2024-12-03 00:56:09.844843] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:21:57.449 [2024-12-03 00:56:09.845666] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:57.708 [2024-12-03 00:56:09.994506] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:57.708 [2024-12-03 00:56:10.075028] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:21:57.708 [2024-12-03 00:56:10.075209] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:57.708 [2024-12-03 00:56:10.075227] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:57.708 [2024-12-03 00:56:10.075238] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
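The block of ip/iptables commands traced above is the nvmf_veth_init step of nvmftestinit from test/nvmf/common.sh. Below is a condensed sketch of the topology it builds, reconstructed only from the commands visible in this log rather than copied from the helper itself, so ordering is approximate and the second target interface (nvmf_tgt_if2 / 10.0.0.3) is omitted for brevity:

    # Target side lives in its own network namespace, reached over veth pairs joined by a bridge.
    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br     # initiator end, 10.0.0.1
    ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br      # target end, 10.0.0.2
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
    ip link add nvmf_br type bridge
    ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br master nvmf_br
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    # The target application is then started inside the namespace (path abbreviated):
    ip netns exec nvmf_tgt_ns_spdk build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 &

This is the environment the discovery_remove_ifc test depends on: later in the trace the target-side address and link are removed with "ip addr del 10.0.0.2/24 dev nvmf_tgt_if" and "ip link set nvmf_tgt_if down", which severs the 10.0.0.2 path without disturbing the host-side stack.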
00:21:57.708 [2024-12-03 00:56:10.075279] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:21:58.648 00:56:10 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:58.648 00:56:10 -- common/autotest_common.sh@862 -- # return 0 00:21:58.648 00:56:10 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:21:58.648 00:56:10 -- common/autotest_common.sh@728 -- # xtrace_disable 00:21:58.648 00:56:10 -- common/autotest_common.sh@10 -- # set +x 00:21:58.648 00:56:10 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:58.649 00:56:10 -- host/discovery_remove_ifc.sh@43 -- # rpc_cmd 00:21:58.649 00:56:10 -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:58.649 00:56:10 -- common/autotest_common.sh@10 -- # set +x 00:21:58.649 [2024-12-03 00:56:10.871312] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:58.649 [2024-12-03 00:56:10.879503] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:21:58.649 null0 00:21:58.649 [2024-12-03 00:56:10.911376] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:58.649 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:21:58.649 00:56:10 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:58.649 00:56:10 -- host/discovery_remove_ifc.sh@59 -- # hostpid=96984 00:21:58.649 00:56:10 -- host/discovery_remove_ifc.sh@58 -- # /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock --wait-for-rpc -L bdev_nvme 00:21:58.649 00:56:10 -- host/discovery_remove_ifc.sh@60 -- # waitforlisten 96984 /tmp/host.sock 00:21:58.649 00:56:10 -- common/autotest_common.sh@829 -- # '[' -z 96984 ']' 00:21:58.649 00:56:10 -- common/autotest_common.sh@833 -- # local rpc_addr=/tmp/host.sock 00:21:58.649 00:56:10 -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:58.649 00:56:10 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:21:58.649 00:56:10 -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:58.649 00:56:10 -- common/autotest_common.sh@10 -- # set +x 00:21:58.649 [2024-12-03 00:56:10.990301] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:21:58.649 [2024-12-03 00:56:10.990607] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid96984 ] 00:21:58.649 [2024-12-03 00:56:11.131917] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:58.907 [2024-12-03 00:56:11.222564] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:21:58.907 [2024-12-03 00:56:11.223105] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:21:59.841 00:56:11 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:59.841 00:56:11 -- common/autotest_common.sh@862 -- # return 0 00:21:59.841 00:56:11 -- host/discovery_remove_ifc.sh@62 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:21:59.841 00:56:11 -- host/discovery_remove_ifc.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_set_options -e 1 00:21:59.841 00:56:11 -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:59.841 00:56:11 -- common/autotest_common.sh@10 -- # set +x 00:21:59.841 00:56:11 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:59.841 00:56:11 -- host/discovery_remove_ifc.sh@66 -- # rpc_cmd -s /tmp/host.sock framework_start_init 00:21:59.841 00:56:11 -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:59.841 00:56:12 -- common/autotest_common.sh@10 -- # set +x 00:21:59.841 00:56:12 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:59.841 00:56:12 -- host/discovery_remove_ifc.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 --fast-io-fail-timeout-sec 1 --wait-for-attach 00:21:59.841 00:56:12 -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:59.841 00:56:12 -- common/autotest_common.sh@10 -- # set +x 00:22:00.778 [2024-12-03 00:56:13.124455] bdev_nvme.c:6759:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:22:00.778 [2024-12-03 00:56:13.124497] bdev_nvme.c:6839:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:22:00.778 [2024-12-03 00:56:13.124513] bdev_nvme.c:6722:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:22:00.778 [2024-12-03 00:56:13.210547] bdev_nvme.c:6688:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:22:00.778 [2024-12-03 00:56:13.266252] bdev_nvme.c:7548:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:22:00.778 [2024-12-03 00:56:13.266843] bdev_nvme.c:7548:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:22:00.778 [2024-12-03 00:56:13.266886] bdev_nvme.c:7548:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:22:00.778 [2024-12-03 00:56:13.266912] bdev_nvme.c:6578:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:22:00.778 [2024-12-03 00:56:13.266940] bdev_nvme.c:6537:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:22:00.778 00:56:13 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:00.778 00:56:13 -- host/discovery_remove_ifc.sh@72 -- # wait_for_bdev nvme0n1 00:22:00.778 00:56:13 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:22:00.778 00:56:13 -- 
host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:00.778 00:56:13 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:22:00.778 00:56:13 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:00.778 [2024-12-03 00:56:13.273171] bdev_nvme.c:1595:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0x1c60da0 was disconnected and freed. delete nvme_qpair. 00:22:00.778 00:56:13 -- host/discovery_remove_ifc.sh@29 -- # sort 00:22:00.778 00:56:13 -- common/autotest_common.sh@10 -- # set +x 00:22:00.778 00:56:13 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:22:01.037 00:56:13 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:01.037 00:56:13 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != \n\v\m\e\0\n\1 ]] 00:22:01.037 00:56:13 -- host/discovery_remove_ifc.sh@75 -- # ip netns exec nvmf_tgt_ns_spdk ip addr del 10.0.0.2/24 dev nvmf_tgt_if 00:22:01.037 00:56:13 -- host/discovery_remove_ifc.sh@76 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if down 00:22:01.037 00:56:13 -- host/discovery_remove_ifc.sh@79 -- # wait_for_bdev '' 00:22:01.037 00:56:13 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:22:01.038 00:56:13 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:01.038 00:56:13 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:01.038 00:56:13 -- common/autotest_common.sh@10 -- # set +x 00:22:01.038 00:56:13 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:22:01.038 00:56:13 -- host/discovery_remove_ifc.sh@29 -- # sort 00:22:01.038 00:56:13 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:22:01.038 00:56:13 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:01.038 00:56:13 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:22:01.038 00:56:13 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:22:01.974 00:56:14 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:22:01.974 00:56:14 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:01.974 00:56:14 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:22:01.974 00:56:14 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:01.974 00:56:14 -- host/discovery_remove_ifc.sh@29 -- # sort 00:22:01.974 00:56:14 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:22:01.974 00:56:14 -- common/autotest_common.sh@10 -- # set +x 00:22:01.974 00:56:14 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:01.974 00:56:14 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:22:01.974 00:56:14 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:22:03.358 00:56:15 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:22:03.358 00:56:15 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:03.358 00:56:15 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:03.358 00:56:15 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:22:03.358 00:56:15 -- common/autotest_common.sh@10 -- # set +x 00:22:03.358 00:56:15 -- host/discovery_remove_ifc.sh@29 -- # sort 00:22:03.358 00:56:15 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:22:03.358 00:56:15 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:03.358 00:56:15 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:22:03.358 00:56:15 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:22:04.341 00:56:16 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:22:04.341 00:56:16 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 
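The repeated bdev_get_bdevs / jq / sort / xargs traces are the test's get_bdev_list and wait_for_bdev helpers: the host app's bdev list is flattened to one line and polled once per second until it matches the expected value (a named bdev, or the empty string once it should be gone). Roughly, as a simplified sketch:

get_bdev_list() {
    # List the bdev names the host app currently exposes, flattened to one line.
    ./scripts/rpc.py -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name' | sort | xargs
}

wait_for_bdev() {
    # Poll once per second until the list matches the expected value ("" = no bdevs left).
    local expected=$1
    while [[ "$(get_bdev_list)" != "$expected" ]]; do
        sleep 1
    done
}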
00:22:04.341 00:56:16 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:04.341 00:56:16 -- common/autotest_common.sh@10 -- # set +x 00:22:04.341 00:56:16 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:22:04.341 00:56:16 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:22:04.341 00:56:16 -- host/discovery_remove_ifc.sh@29 -- # sort 00:22:04.341 00:56:16 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:04.341 00:56:16 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:22:04.341 00:56:16 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:22:05.276 00:56:17 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:22:05.277 00:56:17 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:05.277 00:56:17 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:22:05.277 00:56:17 -- host/discovery_remove_ifc.sh@29 -- # sort 00:22:05.277 00:56:17 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:05.277 00:56:17 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:22:05.277 00:56:17 -- common/autotest_common.sh@10 -- # set +x 00:22:05.277 00:56:17 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:05.277 00:56:17 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:22:05.277 00:56:17 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:22:06.213 00:56:18 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:22:06.213 00:56:18 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:06.213 00:56:18 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:06.213 00:56:18 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:22:06.213 00:56:18 -- host/discovery_remove_ifc.sh@29 -- # sort 00:22:06.213 00:56:18 -- common/autotest_common.sh@10 -- # set +x 00:22:06.213 00:56:18 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:22:06.213 00:56:18 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:06.213 [2024-12-03 00:56:18.694383] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 110: Connection timed out 00:22:06.213 [2024-12-03 00:56:18.694462] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:22:06.213 [2024-12-03 00:56:18.694478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.213 [2024-12-03 00:56:18.694505] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:06.213 [2024-12-03 00:56:18.694531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.213 [2024-12-03 00:56:18.694540] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:06.213 [2024-12-03 00:56:18.694548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.213 [2024-12-03 00:56:18.694556] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:06.213 [2024-12-03 00:56:18.694564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.213 [2024-12-03 
00:56:18.694572] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:22:06.213 [2024-12-03 00:56:18.694580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.213 [2024-12-03 00:56:18.694588] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bca690 is same with the state(5) to be set 00:22:06.213 00:56:18 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:22:06.213 00:56:18 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:22:06.213 [2024-12-03 00:56:18.704378] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bca690 (9): Bad file descriptor 00:22:06.213 [2024-12-03 00:56:18.714406] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:22:07.590 00:56:19 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:22:07.590 00:56:19 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:07.590 00:56:19 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:22:07.590 00:56:19 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:07.590 00:56:19 -- common/autotest_common.sh@10 -- # set +x 00:22:07.590 00:56:19 -- host/discovery_remove_ifc.sh@29 -- # sort 00:22:07.590 00:56:19 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:22:07.590 [2024-12-03 00:56:19.732537] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 110 00:22:08.525 [2024-12-03 00:56:20.756554] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 110 00:22:08.525 [2024-12-03 00:56:20.756662] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bca690 with addr=10.0.0.2, port=4420 00:22:08.525 [2024-12-03 00:56:20.756697] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bca690 is same with the state(5) to be set 00:22:08.526 [2024-12-03 00:56:20.756756] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:22:08.526 [2024-12-03 00:56:20.756779] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:22:08.526 [2024-12-03 00:56:20.756799] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:22:08.526 [2024-12-03 00:56:20.756820] nvme_ctrlr.c:1017:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] already in failed state 00:22:08.526 [2024-12-03 00:56:20.757644] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bca690 (9): Bad file descriptor 00:22:08.526 [2024-12-03 00:56:20.757710] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:22:08.526 [2024-12-03 00:56:20.757763] bdev_nvme.c:6510:remove_discovery_entry: *INFO*: Discovery[10.0.0.2:8009] Remove discovery entry: nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 00:22:08.526 [2024-12-03 00:56:20.757831] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:22:08.526 [2024-12-03 00:56:20.757861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:08.526 [2024-12-03 00:56:20.757888] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:08.526 [2024-12-03 00:56:20.757909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:08.526 [2024-12-03 00:56:20.757930] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:08.526 [2024-12-03 00:56:20.757951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:08.526 [2024-12-03 00:56:20.757973] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:08.526 [2024-12-03 00:56:20.757992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:08.526 [2024-12-03 00:56:20.758014] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:22:08.526 [2024-12-03 00:56:20.758034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:08.526 [2024-12-03 00:56:20.758054] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] in failed state. 
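The burst of ABORTED - SQ DELETION completions and the "Resetting controller failed" message follow directly from the failure timers the test passed when it started discovery earlier in this run: the host retries the dropped connection once per second, fails queued I/O after about a second, and deletes the controller (and with it nvme0n1) after about two seconds. Repeated here for reference only; the actual invocation appears above in this log:

# Failure timers passed when discovery was started:
#   --reconnect-delay-sec 1        retry the dropped connection once per second
#   --fast-io-fail-timeout-sec 1   fail queued I/O after ~1 s without a path
#   --ctrlr-loss-timeout-sec 2     delete the controller (and its bdev) after ~2 s
scripts/rpc.py -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp \
    -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test \
    --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 \
    --fast-io-fail-timeout-sec 1 --wait-for-attach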
00:22:08.526 [2024-12-03 00:56:20.758115] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c28410 (9): Bad file descriptor 00:22:08.526 [2024-12-03 00:56:20.759115] nvme_fabric.c: 214:nvme_fabric_prop_get_cmd_async: *ERROR*: Failed to send Property Get fabrics command 00:22:08.526 [2024-12-03 00:56:20.759171] nvme_ctrlr.c:1136:nvme_ctrlr_shutdown_async: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] Failed to read the CC register 00:22:08.526 00:56:20 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:08.526 00:56:20 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:22:08.526 00:56:20 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:22:09.461 00:56:21 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:22:09.461 00:56:21 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:09.461 00:56:21 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:09.461 00:56:21 -- common/autotest_common.sh@10 -- # set +x 00:22:09.461 00:56:21 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:22:09.461 00:56:21 -- host/discovery_remove_ifc.sh@29 -- # sort 00:22:09.461 00:56:21 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:22:09.461 00:56:21 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:09.461 00:56:21 -- host/discovery_remove_ifc.sh@33 -- # [[ '' != '' ]] 00:22:09.461 00:56:21 -- host/discovery_remove_ifc.sh@82 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:22:09.461 00:56:21 -- host/discovery_remove_ifc.sh@83 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:22:09.461 00:56:21 -- host/discovery_remove_ifc.sh@86 -- # wait_for_bdev nvme1n1 00:22:09.461 00:56:21 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:22:09.461 00:56:21 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:09.461 00:56:21 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:09.461 00:56:21 -- common/autotest_common.sh@10 -- # set +x 00:22:09.461 00:56:21 -- host/discovery_remove_ifc.sh@29 -- # sort 00:22:09.461 00:56:21 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:22:09.461 00:56:21 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:22:09.461 00:56:21 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:09.461 00:56:21 -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:22:09.461 00:56:21 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:22:10.396 [2024-12-03 00:56:22.765506] bdev_nvme.c:6759:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:22:10.396 [2024-12-03 00:56:22.765529] bdev_nvme.c:6839:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:22:10.396 [2024-12-03 00:56:22.765546] bdev_nvme.c:6722:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:22:10.396 [2024-12-03 00:56:22.851592] bdev_nvme.c:6688:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme1 00:22:10.396 [2024-12-03 00:56:22.906778] bdev_nvme.c:7548:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:22:10.396 [2024-12-03 00:56:22.906863] bdev_nvme.c:7548:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:22:10.396 [2024-12-03 00:56:22.906886] bdev_nvme.c:7548:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:22:10.396 [2024-12-03 00:56:22.906900] bdev_nvme.c:6578:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] 
attach nvme1 done 00:22:10.396 [2024-12-03 00:56:22.906907] bdev_nvme.c:6537:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:22:10.655 [2024-12-03 00:56:22.914140] bdev_nvme.c:1595:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0x1c2e0c0 was disconnected and freed. delete nvme_qpair. 00:22:10.655 00:56:22 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:22:10.655 00:56:22 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:10.655 00:56:22 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:10.655 00:56:22 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:22:10.655 00:56:22 -- host/discovery_remove_ifc.sh@29 -- # sort 00:22:10.655 00:56:22 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:22:10.655 00:56:22 -- common/autotest_common.sh@10 -- # set +x 00:22:10.655 00:56:22 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:10.655 00:56:22 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme1n1 != \n\v\m\e\1\n\1 ]] 00:22:10.655 00:56:22 -- host/discovery_remove_ifc.sh@88 -- # trap - SIGINT SIGTERM EXIT 00:22:10.655 00:56:22 -- host/discovery_remove_ifc.sh@90 -- # killprocess 96984 00:22:10.655 00:56:22 -- common/autotest_common.sh@936 -- # '[' -z 96984 ']' 00:22:10.655 00:56:22 -- common/autotest_common.sh@940 -- # kill -0 96984 00:22:10.655 00:56:22 -- common/autotest_common.sh@941 -- # uname 00:22:10.655 00:56:22 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:22:10.655 00:56:22 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 96984 00:22:10.655 killing process with pid 96984 00:22:10.655 00:56:23 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:22:10.655 00:56:23 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:22:10.655 00:56:23 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 96984' 00:22:10.655 00:56:23 -- common/autotest_common.sh@955 -- # kill 96984 00:22:10.655 00:56:23 -- common/autotest_common.sh@960 -- # wait 96984 00:22:10.914 00:56:23 -- host/discovery_remove_ifc.sh@91 -- # nvmftestfini 00:22:10.914 00:56:23 -- nvmf/common.sh@476 -- # nvmfcleanup 00:22:10.914 00:56:23 -- nvmf/common.sh@116 -- # sync 00:22:10.914 00:56:23 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:22:10.914 00:56:23 -- nvmf/common.sh@119 -- # set +e 00:22:10.914 00:56:23 -- nvmf/common.sh@120 -- # for i in {1..20} 00:22:10.914 00:56:23 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:22:10.914 rmmod nvme_tcp 00:22:10.914 rmmod nvme_fabrics 00:22:10.914 rmmod nvme_keyring 00:22:10.914 00:56:23 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:22:10.914 00:56:23 -- nvmf/common.sh@123 -- # set -e 00:22:10.914 00:56:23 -- nvmf/common.sh@124 -- # return 0 00:22:10.914 00:56:23 -- nvmf/common.sh@477 -- # '[' -n 96934 ']' 00:22:10.914 00:56:23 -- nvmf/common.sh@478 -- # killprocess 96934 00:22:10.914 00:56:23 -- common/autotest_common.sh@936 -- # '[' -z 96934 ']' 00:22:10.914 00:56:23 -- common/autotest_common.sh@940 -- # kill -0 96934 00:22:10.914 00:56:23 -- common/autotest_common.sh@941 -- # uname 00:22:10.914 00:56:23 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:22:10.914 00:56:23 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 96934 00:22:10.914 killing process with pid 96934 00:22:10.914 00:56:23 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:22:10.914 00:56:23 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 
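The killprocess traces here and below follow a common autotest pattern: confirm the PID is still alive, refuse to signal a bare sudo wrapper, then kill it and wait so the exit status is collected. A simplified sketch of that helper, with the same checks seen in the trace:

killprocess() {
    # Check the PID is alive, never signal a plain "sudo" wrapper, then kill and reap it.
    local pid=$1
    [ -n "$pid" ] || return 1
    kill -0 "$pid" 2>/dev/null || return 1
    local name
    name=$(ps --no-headers -o comm= "$pid")
    [ "$name" != "sudo" ] || return 1
    echo "killing process with pid $pid"
    kill "$pid"
    wait "$pid"
}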
00:22:10.914 00:56:23 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 96934' 00:22:10.914 00:56:23 -- common/autotest_common.sh@955 -- # kill 96934 00:22:10.914 00:56:23 -- common/autotest_common.sh@960 -- # wait 96934 00:22:11.173 00:56:23 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:22:11.173 00:56:23 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:22:11.173 00:56:23 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:22:11.173 00:56:23 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:22:11.173 00:56:23 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:22:11.173 00:56:23 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:11.173 00:56:23 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:11.173 00:56:23 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:11.173 00:56:23 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:22:11.173 00:22:11.173 real 0m14.342s 00:22:11.173 user 0m24.656s 00:22:11.173 sys 0m1.554s 00:22:11.173 ************************************ 00:22:11.173 END TEST nvmf_discovery_remove_ifc 00:22:11.173 ************************************ 00:22:11.173 00:56:23 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:22:11.173 00:56:23 -- common/autotest_common.sh@10 -- # set +x 00:22:11.173 00:56:23 -- nvmf/nvmf.sh@106 -- # [[ tcp == \t\c\p ]] 00:22:11.173 00:56:23 -- nvmf/nvmf.sh@107 -- # run_test nvmf_digest /home/vagrant/spdk_repo/spdk/test/nvmf/host/digest.sh --transport=tcp 00:22:11.173 00:56:23 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:22:11.173 00:56:23 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:22:11.173 00:56:23 -- common/autotest_common.sh@10 -- # set +x 00:22:11.173 ************************************ 00:22:11.173 START TEST nvmf_digest 00:22:11.173 ************************************ 00:22:11.173 00:56:23 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/digest.sh --transport=tcp 00:22:11.432 * Looking for test storage... 00:22:11.432 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:22:11.432 00:56:23 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:22:11.432 00:56:23 -- common/autotest_common.sh@1690 -- # lcov --version 00:22:11.432 00:56:23 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:22:11.432 00:56:23 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:22:11.432 00:56:23 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:22:11.432 00:56:23 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:22:11.432 00:56:23 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:22:11.432 00:56:23 -- scripts/common.sh@335 -- # IFS=.-: 00:22:11.432 00:56:23 -- scripts/common.sh@335 -- # read -ra ver1 00:22:11.432 00:56:23 -- scripts/common.sh@336 -- # IFS=.-: 00:22:11.432 00:56:23 -- scripts/common.sh@336 -- # read -ra ver2 00:22:11.432 00:56:23 -- scripts/common.sh@337 -- # local 'op=<' 00:22:11.432 00:56:23 -- scripts/common.sh@339 -- # ver1_l=2 00:22:11.432 00:56:23 -- scripts/common.sh@340 -- # ver2_l=1 00:22:11.432 00:56:23 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:22:11.432 00:56:23 -- scripts/common.sh@343 -- # case "$op" in 00:22:11.432 00:56:23 -- scripts/common.sh@344 -- # : 1 00:22:11.432 00:56:23 -- scripts/common.sh@363 -- # (( v = 0 )) 00:22:11.432 00:56:23 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:22:11.432 00:56:23 -- scripts/common.sh@364 -- # decimal 1 00:22:11.432 00:56:23 -- scripts/common.sh@352 -- # local d=1 00:22:11.432 00:56:23 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:22:11.432 00:56:23 -- scripts/common.sh@354 -- # echo 1 00:22:11.432 00:56:23 -- scripts/common.sh@364 -- # ver1[v]=1 00:22:11.432 00:56:23 -- scripts/common.sh@365 -- # decimal 2 00:22:11.432 00:56:23 -- scripts/common.sh@352 -- # local d=2 00:22:11.432 00:56:23 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:22:11.432 00:56:23 -- scripts/common.sh@354 -- # echo 2 00:22:11.432 00:56:23 -- scripts/common.sh@365 -- # ver2[v]=2 00:22:11.432 00:56:23 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:22:11.432 00:56:23 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:22:11.432 00:56:23 -- scripts/common.sh@367 -- # return 0 00:22:11.432 00:56:23 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:22:11.432 00:56:23 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:22:11.432 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:11.432 --rc genhtml_branch_coverage=1 00:22:11.432 --rc genhtml_function_coverage=1 00:22:11.432 --rc genhtml_legend=1 00:22:11.432 --rc geninfo_all_blocks=1 00:22:11.432 --rc geninfo_unexecuted_blocks=1 00:22:11.432 00:22:11.432 ' 00:22:11.432 00:56:23 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:22:11.432 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:11.432 --rc genhtml_branch_coverage=1 00:22:11.432 --rc genhtml_function_coverage=1 00:22:11.432 --rc genhtml_legend=1 00:22:11.432 --rc geninfo_all_blocks=1 00:22:11.432 --rc geninfo_unexecuted_blocks=1 00:22:11.432 00:22:11.432 ' 00:22:11.432 00:56:23 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:22:11.432 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:11.432 --rc genhtml_branch_coverage=1 00:22:11.432 --rc genhtml_function_coverage=1 00:22:11.432 --rc genhtml_legend=1 00:22:11.432 --rc geninfo_all_blocks=1 00:22:11.432 --rc geninfo_unexecuted_blocks=1 00:22:11.432 00:22:11.432 ' 00:22:11.432 00:56:23 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:22:11.432 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:11.432 --rc genhtml_branch_coverage=1 00:22:11.432 --rc genhtml_function_coverage=1 00:22:11.432 --rc genhtml_legend=1 00:22:11.432 --rc geninfo_all_blocks=1 00:22:11.432 --rc geninfo_unexecuted_blocks=1 00:22:11.432 00:22:11.432 ' 00:22:11.432 00:56:23 -- host/digest.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:22:11.432 00:56:23 -- nvmf/common.sh@7 -- # uname -s 00:22:11.432 00:56:23 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:11.432 00:56:23 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:11.432 00:56:23 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:11.432 00:56:23 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:11.432 00:56:23 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:11.432 00:56:23 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:11.432 00:56:23 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:11.432 00:56:23 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:11.432 00:56:23 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:11.432 00:56:23 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:11.432 00:56:23 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:15939434-fa82-47b6-ae7c-b62ba203cee8 00:22:11.432 
00:56:23 -- nvmf/common.sh@18 -- # NVME_HOSTID=15939434-fa82-47b6-ae7c-b62ba203cee8 00:22:11.432 00:56:23 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:11.432 00:56:23 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:11.432 00:56:23 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:22:11.432 00:56:23 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:22:11.432 00:56:23 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:11.432 00:56:23 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:11.432 00:56:23 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:11.432 00:56:23 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:11.433 00:56:23 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:11.433 00:56:23 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:11.433 00:56:23 -- paths/export.sh@5 -- # export PATH 00:22:11.433 00:56:23 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:11.433 00:56:23 -- nvmf/common.sh@46 -- # : 0 00:22:11.433 00:56:23 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:22:11.433 00:56:23 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:22:11.433 00:56:23 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:22:11.433 00:56:23 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:11.433 00:56:23 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:11.433 00:56:23 -- nvmf/common.sh@32 -- # '[' -n '' ']' 
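nvmf/common.sh generates a host NQN/ID pair (NVME_HOSTNQN, NVME_HOSTID) and keeps them in the NVME_HOST array for tests that connect with nvme-cli. The digest test below drives I/O through bdevperf instead, so the following is illustrative only, showing how that generated identity would be consumed:

# Hypothetical nvme-cli connection using the generated host identity
# (not executed in this particular run).
nvme connect -t tcp -a 10.0.0.2 -s 4420 \
    -n nqn.2016-06.io.spdk:cnode1 \
    --hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID"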
00:22:11.433 00:56:23 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:22:11.433 00:56:23 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:22:11.433 00:56:23 -- host/digest.sh@14 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:22:11.433 00:56:23 -- host/digest.sh@15 -- # bperfsock=/var/tmp/bperf.sock 00:22:11.433 00:56:23 -- host/digest.sh@16 -- # runtime=2 00:22:11.433 00:56:23 -- host/digest.sh@130 -- # [[ tcp != \t\c\p ]] 00:22:11.433 00:56:23 -- host/digest.sh@132 -- # nvmftestinit 00:22:11.433 00:56:23 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:22:11.433 00:56:23 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:11.433 00:56:23 -- nvmf/common.sh@436 -- # prepare_net_devs 00:22:11.433 00:56:23 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:22:11.433 00:56:23 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:22:11.433 00:56:23 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:11.433 00:56:23 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:11.433 00:56:23 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:11.433 00:56:23 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:22:11.433 00:56:23 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:22:11.433 00:56:23 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:22:11.433 00:56:23 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:22:11.433 00:56:23 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:22:11.433 00:56:23 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:22:11.433 00:56:23 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:11.433 00:56:23 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:11.433 00:56:23 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:22:11.433 00:56:23 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:22:11.433 00:56:23 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:22:11.433 00:56:23 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:22:11.433 00:56:23 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:22:11.433 00:56:23 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:11.433 00:56:23 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:22:11.433 00:56:23 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:22:11.433 00:56:23 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:22:11.433 00:56:23 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:22:11.433 00:56:23 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:22:11.433 00:56:23 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:22:11.433 Cannot find device "nvmf_tgt_br" 00:22:11.433 00:56:23 -- nvmf/common.sh@154 -- # true 00:22:11.433 00:56:23 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:22:11.433 Cannot find device "nvmf_tgt_br2" 00:22:11.433 00:56:23 -- nvmf/common.sh@155 -- # true 00:22:11.433 00:56:23 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:22:11.433 00:56:23 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:22:11.433 Cannot find device "nvmf_tgt_br" 00:22:11.433 00:56:23 -- nvmf/common.sh@157 -- # true 00:22:11.433 00:56:23 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:22:11.433 Cannot find device "nvmf_tgt_br2" 00:22:11.433 00:56:23 -- nvmf/common.sh@158 -- # true 00:22:11.433 00:56:23 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:22:11.692 00:56:23 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:22:11.692 
00:56:24 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:22:11.692 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:22:11.692 00:56:24 -- nvmf/common.sh@161 -- # true 00:22:11.692 00:56:24 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:22:11.692 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:22:11.692 00:56:24 -- nvmf/common.sh@162 -- # true 00:22:11.692 00:56:24 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:22:11.692 00:56:24 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:22:11.692 00:56:24 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:22:11.692 00:56:24 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:22:11.692 00:56:24 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:22:11.692 00:56:24 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:22:11.692 00:56:24 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:22:11.692 00:56:24 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:22:11.692 00:56:24 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:22:11.692 00:56:24 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:22:11.692 00:56:24 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:22:11.692 00:56:24 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:22:11.692 00:56:24 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:22:11.692 00:56:24 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:22:11.692 00:56:24 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:22:11.692 00:56:24 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:22:11.692 00:56:24 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:22:11.692 00:56:24 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:22:11.692 00:56:24 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:22:11.692 00:56:24 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:22:11.692 00:56:24 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:22:11.692 00:56:24 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:22:11.692 00:56:24 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:22:11.692 00:56:24 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:22:11.692 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:11.692 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.070 ms 00:22:11.692 00:22:11.692 --- 10.0.0.2 ping statistics --- 00:22:11.692 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:11.692 rtt min/avg/max/mdev = 0.070/0.070/0.070/0.000 ms 00:22:11.692 00:56:24 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:22:11.692 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:22:11.692 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.050 ms 00:22:11.692 00:22:11.692 --- 10.0.0.3 ping statistics --- 00:22:11.692 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:11.692 rtt min/avg/max/mdev = 0.050/0.050/0.050/0.000 ms 00:22:11.692 00:56:24 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:22:11.692 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:22:11.692 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.029 ms 00:22:11.692 00:22:11.692 --- 10.0.0.1 ping statistics --- 00:22:11.692 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:11.692 rtt min/avg/max/mdev = 0.029/0.029/0.029/0.000 ms 00:22:11.692 00:56:24 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:11.692 00:56:24 -- nvmf/common.sh@421 -- # return 0 00:22:11.692 00:56:24 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:22:11.692 00:56:24 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:11.692 00:56:24 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:22:11.692 00:56:24 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:22:11.692 00:56:24 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:11.692 00:56:24 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:22:11.692 00:56:24 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:22:11.692 00:56:24 -- host/digest.sh@134 -- # trap cleanup SIGINT SIGTERM EXIT 00:22:11.692 00:56:24 -- host/digest.sh@135 -- # run_test nvmf_digest_clean run_digest 00:22:11.692 00:56:24 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:22:11.692 00:56:24 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:22:11.692 00:56:24 -- common/autotest_common.sh@10 -- # set +x 00:22:11.952 ************************************ 00:22:11.952 START TEST nvmf_digest_clean 00:22:11.952 ************************************ 00:22:11.952 00:56:24 -- common/autotest_common.sh@1114 -- # run_digest 00:22:11.952 00:56:24 -- host/digest.sh@119 -- # nvmfappstart --wait-for-rpc 00:22:11.952 00:56:24 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:22:11.952 00:56:24 -- common/autotest_common.sh@722 -- # xtrace_disable 00:22:11.952 00:56:24 -- common/autotest_common.sh@10 -- # set +x 00:22:11.952 00:56:24 -- nvmf/common.sh@469 -- # nvmfpid=97405 00:22:11.952 00:56:24 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:22:11.952 00:56:24 -- nvmf/common.sh@470 -- # waitforlisten 97405 00:22:11.952 00:56:24 -- common/autotest_common.sh@829 -- # '[' -z 97405 ']' 00:22:11.952 00:56:24 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:11.952 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:11.952 00:56:24 -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:11.952 00:56:24 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:11.952 00:56:24 -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:11.952 00:56:24 -- common/autotest_common.sh@10 -- # set +x 00:22:11.952 [2024-12-03 00:56:24.272556] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
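The nvmf_veth_init sequence above builds the test topology: the initiator side stays in the root namespace on 10.0.0.1, the target lives in nvmf_tgt_ns_spdk on 10.0.0.2 (and 10.0.0.3 on a second interface), and both veth pairs are joined by the nvmf_br bridge, verified with the pings just shown. Condensed below, with the second target interface, the FORWARD rule, and error handling omitted:

# Condensed veth/bridge topology used by the test.
ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk

ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if

ip link add nvmf_br type bridge
ip link set nvmf_init_if up
ip link set nvmf_init_br up
ip link set nvmf_tgt_br up
ip link set nvmf_br up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br master nvmf_br

iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2    # initiator -> target reachability check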
00:22:11.952 [2024-12-03 00:56:24.272641] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:11.952 [2024-12-03 00:56:24.419021] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:12.211 [2024-12-03 00:56:24.498755] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:22:12.211 [2024-12-03 00:56:24.498969] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:12.211 [2024-12-03 00:56:24.498990] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:12.211 [2024-12-03 00:56:24.499003] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:12.211 [2024-12-03 00:56:24.499047] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:22:12.792 00:56:25 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:12.792 00:56:25 -- common/autotest_common.sh@862 -- # return 0 00:22:12.792 00:56:25 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:22:12.792 00:56:25 -- common/autotest_common.sh@728 -- # xtrace_disable 00:22:12.792 00:56:25 -- common/autotest_common.sh@10 -- # set +x 00:22:13.052 00:56:25 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:13.052 00:56:25 -- host/digest.sh@120 -- # common_target_config 00:22:13.052 00:56:25 -- host/digest.sh@43 -- # rpc_cmd 00:22:13.052 00:56:25 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:13.052 00:56:25 -- common/autotest_common.sh@10 -- # set +x 00:22:13.052 null0 00:22:13.052 [2024-12-03 00:56:25.462444] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:13.052 [2024-12-03 00:56:25.486619] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:13.052 00:56:25 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:13.052 00:56:25 -- host/digest.sh@122 -- # run_bperf randread 4096 128 00:22:13.052 00:56:25 -- host/digest.sh@77 -- # local rw bs qd 00:22:13.052 00:56:25 -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:22:13.052 00:56:25 -- host/digest.sh@80 -- # rw=randread 00:22:13.052 00:56:25 -- host/digest.sh@80 -- # bs=4096 00:22:13.052 00:56:25 -- host/digest.sh@80 -- # qd=128 00:22:13.052 00:56:25 -- host/digest.sh@82 -- # bperfpid=97455 00:22:13.052 00:56:25 -- host/digest.sh@83 -- # waitforlisten 97455 /var/tmp/bperf.sock 00:22:13.052 00:56:25 -- host/digest.sh@81 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:22:13.052 00:56:25 -- common/autotest_common.sh@829 -- # '[' -z 97455 ']' 00:22:13.052 00:56:25 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:22:13.052 00:56:25 -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:13.052 00:56:25 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:22:13.052 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 
00:22:13.052 00:56:25 -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:13.052 00:56:25 -- common/autotest_common.sh@10 -- # set +x 00:22:13.052 [2024-12-03 00:56:25.548116] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:22:13.052 [2024-12-03 00:56:25.548209] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid97455 ] 00:22:13.310 [2024-12-03 00:56:25.689246] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:13.310 [2024-12-03 00:56:25.764877] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:22:14.243 00:56:26 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:14.243 00:56:26 -- common/autotest_common.sh@862 -- # return 0 00:22:14.243 00:56:26 -- host/digest.sh@85 -- # [[ 0 -eq 1 ]] 00:22:14.243 00:56:26 -- host/digest.sh@86 -- # bperf_rpc framework_start_init 00:22:14.243 00:56:26 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:22:14.502 00:56:26 -- host/digest.sh@88 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:22:14.502 00:56:26 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:22:14.760 nvme0n1 00:22:14.760 00:56:27 -- host/digest.sh@91 -- # bperf_py perform_tests 00:22:14.760 00:56:27 -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:22:15.018 Running I/O for 2 seconds... 
00:22:16.923 00:22:16.923 Latency(us) 00:22:16.923 [2024-12-03T00:56:29.438Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:16.923 [2024-12-03T00:56:29.438Z] Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:22:16.923 nvme0n1 : 2.00 24985.06 97.60 0.00 0.00 5117.85 2398.02 12094.37 00:22:16.923 [2024-12-03T00:56:29.438Z] =================================================================================================================== 00:22:16.923 [2024-12-03T00:56:29.438Z] Total : 24985.06 97.60 0.00 0.00 5117.85 2398.02 12094.37 00:22:16.923 0 00:22:16.923 00:56:29 -- host/digest.sh@92 -- # read -r acc_module acc_executed 00:22:16.923 00:56:29 -- host/digest.sh@92 -- # get_accel_stats 00:22:16.923 00:56:29 -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:22:16.923 00:56:29 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:22:16.923 00:56:29 -- host/digest.sh@37 -- # jq -rc '.operations[] 00:22:16.923 | select(.opcode=="crc32c") 00:22:16.923 | "\(.module_name) \(.executed)"' 00:22:17.182 00:56:29 -- host/digest.sh@93 -- # [[ 0 -eq 1 ]] 00:22:17.182 00:56:29 -- host/digest.sh@93 -- # exp_module=software 00:22:17.182 00:56:29 -- host/digest.sh@94 -- # (( acc_executed > 0 )) 00:22:17.182 00:56:29 -- host/digest.sh@95 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:22:17.182 00:56:29 -- host/digest.sh@97 -- # killprocess 97455 00:22:17.182 00:56:29 -- common/autotest_common.sh@936 -- # '[' -z 97455 ']' 00:22:17.182 00:56:29 -- common/autotest_common.sh@940 -- # kill -0 97455 00:22:17.182 00:56:29 -- common/autotest_common.sh@941 -- # uname 00:22:17.182 00:56:29 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:22:17.182 00:56:29 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 97455 00:22:17.182 killing process with pid 97455 00:22:17.182 Received shutdown signal, test time was about 2.000000 seconds 00:22:17.182 00:22:17.182 Latency(us) 00:22:17.182 [2024-12-03T00:56:29.697Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:17.182 [2024-12-03T00:56:29.697Z] =================================================================================================================== 00:22:17.182 [2024-12-03T00:56:29.697Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:22:17.182 00:56:29 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:22:17.182 00:56:29 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:22:17.182 00:56:29 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 97455' 00:22:17.182 00:56:29 -- common/autotest_common.sh@955 -- # kill 97455 00:22:17.182 00:56:29 -- common/autotest_common.sh@960 -- # wait 97455 00:22:17.440 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 
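Each run_bperf iteration in this digest test has the same shape: start bdevperf with its own RPC socket, attach an NVMe-oF controller with data digest enabled (--ddgst), drive I/O for two seconds, then confirm via accel_get_stats that crc32c work was actually executed and by the expected module (software here, since no accel offload is configured), before killing the bperf process. A condensed sketch of one randread iteration; the wait for bperf.sock to appear is omitted:

# One run_bperf iteration (randread, 4 KiB, queue depth 128), condensed.
./build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock \
    -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc &

./scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init
./scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst \
    -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0

./examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests

# Verify the data-digest path really executed crc32c, and in which accel module.
./scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats \
    | jq -rc '.operations[] | select(.opcode=="crc32c") | "\(.module_name) \(.executed)"'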
00:22:17.440 00:56:29 -- host/digest.sh@123 -- # run_bperf randread 131072 16 00:22:17.440 00:56:29 -- host/digest.sh@77 -- # local rw bs qd 00:22:17.440 00:56:29 -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:22:17.440 00:56:29 -- host/digest.sh@80 -- # rw=randread 00:22:17.440 00:56:29 -- host/digest.sh@80 -- # bs=131072 00:22:17.440 00:56:29 -- host/digest.sh@80 -- # qd=16 00:22:17.440 00:56:29 -- host/digest.sh@82 -- # bperfpid=97551 00:22:17.440 00:56:29 -- host/digest.sh@83 -- # waitforlisten 97551 /var/tmp/bperf.sock 00:22:17.440 00:56:29 -- common/autotest_common.sh@829 -- # '[' -z 97551 ']' 00:22:17.440 00:56:29 -- host/digest.sh@81 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:22:17.440 00:56:29 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:22:17.440 00:56:29 -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:17.440 00:56:29 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:22:17.440 00:56:29 -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:17.440 00:56:29 -- common/autotest_common.sh@10 -- # set +x 00:22:17.440 [2024-12-03 00:56:29.937523] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:22:17.440 [2024-12-03 00:56:29.937611] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid97551 ] 00:22:17.440 I/O size of 131072 is greater than zero copy threshold (65536). 00:22:17.440 Zero copy mechanism will not be used. 00:22:17.698 [2024-12-03 00:56:30.066652] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:17.698 [2024-12-03 00:56:30.133498] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:22:17.698 00:56:30 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:17.698 00:56:30 -- common/autotest_common.sh@862 -- # return 0 00:22:17.698 00:56:30 -- host/digest.sh@85 -- # [[ 0 -eq 1 ]] 00:22:17.698 00:56:30 -- host/digest.sh@86 -- # bperf_rpc framework_start_init 00:22:17.698 00:56:30 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:22:18.264 00:56:30 -- host/digest.sh@88 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:22:18.264 00:56:30 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:22:18.522 nvme0n1 00:22:18.522 00:56:30 -- host/digest.sh@91 -- # bperf_py perform_tests 00:22:18.522 00:56:30 -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:22:18.522 I/O size of 131072 is greater than zero copy threshold (65536). 00:22:18.523 Zero copy mechanism will not be used. 00:22:18.523 Running I/O for 2 seconds... 
00:22:21.057 00:22:21.058 Latency(us) 00:22:21.058 [2024-12-03T00:56:33.573Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:21.058 [2024-12-03T00:56:33.573Z] Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:22:21.058 nvme0n1 : 2.00 9158.50 1144.81 0.00 0.00 1744.51 569.72 7864.32 00:22:21.058 [2024-12-03T00:56:33.573Z] =================================================================================================================== 00:22:21.058 [2024-12-03T00:56:33.573Z] Total : 9158.50 1144.81 0.00 0.00 1744.51 569.72 7864.32 00:22:21.058 0 00:22:21.058 00:56:32 -- host/digest.sh@92 -- # read -r acc_module acc_executed 00:22:21.058 00:56:32 -- host/digest.sh@92 -- # get_accel_stats 00:22:21.058 00:56:32 -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:22:21.058 00:56:32 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:22:21.058 00:56:32 -- host/digest.sh@37 -- # jq -rc '.operations[] 00:22:21.058 | select(.opcode=="crc32c") 00:22:21.058 | "\(.module_name) \(.executed)"' 00:22:21.058 00:56:33 -- host/digest.sh@93 -- # [[ 0 -eq 1 ]] 00:22:21.058 00:56:33 -- host/digest.sh@93 -- # exp_module=software 00:22:21.058 00:56:33 -- host/digest.sh@94 -- # (( acc_executed > 0 )) 00:22:21.058 00:56:33 -- host/digest.sh@95 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:22:21.058 00:56:33 -- host/digest.sh@97 -- # killprocess 97551 00:22:21.058 00:56:33 -- common/autotest_common.sh@936 -- # '[' -z 97551 ']' 00:22:21.058 00:56:33 -- common/autotest_common.sh@940 -- # kill -0 97551 00:22:21.058 00:56:33 -- common/autotest_common.sh@941 -- # uname 00:22:21.058 00:56:33 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:22:21.058 00:56:33 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 97551 00:22:21.058 killing process with pid 97551 00:22:21.058 Received shutdown signal, test time was about 2.000000 seconds 00:22:21.058 00:22:21.058 Latency(us) 00:22:21.058 [2024-12-03T00:56:33.573Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:21.058 [2024-12-03T00:56:33.573Z] =================================================================================================================== 00:22:21.058 [2024-12-03T00:56:33.573Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:22:21.058 00:56:33 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:22:21.058 00:56:33 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:22:21.058 00:56:33 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 97551' 00:22:21.058 00:56:33 -- common/autotest_common.sh@955 -- # kill 97551 00:22:21.058 00:56:33 -- common/autotest_common.sh@960 -- # wait 97551 00:22:21.058 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 
00:22:21.058 00:56:33 -- host/digest.sh@124 -- # run_bperf randwrite 4096 128 00:22:21.058 00:56:33 -- host/digest.sh@77 -- # local rw bs qd 00:22:21.058 00:56:33 -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:22:21.058 00:56:33 -- host/digest.sh@80 -- # rw=randwrite 00:22:21.058 00:56:33 -- host/digest.sh@80 -- # bs=4096 00:22:21.058 00:56:33 -- host/digest.sh@80 -- # qd=128 00:22:21.058 00:56:33 -- host/digest.sh@82 -- # bperfpid=97622 00:22:21.058 00:56:33 -- host/digest.sh@81 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:22:21.058 00:56:33 -- host/digest.sh@83 -- # waitforlisten 97622 /var/tmp/bperf.sock 00:22:21.058 00:56:33 -- common/autotest_common.sh@829 -- # '[' -z 97622 ']' 00:22:21.058 00:56:33 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:22:21.058 00:56:33 -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:21.058 00:56:33 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:22:21.058 00:56:33 -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:21.058 00:56:33 -- common/autotest_common.sh@10 -- # set +x 00:22:21.058 [2024-12-03 00:56:33.570028] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:22:21.058 [2024-12-03 00:56:33.570110] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid97622 ] 00:22:21.317 [2024-12-03 00:56:33.700071] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:21.317 [2024-12-03 00:56:33.759921] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:22:21.317 00:56:33 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:21.317 00:56:33 -- common/autotest_common.sh@862 -- # return 0 00:22:21.317 00:56:33 -- host/digest.sh@85 -- # [[ 0 -eq 1 ]] 00:22:21.317 00:56:33 -- host/digest.sh@86 -- # bperf_rpc framework_start_init 00:22:21.317 00:56:33 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:22:21.883 00:56:34 -- host/digest.sh@88 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:22:21.883 00:56:34 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:22:22.142 nvme0n1 00:22:22.142 00:56:34 -- host/digest.sh@91 -- # bperf_py perform_tests 00:22:22.142 00:56:34 -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:22:22.142 Running I/O for 2 seconds... 
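Each bdevperf instance is torn down the same way once its stats have been checked, as in the trace above and again after the run that follows: confirm the pid still belongs to an SPDK reactor, then kill it and reap it. Condensed from the autotest_common.sh steps visible in the trace (the real helper carries more guards than this sketch), the pattern is roughly:

  killprocess() {
      local pid=$1
      kill -0 "$pid" || return 0                  # already gone, nothing to do
      local name
      name=$(ps --no-headers -o comm= "$pid")     # e.g. reactor_1 in these runs
      [[ $name == sudo ]] && return 1             # never signal a sudo wrapper directly
      echo "killing process with pid $pid"
      kill "$pid"
      wait "$pid"
  }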
00:22:24.052 00:22:24.052 Latency(us) 00:22:24.052 [2024-12-03T00:56:36.567Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:24.052 [2024-12-03T00:56:36.567Z] Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:22:24.052 nvme0n1 : 2.00 28529.51 111.44 0.00 0.00 4482.78 1869.27 10545.34 00:22:24.052 [2024-12-03T00:56:36.567Z] =================================================================================================================== 00:22:24.052 [2024-12-03T00:56:36.567Z] Total : 28529.51 111.44 0.00 0.00 4482.78 1869.27 10545.34 00:22:24.052 0 00:22:24.052 00:56:36 -- host/digest.sh@92 -- # read -r acc_module acc_executed 00:22:24.052 00:56:36 -- host/digest.sh@92 -- # get_accel_stats 00:22:24.052 00:56:36 -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:22:24.052 00:56:36 -- host/digest.sh@37 -- # jq -rc '.operations[] 00:22:24.052 | select(.opcode=="crc32c") 00:22:24.052 | "\(.module_name) \(.executed)"' 00:22:24.052 00:56:36 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:22:24.310 00:56:36 -- host/digest.sh@93 -- # [[ 0 -eq 1 ]] 00:22:24.310 00:56:36 -- host/digest.sh@93 -- # exp_module=software 00:22:24.310 00:56:36 -- host/digest.sh@94 -- # (( acc_executed > 0 )) 00:22:24.310 00:56:36 -- host/digest.sh@95 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:22:24.310 00:56:36 -- host/digest.sh@97 -- # killprocess 97622 00:22:24.310 00:56:36 -- common/autotest_common.sh@936 -- # '[' -z 97622 ']' 00:22:24.310 00:56:36 -- common/autotest_common.sh@940 -- # kill -0 97622 00:22:24.311 00:56:36 -- common/autotest_common.sh@941 -- # uname 00:22:24.311 00:56:36 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:22:24.311 00:56:36 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 97622 00:22:24.311 00:56:36 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:22:24.311 00:56:36 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:22:24.311 killing process with pid 97622 00:22:24.311 00:56:36 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 97622' 00:22:24.311 Received shutdown signal, test time was about 2.000000 seconds 00:22:24.311 00:22:24.311 Latency(us) 00:22:24.311 [2024-12-03T00:56:36.826Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:24.311 [2024-12-03T00:56:36.826Z] =================================================================================================================== 00:22:24.311 [2024-12-03T00:56:36.826Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:22:24.311 00:56:36 -- common/autotest_common.sh@955 -- # kill 97622 00:22:24.311 00:56:36 -- common/autotest_common.sh@960 -- # wait 97622 00:22:24.569 00:56:37 -- host/digest.sh@125 -- # run_bperf randwrite 131072 16 00:22:24.569 00:56:37 -- host/digest.sh@77 -- # local rw bs qd 00:22:24.569 00:56:37 -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:22:24.569 00:56:37 -- host/digest.sh@80 -- # rw=randwrite 00:22:24.569 00:56:37 -- host/digest.sh@80 -- # bs=131072 00:22:24.569 00:56:37 -- host/digest.sh@80 -- # qd=16 00:22:24.569 00:56:37 -- host/digest.sh@82 -- # bperfpid=97700 00:22:24.569 00:56:37 -- host/digest.sh@81 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:22:24.569 00:56:37 -- host/digest.sh@83 -- # waitforlisten 97700 /var/tmp/bperf.sock 00:22:24.569 00:56:37 -- 
common/autotest_common.sh@829 -- # '[' -z 97700 ']' 00:22:24.569 00:56:37 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:22:24.570 00:56:37 -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:24.570 00:56:37 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:22:24.570 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:22:24.570 00:56:37 -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:24.570 00:56:37 -- common/autotest_common.sh@10 -- # set +x 00:22:24.570 [2024-12-03 00:56:37.081120] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:22:24.570 [2024-12-03 00:56:37.081571] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid97700 ] 00:22:24.570 I/O size of 131072 is greater than zero copy threshold (65536). 00:22:24.570 Zero copy mechanism will not be used. 00:22:24.828 [2024-12-03 00:56:37.210471] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:24.828 [2024-12-03 00:56:37.270877] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:22:24.828 00:56:37 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:24.828 00:56:37 -- common/autotest_common.sh@862 -- # return 0 00:22:24.828 00:56:37 -- host/digest.sh@85 -- # [[ 0 -eq 1 ]] 00:22:24.828 00:56:37 -- host/digest.sh@86 -- # bperf_rpc framework_start_init 00:22:24.828 00:56:37 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:22:25.396 00:56:37 -- host/digest.sh@88 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:22:25.396 00:56:37 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:22:25.655 nvme0n1 00:22:25.655 00:56:37 -- host/digest.sh@91 -- # bperf_py perform_tests 00:22:25.655 00:56:37 -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:22:25.655 I/O size of 131072 is greater than zero copy threshold (65536). 00:22:25.655 Zero copy mechanism will not be used. 00:22:25.655 Running I/O for 2 seconds... 
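A quick sanity check on the result tables above (not part of the harness): the MiB/s column is just IOPS multiplied by the I/O size, so the two completed runs reconcile as follows.

  # 128 KiB randread : 9158.50 IOPS * 131072 B / 2^20 B per MiB = 1144.81 MiB/s
  # 4 KiB   randwrite: 28529.51 IOPS * 4096 B  / 2^20 B per MiB =  111.44 MiB/s
  printf '%.2f MiB/s\n' "$(echo '9158.50 * 131072 / 1048576' | bc -l)"
  printf '%.2f MiB/s\n' "$(echo '28529.51 * 4096 / 1048576' | bc -l)"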
00:22:28.189 00:22:28.189 Latency(us) 00:22:28.189 [2024-12-03T00:56:40.704Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:28.189 [2024-12-03T00:56:40.704Z] Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:22:28.189 nvme0n1 : 2.00 7803.39 975.42 0.00 0.00 2046.15 1623.51 10366.60 00:22:28.189 [2024-12-03T00:56:40.704Z] =================================================================================================================== 00:22:28.189 [2024-12-03T00:56:40.704Z] Total : 7803.39 975.42 0.00 0.00 2046.15 1623.51 10366.60 00:22:28.189 0 00:22:28.189 00:56:40 -- host/digest.sh@92 -- # read -r acc_module acc_executed 00:22:28.189 00:56:40 -- host/digest.sh@92 -- # get_accel_stats 00:22:28.189 00:56:40 -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:22:28.189 00:56:40 -- host/digest.sh@37 -- # jq -rc '.operations[] 00:22:28.189 | select(.opcode=="crc32c") 00:22:28.189 | "\(.module_name) \(.executed)"' 00:22:28.189 00:56:40 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:22:28.189 00:56:40 -- host/digest.sh@93 -- # [[ 0 -eq 1 ]] 00:22:28.189 00:56:40 -- host/digest.sh@93 -- # exp_module=software 00:22:28.189 00:56:40 -- host/digest.sh@94 -- # (( acc_executed > 0 )) 00:22:28.189 00:56:40 -- host/digest.sh@95 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:22:28.189 00:56:40 -- host/digest.sh@97 -- # killprocess 97700 00:22:28.189 00:56:40 -- common/autotest_common.sh@936 -- # '[' -z 97700 ']' 00:22:28.189 00:56:40 -- common/autotest_common.sh@940 -- # kill -0 97700 00:22:28.189 00:56:40 -- common/autotest_common.sh@941 -- # uname 00:22:28.189 00:56:40 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:22:28.189 00:56:40 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 97700 00:22:28.189 killing process with pid 97700 00:22:28.189 Received shutdown signal, test time was about 2.000000 seconds 00:22:28.189 00:22:28.189 Latency(us) 00:22:28.189 [2024-12-03T00:56:40.704Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:28.189 [2024-12-03T00:56:40.704Z] =================================================================================================================== 00:22:28.189 [2024-12-03T00:56:40.704Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:22:28.189 00:56:40 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:22:28.189 00:56:40 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:22:28.189 00:56:40 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 97700' 00:22:28.189 00:56:40 -- common/autotest_common.sh@955 -- # kill 97700 00:22:28.189 00:56:40 -- common/autotest_common.sh@960 -- # wait 97700 00:22:28.189 00:56:40 -- host/digest.sh@126 -- # killprocess 97405 00:22:28.189 00:56:40 -- common/autotest_common.sh@936 -- # '[' -z 97405 ']' 00:22:28.189 00:56:40 -- common/autotest_common.sh@940 -- # kill -0 97405 00:22:28.189 00:56:40 -- common/autotest_common.sh@941 -- # uname 00:22:28.189 00:56:40 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:22:28.189 00:56:40 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 97405 00:22:28.189 killing process with pid 97405 00:22:28.189 00:56:40 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:22:28.189 00:56:40 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:22:28.189 00:56:40 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 97405' 
00:22:28.189 00:56:40 -- common/autotest_common.sh@955 -- # kill 97405 00:22:28.189 00:56:40 -- common/autotest_common.sh@960 -- # wait 97405 00:22:28.449 ************************************ 00:22:28.449 END TEST nvmf_digest_clean 00:22:28.449 ************************************ 00:22:28.449 00:22:28.449 real 0m16.598s 00:22:28.449 user 0m29.676s 00:22:28.449 sys 0m5.381s 00:22:28.449 00:56:40 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:22:28.449 00:56:40 -- common/autotest_common.sh@10 -- # set +x 00:22:28.449 00:56:40 -- host/digest.sh@136 -- # run_test nvmf_digest_error run_digest_error 00:22:28.449 00:56:40 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:22:28.449 00:56:40 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:22:28.449 00:56:40 -- common/autotest_common.sh@10 -- # set +x 00:22:28.449 ************************************ 00:22:28.449 START TEST nvmf_digest_error 00:22:28.449 ************************************ 00:22:28.449 00:56:40 -- common/autotest_common.sh@1114 -- # run_digest_error 00:22:28.449 00:56:40 -- host/digest.sh@101 -- # nvmfappstart --wait-for-rpc 00:22:28.449 00:56:40 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:22:28.449 00:56:40 -- common/autotest_common.sh@722 -- # xtrace_disable 00:22:28.449 00:56:40 -- common/autotest_common.sh@10 -- # set +x 00:22:28.449 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:28.449 00:56:40 -- nvmf/common.sh@469 -- # nvmfpid=97794 00:22:28.449 00:56:40 -- nvmf/common.sh@470 -- # waitforlisten 97794 00:22:28.449 00:56:40 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:22:28.449 00:56:40 -- common/autotest_common.sh@829 -- # '[' -z 97794 ']' 00:22:28.449 00:56:40 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:28.449 00:56:40 -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:28.449 00:56:40 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:28.449 00:56:40 -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:28.449 00:56:40 -- common/autotest_common.sh@10 -- # set +x 00:22:28.449 [2024-12-03 00:56:40.925743] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:22:28.449 [2024-12-03 00:56:40.925839] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:28.709 [2024-12-03 00:56:41.059238] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:28.709 [2024-12-03 00:56:41.127708] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:22:28.709 [2024-12-03 00:56:41.127867] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:28.709 [2024-12-03 00:56:41.127880] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:28.709 [2024-12-03 00:56:41.127888] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:22:28.709 [2024-12-03 00:56:41.127910] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:22:29.642 00:56:41 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:29.642 00:56:41 -- common/autotest_common.sh@862 -- # return 0 00:22:29.642 00:56:41 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:22:29.642 00:56:41 -- common/autotest_common.sh@728 -- # xtrace_disable 00:22:29.642 00:56:41 -- common/autotest_common.sh@10 -- # set +x 00:22:29.642 00:56:41 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:29.642 00:56:41 -- host/digest.sh@103 -- # rpc_cmd accel_assign_opc -o crc32c -m error 00:22:29.642 00:56:41 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:29.642 00:56:41 -- common/autotest_common.sh@10 -- # set +x 00:22:29.642 [2024-12-03 00:56:41.908336] accel_rpc.c: 168:rpc_accel_assign_opc: *NOTICE*: Operation crc32c will be assigned to module error 00:22:29.642 00:56:41 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:29.642 00:56:41 -- host/digest.sh@104 -- # common_target_config 00:22:29.642 00:56:41 -- host/digest.sh@43 -- # rpc_cmd 00:22:29.642 00:56:41 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:29.642 00:56:41 -- common/autotest_common.sh@10 -- # set +x 00:22:29.642 null0 00:22:29.642 [2024-12-03 00:56:42.013777] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:29.642 [2024-12-03 00:56:42.037901] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:29.642 00:56:42 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:29.642 00:56:42 -- host/digest.sh@107 -- # run_bperf_err randread 4096 128 00:22:29.642 00:56:42 -- host/digest.sh@54 -- # local rw bs qd 00:22:29.642 00:56:42 -- host/digest.sh@56 -- # rw=randread 00:22:29.642 00:56:42 -- host/digest.sh@56 -- # bs=4096 00:22:29.642 00:56:42 -- host/digest.sh@56 -- # qd=128 00:22:29.642 00:56:42 -- host/digest.sh@58 -- # bperfpid=97838 00:22:29.642 00:56:42 -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z 00:22:29.642 00:56:42 -- host/digest.sh@60 -- # waitforlisten 97838 /var/tmp/bperf.sock 00:22:29.642 00:56:42 -- common/autotest_common.sh@829 -- # '[' -z 97838 ']' 00:22:29.642 00:56:42 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:22:29.642 00:56:42 -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:29.642 00:56:42 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:22:29.642 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:22:29.642 00:56:42 -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:29.642 00:56:42 -- common/autotest_common.sh@10 -- # set +x 00:22:29.642 [2024-12-03 00:56:42.100942] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:22:29.642 [2024-12-03 00:56:42.101209] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid97838 ] 00:22:29.901 [2024-12-03 00:56:42.244986] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:29.901 [2024-12-03 00:56:42.316577] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:22:30.836 00:56:43 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:30.836 00:56:43 -- common/autotest_common.sh@862 -- # return 0 00:22:30.836 00:56:43 -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:22:30.836 00:56:43 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:22:30.836 00:56:43 -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:22:30.836 00:56:43 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:30.836 00:56:43 -- common/autotest_common.sh@10 -- # set +x 00:22:30.836 00:56:43 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:30.836 00:56:43 -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:22:30.836 00:56:43 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:22:31.094 nvme0n1 00:22:31.094 00:56:43 -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:22:31.094 00:56:43 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:31.094 00:56:43 -- common/autotest_common.sh@10 -- # set +x 00:22:31.094 00:56:43 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:31.094 00:56:43 -- host/digest.sh@69 -- # bperf_py perform_tests 00:22:31.094 00:56:43 -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:22:31.352 Running I/O for 2 seconds... 
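Everything the error-path test needs is now in place: the target's crc32c opcode has been handed to the error-injection accel module, bdevperf keeps NVMe error statistics and retries indefinitely, the controller is attached with data digests on, and corruption is armed just before perform_tests. As plain RPC calls, and assuming the default target socket /var/tmp/spdk.sock named earlier plus the workspace paths from this run, the setup is roughly the sketch below (target startup and framework init are omitted).

  RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  TGT=/var/tmp/spdk.sock      # nvmf target
  BPERF=/var/tmp/bperf.sock   # bdevperf

  # Target: route crc32c through the error-injection accel module.
  "$RPC" -s "$TGT" accel_assign_opc -o crc32c -m error

  # Host: keep NVMe error stats, retry without limit, and attach with data
  # digest enabled while injection is still disabled.
  "$RPC" -s "$BPERF" bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
  "$RPC" -s "$TGT" accel_error_inject_error -o crc32c -t disable
  "$RPC" -s "$BPERF" bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 \
      -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0

  # Arm corruption (arguments copied verbatim from the trace) and run the
  # workload; every injected hit surfaces below as a "data digest error" plus
  # a TRANSIENT TRANSPORT ERROR completion on the host side.
  "$RPC" -s "$TGT" accel_error_inject_error -o crc32c -t corrupt -i 256
  /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s "$BPERF" perform_tests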
00:22:31.352 [2024-12-03 00:56:43.677729] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x105a8d0) 00:22:31.352 [2024-12-03 00:56:43.677786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:8622 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:31.352 [2024-12-03 00:56:43.677805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:31.352 [2024-12-03 00:56:43.690577] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x105a8d0) 00:22:31.352 [2024-12-03 00:56:43.690610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:22476 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:31.352 [2024-12-03 00:56:43.690632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:31.352 [2024-12-03 00:56:43.703450] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x105a8d0) 00:22:31.352 [2024-12-03 00:56:43.703493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:19358 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:31.352 [2024-12-03 00:56:43.703517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:31.352 [2024-12-03 00:56:43.713542] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x105a8d0) 00:22:31.352 [2024-12-03 00:56:43.713576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:4042 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:31.352 [2024-12-03 00:56:43.713597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:31.352 [2024-12-03 00:56:43.724182] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x105a8d0) 00:22:31.352 [2024-12-03 00:56:43.724215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:5159 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:31.352 [2024-12-03 00:56:43.724239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:31.352 [2024-12-03 00:56:43.734398] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x105a8d0) 00:22:31.352 [2024-12-03 00:56:43.734439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:1850 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:31.352 [2024-12-03 00:56:43.734450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:31.352 [2024-12-03 00:56:43.743997] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x105a8d0) 00:22:31.352 [2024-12-03 00:56:43.744029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:6075 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:31.352 [2024-12-03 00:56:43.744051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:23 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:31.352 [2024-12-03 00:56:43.755363] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x105a8d0) 00:22:31.352 [2024-12-03 00:56:43.755396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:21737 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:31.352 [2024-12-03 00:56:43.755407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:31.352 [2024-12-03 00:56:43.763114] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x105a8d0) 00:22:31.352 [2024-12-03 00:56:43.763146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:16298 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:31.352 [2024-12-03 00:56:43.763169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:31.352 [2024-12-03 00:56:43.776356] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x105a8d0) 00:22:31.352 [2024-12-03 00:56:43.776389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:6456 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:31.352 [2024-12-03 00:56:43.776400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:31.352 [2024-12-03 00:56:43.788915] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x105a8d0) 00:22:31.352 [2024-12-03 00:56:43.788947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:23380 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:31.352 [2024-12-03 00:56:43.788971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:31.352 [2024-12-03 00:56:43.804592] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x105a8d0) 00:22:31.352 [2024-12-03 00:56:43.804626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:3282 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:31.352 [2024-12-03 00:56:43.804645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:31.352 [2024-12-03 00:56:43.813663] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x105a8d0) 00:22:31.352 [2024-12-03 00:56:43.813708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:1114 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:31.352 [2024-12-03 00:56:43.813719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:31.352 [2024-12-03 00:56:43.825866] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x105a8d0) 00:22:31.352 [2024-12-03 00:56:43.825898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:19029 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:31.352 [2024-12-03 00:56:43.825909] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:31.352 [2024-12-03 00:56:43.838125] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x105a8d0) 00:22:31.352 [2024-12-03 00:56:43.838158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:13767 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:31.352 [2024-12-03 00:56:43.838176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:31.352 [2024-12-03 00:56:43.850354] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x105a8d0) 00:22:31.352 [2024-12-03 00:56:43.850394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:5897 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:31.352 [2024-12-03 00:56:43.850427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:31.352 [2024-12-03 00:56:43.858764] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x105a8d0) 00:22:31.352 [2024-12-03 00:56:43.858795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:14920 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:31.352 [2024-12-03 00:56:43.858806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:31.610 [2024-12-03 00:56:43.874484] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x105a8d0) 00:22:31.610 [2024-12-03 00:56:43.874537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:23181 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:31.610 [2024-12-03 00:56:43.874548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:31.610 [2024-12-03 00:56:43.882689] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x105a8d0) 00:22:31.610 [2024-12-03 00:56:43.882721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:9560 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:31.610 [2024-12-03 00:56:43.882743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:31.610 [2024-12-03 00:56:43.895360] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x105a8d0) 00:22:31.610 [2024-12-03 00:56:43.895392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:10630 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:31.610 [2024-12-03 00:56:43.895403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:31.610 [2024-12-03 00:56:43.908244] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x105a8d0) 00:22:31.610 [2024-12-03 00:56:43.908276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:14197 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:31.610 [2024-12-03 
00:56:43.908299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:31.610 [2024-12-03 00:56:43.920101] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x105a8d0) 00:22:31.610 [2024-12-03 00:56:43.920144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:21327 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:31.610 [2024-12-03 00:56:43.920165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:31.610 [2024-12-03 00:56:43.931730] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x105a8d0) 00:22:31.610 [2024-12-03 00:56:43.931773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:9541 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:31.610 [2024-12-03 00:56:43.931784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:31.610 [2024-12-03 00:56:43.942564] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x105a8d0) 00:22:31.610 [2024-12-03 00:56:43.942607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:19116 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:31.610 [2024-12-03 00:56:43.942629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:31.610 [2024-12-03 00:56:43.955649] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x105a8d0) 00:22:31.610 [2024-12-03 00:56:43.955692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:22995 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:31.610 [2024-12-03 00:56:43.955712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:31.610 [2024-12-03 00:56:43.965764] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x105a8d0) 00:22:31.610 [2024-12-03 00:56:43.965795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:11639 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:31.610 [2024-12-03 00:56:43.965806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:31.610 [2024-12-03 00:56:43.975218] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x105a8d0) 00:22:31.610 [2024-12-03 00:56:43.975251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:14261 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:31.610 [2024-12-03 00:56:43.975262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:31.610 [2024-12-03 00:56:43.985601] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x105a8d0) 00:22:31.610 [2024-12-03 00:56:43.985644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:4946 len:1 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:31.610 [2024-12-03 00:56:43.985655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:31.610 [2024-12-03 00:56:43.995434] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x105a8d0) 00:22:31.610 [2024-12-03 00:56:43.995476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:23875 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:31.610 [2024-12-03 00:56:43.995496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:31.610 [2024-12-03 00:56:44.007267] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x105a8d0) 00:22:31.610 [2024-12-03 00:56:44.007311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23380 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:31.610 [2024-12-03 00:56:44.007331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:31.610 [2024-12-03 00:56:44.016163] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x105a8d0) 00:22:31.610 [2024-12-03 00:56:44.016194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:16322 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:31.610 [2024-12-03 00:56:44.016205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:31.610 [2024-12-03 00:56:44.026275] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x105a8d0) 00:22:31.610 [2024-12-03 00:56:44.026306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:10428 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:31.610 [2024-12-03 00:56:44.026317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:31.610 [2024-12-03 00:56:44.037238] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x105a8d0) 00:22:31.610 [2024-12-03 00:56:44.037281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:14743 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:31.610 [2024-12-03 00:56:44.037300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:31.610 [2024-12-03 00:56:44.045739] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x105a8d0) 00:22:31.610 [2024-12-03 00:56:44.045771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:4770 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:31.610 [2024-12-03 00:56:44.045782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:31.610 [2024-12-03 00:56:44.056118] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x105a8d0) 00:22:31.610 [2024-12-03 00:56:44.056149] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:109 nsid:1 lba:12150 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:31.610 [2024-12-03 00:56:44.056160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:31.610 [2024-12-03 00:56:44.068449] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x105a8d0) 00:22:31.610 [2024-12-03 00:56:44.068480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:674 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:31.610 [2024-12-03 00:56:44.068491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:31.610 [2024-12-03 00:56:44.076506] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x105a8d0) 00:22:31.610 [2024-12-03 00:56:44.076548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:1169 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:31.610 [2024-12-03 00:56:44.076569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:31.610 [2024-12-03 00:56:44.089600] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x105a8d0) 00:22:31.610 [2024-12-03 00:56:44.089643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:21135 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:31.611 [2024-12-03 00:56:44.089664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:31.611 [2024-12-03 00:56:44.100719] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x105a8d0) 00:22:31.611 [2024-12-03 00:56:44.100752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:4553 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:31.611 [2024-12-03 00:56:44.100764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:31.611 [2024-12-03 00:56:44.111078] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x105a8d0) 00:22:31.611 [2024-12-03 00:56:44.111110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:2342 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:31.611 [2024-12-03 00:56:44.111121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:31.611 [2024-12-03 00:56:44.119880] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x105a8d0) 00:22:31.611 [2024-12-03 00:56:44.119912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:14305 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:31.611 [2024-12-03 00:56:44.119923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:31.869 [2024-12-03 00:56:44.132907] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x105a8d0) 00:22:31.869 [2024-12-03 
00:56:44.132950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:2490 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:31.869 [2024-12-03 00:56:44.132961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:31.869 [2024-12-03 00:56:44.146641] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x105a8d0) 00:22:31.869 [2024-12-03 00:56:44.146671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:4263 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:31.869 [2024-12-03 00:56:44.146682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:31.869 [2024-12-03 00:56:44.159171] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x105a8d0) 00:22:31.869 [2024-12-03 00:56:44.159203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:17121 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:31.869 [2024-12-03 00:56:44.159214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:31.869 [2024-12-03 00:56:44.170714] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x105a8d0) 00:22:31.869 [2024-12-03 00:56:44.170759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:15146 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:31.869 [2024-12-03 00:56:44.170781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:31.869 [2024-12-03 00:56:44.182678] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x105a8d0) 00:22:31.869 [2024-12-03 00:56:44.182721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:3118 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:31.869 [2024-12-03 00:56:44.182732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:31.869 [2024-12-03 00:56:44.192582] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x105a8d0) 00:22:31.869 [2024-12-03 00:56:44.192625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:23272 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:31.869 [2024-12-03 00:56:44.192646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:31.869 [2024-12-03 00:56:44.201422] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x105a8d0) 00:22:31.869 [2024-12-03 00:56:44.201452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:14039 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:31.869 [2024-12-03 00:56:44.201463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:31.869 [2024-12-03 00:56:44.212613] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest 
error on tqpair=(0x105a8d0) 00:22:31.869 [2024-12-03 00:56:44.212657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:14844 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:31.869 [2024-12-03 00:56:44.212669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:31.869 [2024-12-03 00:56:44.225267] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x105a8d0) 00:22:31.869 [2024-12-03 00:56:44.225299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:23914 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:31.869 [2024-12-03 00:56:44.225322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:31.869 [2024-12-03 00:56:44.237272] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x105a8d0) 00:22:31.869 [2024-12-03 00:56:44.237304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:8788 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:31.869 [2024-12-03 00:56:44.237315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:31.869 [2024-12-03 00:56:44.249911] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x105a8d0) 00:22:31.869 [2024-12-03 00:56:44.249945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:6782 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:31.869 [2024-12-03 00:56:44.249968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:31.869 [2024-12-03 00:56:44.261061] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x105a8d0) 00:22:31.869 [2024-12-03 00:56:44.261092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:9690 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:31.869 [2024-12-03 00:56:44.261102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:31.869 [2024-12-03 00:56:44.269511] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x105a8d0) 00:22:31.869 [2024-12-03 00:56:44.269542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:5724 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:31.869 [2024-12-03 00:56:44.269552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:31.869 [2024-12-03 00:56:44.280834] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x105a8d0) 00:22:31.869 [2024-12-03 00:56:44.280867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:25000 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:31.869 [2024-12-03 00:56:44.280891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:31.869 [2024-12-03 00:56:44.293263] 
nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x105a8d0) 00:22:31.869 [2024-12-03 00:56:44.293296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:9146 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:31.869 [2024-12-03 00:56:44.293320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:31.869 [2024-12-03 00:56:44.305504] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x105a8d0) 00:22:31.869 [2024-12-03 00:56:44.305536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:21558 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:31.869 [2024-12-03 00:56:44.305557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:31.869 [2024-12-03 00:56:44.317998] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x105a8d0) 00:22:31.869 [2024-12-03 00:56:44.318043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:12128 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:31.869 [2024-12-03 00:56:44.318063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:31.869 [2024-12-03 00:56:44.329145] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x105a8d0) 00:22:31.869 [2024-12-03 00:56:44.329177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:24203 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:31.869 [2024-12-03 00:56:44.329188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:31.869 [2024-12-03 00:56:44.339035] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x105a8d0) 00:22:31.869 [2024-12-03 00:56:44.339067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22241 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:31.869 [2024-12-03 00:56:44.339089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:31.869 [2024-12-03 00:56:44.351248] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x105a8d0) 00:22:31.869 [2024-12-03 00:56:44.351280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:24687 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:31.869 [2024-12-03 00:56:44.351303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:31.869 [2024-12-03 00:56:44.360334] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x105a8d0) 00:22:31.869 [2024-12-03 00:56:44.360366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:2628 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:31.869 [2024-12-03 00:56:44.360389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0001 p:0 m:0 
dnr:0 00:22:31.869 [2024-12-03 00:56:44.371324] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x105a8d0) 00:22:31.869 [2024-12-03 00:56:44.371357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23447 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:31.869 [2024-12-03 00:56:44.371378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:31.869 [2024-12-03 00:56:44.381300] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x105a8d0) 00:22:31.869 [2024-12-03 00:56:44.381332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:23737 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:31.870 [2024-12-03 00:56:44.381342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:32.129 [2024-12-03 00:56:44.392585] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x105a8d0) 00:22:32.129 [2024-12-03 00:56:44.392627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:19928 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.129 [2024-12-03 00:56:44.392648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:32.129 [2024-12-03 00:56:44.405436] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x105a8d0) 00:22:32.129 [2024-12-03 00:56:44.405467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:9286 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.129 [2024-12-03 00:56:44.405478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:32.129 [2024-12-03 00:56:44.418067] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x105a8d0) 00:22:32.129 [2024-12-03 00:56:44.418099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:9814 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.129 [2024-12-03 00:56:44.418109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:32.129 [2024-12-03 00:56:44.427753] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x105a8d0) 00:22:32.129 [2024-12-03 00:56:44.427783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:19576 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.129 [2024-12-03 00:56:44.427806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:32.129 [2024-12-03 00:56:44.440390] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x105a8d0) 00:22:32.129 [2024-12-03 00:56:44.440434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:12722 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.129 [2024-12-03 00:56:44.440446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:32.129 [2024-12-03 00:56:44.448449] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x105a8d0) 00:22:32.129 [2024-12-03 00:56:44.448491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:20500 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.129 [2024-12-03 00:56:44.448502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:32.129 [2024-12-03 00:56:44.460318] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x105a8d0) 00:22:32.129 [2024-12-03 00:56:44.460349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:24895 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.129 [2024-12-03 00:56:44.460360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:32.129 [2024-12-03 00:56:44.468024] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x105a8d0) 00:22:32.129 [2024-12-03 00:56:44.468055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:20931 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.129 [2024-12-03 00:56:44.468066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:32.129 [2024-12-03 00:56:44.479758] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x105a8d0) 00:22:32.129 [2024-12-03 00:56:44.479790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:21807 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.129 [2024-12-03 00:56:44.479812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:32.129 [2024-12-03 00:56:44.492428] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x105a8d0) 00:22:32.129 [2024-12-03 00:56:44.492470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:11210 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.129 [2024-12-03 00:56:44.492491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:32.129 [2024-12-03 00:56:44.502404] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x105a8d0) 00:22:32.129 [2024-12-03 00:56:44.502459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:17856 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.129 [2024-12-03 00:56:44.502479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:32.129 [2024-12-03 00:56:44.511179] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x105a8d0) 00:22:32.129 [2024-12-03 00:56:44.511210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:3760 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.129 [2024-12-03 00:56:44.511221] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:32.129 [2024-12-03 00:56:44.520657] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x105a8d0) 00:22:32.129 [2024-12-03 00:56:44.520689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:21361 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.129 [2024-12-03 00:56:44.520700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:32.129 [2024-12-03 00:56:44.531064] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x105a8d0) 00:22:32.129 [2024-12-03 00:56:44.531096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:16118 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.129 [2024-12-03 00:56:44.531116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:32.129 [2024-12-03 00:56:44.543208] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x105a8d0) 00:22:32.129 [2024-12-03 00:56:44.543240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:23087 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.129 [2024-12-03 00:56:44.543250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:32.129 [2024-12-03 00:56:44.554615] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x105a8d0) 00:22:32.129 [2024-12-03 00:56:44.554646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:11716 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.129 [2024-12-03 00:56:44.554668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:32.129 [2024-12-03 00:56:44.563858] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x105a8d0) 00:22:32.129 [2024-12-03 00:56:44.563890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:8073 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.129 [2024-12-03 00:56:44.563914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:32.130 [2024-12-03 00:56:44.575001] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x105a8d0) 00:22:32.130 [2024-12-03 00:56:44.575032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:14486 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.130 [2024-12-03 00:56:44.575056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:32.130 [2024-12-03 00:56:44.585400] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x105a8d0) 00:22:32.130 [2024-12-03 00:56:44.585441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:14296 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:22:32.130 [2024-12-03 00:56:44.585453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:32.130 [2024-12-03 00:56:44.595921] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x105a8d0) 00:22:32.130 [2024-12-03 00:56:44.595953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:8180 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.130 [2024-12-03 00:56:44.595975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:32.130 [2024-12-03 00:56:44.605292] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x105a8d0) 00:22:32.130 [2024-12-03 00:56:44.605323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:11895 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.130 [2024-12-03 00:56:44.605346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:32.130 [2024-12-03 00:56:44.614762] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x105a8d0) 00:22:32.130 [2024-12-03 00:56:44.614794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:6424 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.130 [2024-12-03 00:56:44.614804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:32.130 [2024-12-03 00:56:44.624769] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x105a8d0) 00:22:32.130 [2024-12-03 00:56:44.624801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:24818 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.130 [2024-12-03 00:56:44.624812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:32.130 [2024-12-03 00:56:44.633732] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x105a8d0) 00:22:32.130 [2024-12-03 00:56:44.633763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:10917 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.130 [2024-12-03 00:56:44.633785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:32.389 [2024-12-03 00:56:44.644633] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x105a8d0) 00:22:32.389 [2024-12-03 00:56:44.644676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:2351 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.389 [2024-12-03 00:56:44.644697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:32.389 [2024-12-03 00:56:44.656300] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x105a8d0) 00:22:32.389 [2024-12-03 00:56:44.656332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 
lba:10328 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.389 [2024-12-03 00:56:44.656342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:32.389 [2024-12-03 00:56:44.665903] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x105a8d0) 00:22:32.389 [2024-12-03 00:56:44.665935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:10919 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.389 [2024-12-03 00:56:44.665946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:32.389 [2024-12-03 00:56:44.676239] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x105a8d0) 00:22:32.389 [2024-12-03 00:56:44.676271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:20911 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.389 [2024-12-03 00:56:44.676294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:32.389 [2024-12-03 00:56:44.685361] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x105a8d0) 00:22:32.389 [2024-12-03 00:56:44.685393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:24209 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.389 [2024-12-03 00:56:44.685403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:32.389 [2024-12-03 00:56:44.695940] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x105a8d0) 00:22:32.389 [2024-12-03 00:56:44.695973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:15268 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.389 [2024-12-03 00:56:44.695983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:32.389 [2024-12-03 00:56:44.707203] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x105a8d0) 00:22:32.389 [2024-12-03 00:56:44.707235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:1810 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.389 [2024-12-03 00:56:44.707246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:32.389 [2024-12-03 00:56:44.716926] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x105a8d0) 00:22:32.389 [2024-12-03 00:56:44.716958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:5462 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.389 [2024-12-03 00:56:44.716969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:32.389 [2024-12-03 00:56:44.725108] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x105a8d0) 00:22:32.389 [2024-12-03 00:56:44.725139] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:14703 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.389 [2024-12-03 00:56:44.725150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:32.389 [2024-12-03 00:56:44.735314] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x105a8d0) 00:22:32.389 [2024-12-03 00:56:44.735346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:18362 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.389 [2024-12-03 00:56:44.735367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:32.389 [2024-12-03 00:56:44.748389] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x105a8d0) 00:22:32.389 [2024-12-03 00:56:44.748430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:19259 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.389 [2024-12-03 00:56:44.748443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:32.389 [2024-12-03 00:56:44.759109] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x105a8d0) 00:22:32.389 [2024-12-03 00:56:44.759140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:16463 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.389 [2024-12-03 00:56:44.759150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:32.389 [2024-12-03 00:56:44.770398] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x105a8d0) 00:22:32.389 [2024-12-03 00:56:44.770443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:3723 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.389 [2024-12-03 00:56:44.770471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:32.389 [2024-12-03 00:56:44.779024] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x105a8d0) 00:22:32.389 [2024-12-03 00:56:44.779056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:15048 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.389 [2024-12-03 00:56:44.779066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:32.389 [2024-12-03 00:56:44.791023] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x105a8d0) 00:22:32.389 [2024-12-03 00:56:44.791054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:16669 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.389 [2024-12-03 00:56:44.791077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:32.389 [2024-12-03 00:56:44.799422] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x105a8d0) 
00:22:32.389 [2024-12-03 00:56:44.799451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:15808 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.389 [2024-12-03 00:56:44.799475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:32.389 [2024-12-03 00:56:44.808821] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x105a8d0) 00:22:32.389 [2024-12-03 00:56:44.808864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:3733 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.389 [2024-12-03 00:56:44.808875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:32.389 [2024-12-03 00:56:44.818291] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x105a8d0) 00:22:32.389 [2024-12-03 00:56:44.818322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:8809 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.389 [2024-12-03 00:56:44.818332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:32.389 [2024-12-03 00:56:44.829444] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x105a8d0) 00:22:32.389 [2024-12-03 00:56:44.829476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:1441 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.390 [2024-12-03 00:56:44.829499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:32.390 [2024-12-03 00:56:44.841746] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x105a8d0) 00:22:32.390 [2024-12-03 00:56:44.841777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:7998 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.390 [2024-12-03 00:56:44.841787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:32.390 [2024-12-03 00:56:44.853070] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x105a8d0) 00:22:32.390 [2024-12-03 00:56:44.853101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:10287 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.390 [2024-12-03 00:56:44.853125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:32.390 [2024-12-03 00:56:44.862266] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x105a8d0) 00:22:32.390 [2024-12-03 00:56:44.862298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:3502 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.390 [2024-12-03 00:56:44.862308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:32.390 [2024-12-03 00:56:44.873825] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: 
*ERROR*: data digest error on tqpair=(0x105a8d0) 00:22:32.390 [2024-12-03 00:56:44.873857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17054 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.390 [2024-12-03 00:56:44.873880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:32.390 [2024-12-03 00:56:44.885284] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x105a8d0) 00:22:32.390 [2024-12-03 00:56:44.885315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:719 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.390 [2024-12-03 00:56:44.885337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:32.390 [2024-12-03 00:56:44.897477] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x105a8d0) 00:22:32.390 [2024-12-03 00:56:44.897508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:13803 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.390 [2024-12-03 00:56:44.897518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:32.650 [2024-12-03 00:56:44.910624] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x105a8d0) 00:22:32.650 [2024-12-03 00:56:44.910666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:24338 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.650 [2024-12-03 00:56:44.910689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:32.650 [2024-12-03 00:56:44.919035] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x105a8d0) 00:22:32.650 [2024-12-03 00:56:44.919066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:5345 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.650 [2024-12-03 00:56:44.919077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:32.650 [2024-12-03 00:56:44.931186] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x105a8d0) 00:22:32.650 [2024-12-03 00:56:44.931217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:13545 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.650 [2024-12-03 00:56:44.931228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:32.650 [2024-12-03 00:56:44.942772] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x105a8d0) 00:22:32.650 [2024-12-03 00:56:44.942804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:17483 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.650 [2024-12-03 00:56:44.942827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:32.650 [2024-12-03 00:56:44.951553] 
nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x105a8d0) 00:22:32.650 [2024-12-03 00:56:44.951584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:23171 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.650 [2024-12-03 00:56:44.951606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:32.650 [2024-12-03 00:56:44.963841] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x105a8d0) 00:22:32.650 [2024-12-03 00:56:44.963873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:16931 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.650 [2024-12-03 00:56:44.963883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:32.650 [2024-12-03 00:56:44.974554] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x105a8d0) 00:22:32.650 [2024-12-03 00:56:44.974585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:1649 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.650 [2024-12-03 00:56:44.974596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:32.650 [2024-12-03 00:56:44.984000] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x105a8d0) 00:22:32.650 [2024-12-03 00:56:44.984031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:23487 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.650 [2024-12-03 00:56:44.984054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:32.650 [2024-12-03 00:56:44.996396] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x105a8d0) 00:22:32.650 [2024-12-03 00:56:44.996437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24414 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.650 [2024-12-03 00:56:44.996448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:32.650 [2024-12-03 00:56:45.007613] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x105a8d0) 00:22:32.650 [2024-12-03 00:56:45.007645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:9995 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.650 [2024-12-03 00:56:45.007655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:32.650 [2024-12-03 00:56:45.019344] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x105a8d0) 00:22:32.650 [2024-12-03 00:56:45.019377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:7868 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.650 [2024-12-03 00:56:45.019388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 
dnr:0 00:22:32.650 [2024-12-03 00:56:45.028127] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x105a8d0) 00:22:32.650 [2024-12-03 00:56:45.028159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:24234 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.650 [2024-12-03 00:56:45.028170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:32.650 [2024-12-03 00:56:45.038055] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x105a8d0) 00:22:32.650 [2024-12-03 00:56:45.038087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:22131 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.650 [2024-12-03 00:56:45.038097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:32.650 [2024-12-03 00:56:45.047755] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x105a8d0) 00:22:32.650 [2024-12-03 00:56:45.047786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:5428 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.650 [2024-12-03 00:56:45.047797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:32.650 [2024-12-03 00:56:45.057530] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x105a8d0) 00:22:32.650 [2024-12-03 00:56:45.057561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:4496 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.650 [2024-12-03 00:56:45.057571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:32.650 [2024-12-03 00:56:45.067432] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x105a8d0) 00:22:32.650 [2024-12-03 00:56:45.067464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:4957 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.650 [2024-12-03 00:56:45.067488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:32.650 [2024-12-03 00:56:45.076765] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x105a8d0) 00:22:32.650 [2024-12-03 00:56:45.076809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:17405 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.650 [2024-12-03 00:56:45.076820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:32.650 [2024-12-03 00:56:45.088626] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x105a8d0) 00:22:32.650 [2024-12-03 00:56:45.088669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:22824 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.650 [2024-12-03 00:56:45.088691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:32.650 [2024-12-03 00:56:45.099302] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x105a8d0) 00:22:32.650 [2024-12-03 00:56:45.099334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:4388 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.650 [2024-12-03 00:56:45.099357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:32.650 [2024-12-03 00:56:45.108821] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x105a8d0) 00:22:32.650 [2024-12-03 00:56:45.108854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:21018 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.650 [2024-12-03 00:56:45.108864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:32.650 [2024-12-03 00:56:45.120405] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x105a8d0) 00:22:32.651 [2024-12-03 00:56:45.120445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:22953 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.651 [2024-12-03 00:56:45.120456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:32.651 [2024-12-03 00:56:45.130784] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x105a8d0) 00:22:32.651 [2024-12-03 00:56:45.130816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:24673 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.651 [2024-12-03 00:56:45.130827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:32.651 [2024-12-03 00:56:45.140565] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x105a8d0) 00:22:32.651 [2024-12-03 00:56:45.140607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:6170 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.651 [2024-12-03 00:56:45.140619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:32.651 [2024-12-03 00:56:45.152929] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x105a8d0) 00:22:32.651 [2024-12-03 00:56:45.152962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:19217 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.651 [2024-12-03 00:56:45.152972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:32.909 [2024-12-03 00:56:45.167317] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x105a8d0) 00:22:32.909 [2024-12-03 00:56:45.167347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:23703 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.909 [2024-12-03 00:56:45.167358] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:32.909 [2024-12-03 00:56:45.178467] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x105a8d0) 00:22:32.909 [2024-12-03 00:56:45.178498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:8008 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.909 [2024-12-03 00:56:45.178509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:32.909 [2024-12-03 00:56:45.189216] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x105a8d0) 00:22:32.909 [2024-12-03 00:56:45.189258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:906 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.909 [2024-12-03 00:56:45.189281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:32.909 [2024-12-03 00:56:45.200258] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x105a8d0) 00:22:32.909 [2024-12-03 00:56:45.200301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:19556 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.909 [2024-12-03 00:56:45.200312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:32.909 [2024-12-03 00:56:45.210438] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x105a8d0) 00:22:32.909 [2024-12-03 00:56:45.210480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:25428 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.909 [2024-12-03 00:56:45.210502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:32.909 [2024-12-03 00:56:45.220685] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x105a8d0) 00:22:32.909 [2024-12-03 00:56:45.220728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:13109 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.909 [2024-12-03 00:56:45.220748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:32.909 [2024-12-03 00:56:45.232220] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x105a8d0) 00:22:32.909 [2024-12-03 00:56:45.232252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:5370 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.909 [2024-12-03 00:56:45.232263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:32.909 [2024-12-03 00:56:45.243397] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x105a8d0) 00:22:32.909 [2024-12-03 00:56:45.243443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:8229 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:22:32.909 [2024-12-03 00:56:45.243463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:32.909 [2024-12-03 00:56:45.252320] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x105a8d0) 00:22:32.909 [2024-12-03 00:56:45.252362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:13276 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.909 [2024-12-03 00:56:45.252382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:32.909 [2024-12-03 00:56:45.265279] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x105a8d0) 00:22:32.909 [2024-12-03 00:56:45.265323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:17523 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.909 [2024-12-03 00:56:45.265334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:32.909 [2024-12-03 00:56:45.277109] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x105a8d0) 00:22:32.909 [2024-12-03 00:56:45.277140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17501 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.909 [2024-12-03 00:56:45.277151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:32.909 [2024-12-03 00:56:45.288886] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x105a8d0) 00:22:32.909 [2024-12-03 00:56:45.288918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:12886 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.909 [2024-12-03 00:56:45.288929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:32.909 [2024-12-03 00:56:45.301036] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x105a8d0) 00:22:32.909 [2024-12-03 00:56:45.301067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:22941 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.909 [2024-12-03 00:56:45.301078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:32.910 [2024-12-03 00:56:45.312609] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x105a8d0) 00:22:32.910 [2024-12-03 00:56:45.312642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:10298 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.910 [2024-12-03 00:56:45.312653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:32.910 [2024-12-03 00:56:45.321622] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x105a8d0) 00:22:32.910 [2024-12-03 00:56:45.321653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 
lba:17947 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.910 [2024-12-03 00:56:45.321664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:32.910 [2024-12-03 00:56:45.333998] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x105a8d0) 00:22:32.910 [2024-12-03 00:56:45.334037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:8935 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.910 [2024-12-03 00:56:45.334047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:32.910 [2024-12-03 00:56:45.342923] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x105a8d0) 00:22:32.910 [2024-12-03 00:56:45.342966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:14196 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.910 [2024-12-03 00:56:45.342987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:32.910 [2024-12-03 00:56:45.353352] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x105a8d0) 00:22:32.910 [2024-12-03 00:56:45.353384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:23222 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.910 [2024-12-03 00:56:45.353395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:32.910 [2024-12-03 00:56:45.362612] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x105a8d0) 00:22:32.910 [2024-12-03 00:56:45.362654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:19721 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.910 [2024-12-03 00:56:45.362675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:32.910 [2024-12-03 00:56:45.371905] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x105a8d0) 00:22:32.910 [2024-12-03 00:56:45.371937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:23483 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.910 [2024-12-03 00:56:45.371948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:32.910 [2024-12-03 00:56:45.381546] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x105a8d0) 00:22:32.910 [2024-12-03 00:56:45.381578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:16816 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.910 [2024-12-03 00:56:45.381589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:32.910 [2024-12-03 00:56:45.390817] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x105a8d0) 00:22:32.910 [2024-12-03 00:56:45.390869] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:18686 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.910 [2024-12-03 00:56:45.390879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:32.910 [2024-12-03 00:56:45.400798] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x105a8d0) 00:22:32.910 [2024-12-03 00:56:45.400830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:2155 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.910 [2024-12-03 00:56:45.400842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:32.910 [2024-12-03 00:56:45.412348] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x105a8d0) 00:22:32.910 [2024-12-03 00:56:45.412380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:3747 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.910 [2024-12-03 00:56:45.412402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:32.910 [2024-12-03 00:56:45.423249] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x105a8d0) 00:22:32.910 [2024-12-03 00:56:45.423282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:18446 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.910 [2024-12-03 00:56:45.423293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:33.171 [2024-12-03 00:56:45.433217] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x105a8d0) 00:22:33.171 [2024-12-03 00:56:45.433249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:15133 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.171 [2024-12-03 00:56:45.433260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:33.171 [2024-12-03 00:56:45.443739] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x105a8d0) 00:22:33.171 [2024-12-03 00:56:45.443770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:6647 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.171 [2024-12-03 00:56:45.443793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:33.171 [2024-12-03 00:56:45.454976] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x105a8d0) 00:22:33.171 [2024-12-03 00:56:45.455008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:13152 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.171 [2024-12-03 00:56:45.455031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:33.171 [2024-12-03 00:56:45.465111] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x105a8d0) 00:22:33.171 
[2024-12-03 00:56:45.465143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:14267 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.171 [2024-12-03 00:56:45.465153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:33.171 [2024-12-03 00:56:45.474306] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x105a8d0) 00:22:33.171 [2024-12-03 00:56:45.474337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:1549 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.171 [2024-12-03 00:56:45.474348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:33.171 [2024-12-03 00:56:45.485866] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x105a8d0) 00:22:33.171 [2024-12-03 00:56:45.485897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:19412 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.171 [2024-12-03 00:56:45.485921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:33.171 [2024-12-03 00:56:45.498677] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x105a8d0) 00:22:33.171 [2024-12-03 00:56:45.498722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:18852 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.171 [2024-12-03 00:56:45.498733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:33.171 [2024-12-03 00:56:45.508729] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x105a8d0) 00:22:33.171 [2024-12-03 00:56:45.508761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:5460 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.171 [2024-12-03 00:56:45.508771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:33.171 [2024-12-03 00:56:45.518983] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x105a8d0) 00:22:33.171 [2024-12-03 00:56:45.519015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:23990 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.171 [2024-12-03 00:56:45.519027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:33.171 [2024-12-03 00:56:45.528985] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x105a8d0) 00:22:33.171 [2024-12-03 00:56:45.529016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25239 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.171 [2024-12-03 00:56:45.529039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:33.171 [2024-12-03 00:56:45.538667] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data 
digest error on tqpair=(0x105a8d0) 00:22:33.171 [2024-12-03 00:56:45.538700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:10000 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.171 [2024-12-03 00:56:45.538711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:33.171 [2024-12-03 00:56:45.549519] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x105a8d0) 00:22:33.171 [2024-12-03 00:56:45.549550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:18153 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.171 [2024-12-03 00:56:45.549560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:33.171 [2024-12-03 00:56:45.558281] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x105a8d0) 00:22:33.171 [2024-12-03 00:56:45.558324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:2801 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.171 [2024-12-03 00:56:45.558345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:33.171 [2024-12-03 00:56:45.568253] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x105a8d0) 00:22:33.171 [2024-12-03 00:56:45.568285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:13123 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.171 [2024-12-03 00:56:45.568306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:33.171 [2024-12-03 00:56:45.579308] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x105a8d0) 00:22:33.171 [2024-12-03 00:56:45.579341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:20790 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.171 [2024-12-03 00:56:45.579364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:33.171 [2024-12-03 00:56:45.590037] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x105a8d0) 00:22:33.171 [2024-12-03 00:56:45.590069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:500 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.171 [2024-12-03 00:56:45.590093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:33.171 [2024-12-03 00:56:45.599156] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x105a8d0) 00:22:33.171 [2024-12-03 00:56:45.599188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:17802 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.171 [2024-12-03 00:56:45.599209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:33.171 [2024-12-03 00:56:45.609028] 
nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x105a8d0) 00:22:33.171 [2024-12-03 00:56:45.609060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:24913 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.171 [2024-12-03 00:56:45.609071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:33.171 [2024-12-03 00:56:45.621332] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x105a8d0) 00:22:33.171 [2024-12-03 00:56:45.621363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:21400 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.171 [2024-12-03 00:56:45.621374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:33.171 [2024-12-03 00:56:45.631009] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x105a8d0) 00:22:33.171 [2024-12-03 00:56:45.631040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:22885 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.171 [2024-12-03 00:56:45.631064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:33.171 [2024-12-03 00:56:45.640869] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x105a8d0) 00:22:33.172 [2024-12-03 00:56:45.640900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:3641 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.172 [2024-12-03 00:56:45.640911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:33.172 [2024-12-03 00:56:45.652903] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x105a8d0) 00:22:33.172 [2024-12-03 00:56:45.652935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:5285 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.172 [2024-12-03 00:56:45.652946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:33.172 00:22:33.172 Latency(us) 00:22:33.172 [2024-12-03T00:56:45.687Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:33.172 [2024-12-03T00:56:45.687Z] Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:22:33.172 nvme0n1 : 2.00 23530.03 91.91 0.00 0.00 5435.42 2398.02 19065.02 00:22:33.172 [2024-12-03T00:56:45.687Z] =================================================================================================================== 00:22:33.172 [2024-12-03T00:56:45.687Z] Total : 23530.03 91.91 0.00 0.00 5435.42 2398.02 19065.02 00:22:33.172 0 00:22:33.172 00:56:45 -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:22:33.448 00:56:45 -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:22:33.448 00:56:45 -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:22:33.448 | .driver_specific 00:22:33.448 | .nvme_error 00:22:33.448 | .status_code 00:22:33.448 | .command_transient_transport_error' 00:22:33.448 00:56:45 -- 
host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:22:33.448 00:56:45 -- host/digest.sh@71 -- # (( 184 > 0 )) 00:22:33.448 00:56:45 -- host/digest.sh@73 -- # killprocess 97838 00:22:33.448 00:56:45 -- common/autotest_common.sh@936 -- # '[' -z 97838 ']' 00:22:33.448 00:56:45 -- common/autotest_common.sh@940 -- # kill -0 97838 00:22:33.448 00:56:45 -- common/autotest_common.sh@941 -- # uname 00:22:33.448 00:56:45 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:22:33.448 00:56:45 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 97838 00:22:33.726 00:56:45 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:22:33.726 00:56:45 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:22:33.726 00:56:45 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 97838' 00:22:33.726 killing process with pid 97838 00:22:33.726 00:56:45 -- common/autotest_common.sh@955 -- # kill 97838 00:22:33.726 Received shutdown signal, test time was about 2.000000 seconds 00:22:33.726 00:22:33.726 Latency(us) 00:22:33.726 [2024-12-03T00:56:46.241Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:33.726 [2024-12-03T00:56:46.241Z] =================================================================================================================== 00:22:33.726 [2024-12-03T00:56:46.241Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:22:33.726 00:56:45 -- common/autotest_common.sh@960 -- # wait 97838 00:22:33.726 00:56:46 -- host/digest.sh@108 -- # run_bperf_err randread 131072 16 00:22:33.726 00:56:46 -- host/digest.sh@54 -- # local rw bs qd 00:22:33.726 00:56:46 -- host/digest.sh@56 -- # rw=randread 00:22:33.726 00:56:46 -- host/digest.sh@56 -- # bs=131072 00:22:33.726 00:56:46 -- host/digest.sh@56 -- # qd=16 00:22:33.726 00:56:46 -- host/digest.sh@58 -- # bperfpid=97929 00:22:33.726 00:56:46 -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z 00:22:33.726 00:56:46 -- host/digest.sh@60 -- # waitforlisten 97929 /var/tmp/bperf.sock 00:22:33.726 00:56:46 -- common/autotest_common.sh@829 -- # '[' -z 97929 ']' 00:22:33.726 00:56:46 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:22:33.726 00:56:46 -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:33.726 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:22:33.726 00:56:46 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:22:33.726 00:56:46 -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:33.726 00:56:46 -- common/autotest_common.sh@10 -- # set +x 00:22:33.996 [2024-12-03 00:56:46.274671] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:22:33.996 [2024-12-03 00:56:46.274784] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid97929 ] 00:22:33.996 I/O size of 131072 is greater than zero copy threshold (65536). 00:22:33.996 Zero copy mechanism will not be used. 
00:22:33.996 [2024-12-03 00:56:46.407813] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:33.996 [2024-12-03 00:56:46.471454] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:22:34.932 00:56:47 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:34.932 00:56:47 -- common/autotest_common.sh@862 -- # return 0 00:22:34.932 00:56:47 -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:22:34.932 00:56:47 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:22:34.932 00:56:47 -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:22:34.932 00:56:47 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:34.932 00:56:47 -- common/autotest_common.sh@10 -- # set +x 00:22:34.932 00:56:47 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:34.932 00:56:47 -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:22:34.932 00:56:47 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:22:35.500 nvme0n1 00:22:35.500 00:56:47 -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:22:35.500 00:56:47 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:35.500 00:56:47 -- common/autotest_common.sh@10 -- # set +x 00:22:35.500 00:56:47 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:35.500 00:56:47 -- host/digest.sh@69 -- # bperf_py perform_tests 00:22:35.500 00:56:47 -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:22:35.500 I/O size of 131072 is greater than zero copy threshold (65536). 00:22:35.500 Zero copy mechanism will not be used. 00:22:35.500 Running I/O for 2 seconds... 
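The RPC sequence traced just above is what provokes the stream of digest errors that follows. As a sketch under the same assumptions (rpc.py path, /var/tmp/bperf.sock, and the 10.0.0.2:4420 target from this run), the setup is roughly:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    sock=/var/tmp/bperf.sock
    # Track per-status-code NVMe error counters and retry failed I/O indefinitely in bdevperf.
    "$rpc" -s "$sock" bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
    # Clear any previous crc32c fault injection in the accel framework
    # (sent to the default RPC socket, as rpc_cmd does in the trace above).
    "$rpc" accel_error_inject_error -o crc32c -t disable
    # Attach the TCP controller with data digest (--ddgst) enabled so READ payloads are CRC32C-checked.
    "$rpc" -s "$sock" bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 \
        -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
    # Re-arm the injection to corrupt crc32c results, then drive the 2-second workload;
    # each corrupted digest is logged below as a "data digest error" plus a
    # COMMAND TRANSIENT TRANSPORT ERROR (00/22) completion.
    "$rpc" accel_error_inject_error -o crc32c -t corrupt -i 32
    /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s "$sock" perform_tests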
00:22:35.500 [2024-12-03 00:56:47.878686] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a3bd10) 00:22:35.501 [2024-12-03 00:56:47.878743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.501 [2024-12-03 00:56:47.878761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:35.501 [2024-12-03 00:56:47.882290] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a3bd10) 00:22:35.501 [2024-12-03 00:56:47.882322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.501 [2024-12-03 00:56:47.882333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:35.501 [2024-12-03 00:56:47.886755] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a3bd10) 00:22:35.501 [2024-12-03 00:56:47.886787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.501 [2024-12-03 00:56:47.886797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:35.501 [2024-12-03 00:56:47.890551] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a3bd10) 00:22:35.501 [2024-12-03 00:56:47.890584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.501 [2024-12-03 00:56:47.890603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:35.501 [2024-12-03 00:56:47.894926] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a3bd10) 00:22:35.501 [2024-12-03 00:56:47.894958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.501 [2024-12-03 00:56:47.894969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:35.501 [2024-12-03 00:56:47.898643] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a3bd10) 00:22:35.501 [2024-12-03 00:56:47.898675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:9888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.501 [2024-12-03 00:56:47.898698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:35.501 [2024-12-03 00:56:47.902379] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a3bd10) 00:22:35.501 [2024-12-03 00:56:47.902423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.501 [2024-12-03 00:56:47.902441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:35.501 [2024-12-03 00:56:47.906068] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a3bd10) 00:22:35.501 [2024-12-03 00:56:47.906100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.501 [2024-12-03 00:56:47.906122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:35.501 [2024-12-03 00:56:47.909322] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a3bd10) 00:22:35.501 [2024-12-03 00:56:47.909354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:4256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.501 [2024-12-03 00:56:47.909365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:35.501 [2024-12-03 00:56:47.912819] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a3bd10) 00:22:35.501 [2024-12-03 00:56:47.912851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:10016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.501 [2024-12-03 00:56:47.912862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:35.501 [2024-12-03 00:56:47.916305] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a3bd10) 00:22:35.501 [2024-12-03 00:56:47.916345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.501 [2024-12-03 00:56:47.916369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:35.501 [2024-12-03 00:56:47.919637] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a3bd10) 00:22:35.501 [2024-12-03 00:56:47.919668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:2464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.501 [2024-12-03 00:56:47.919678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:35.501 [2024-12-03 00:56:47.923170] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a3bd10) 00:22:35.501 [2024-12-03 00:56:47.923203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:4000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.501 [2024-12-03 00:56:47.923224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:35.501 [2024-12-03 00:56:47.927222] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a3bd10) 00:22:35.501 [2024-12-03 00:56:47.927254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:23168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.501 [2024-12-03 00:56:47.927278] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:35.501 [2024-12-03 00:56:47.930644] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a3bd10) 00:22:35.501 [2024-12-03 00:56:47.930676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:10240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.501 [2024-12-03 00:56:47.930697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:35.501 [2024-12-03 00:56:47.934119] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a3bd10) 00:22:35.501 [2024-12-03 00:56:47.934150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:32 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.501 [2024-12-03 00:56:47.934179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:35.501 [2024-12-03 00:56:47.937861] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a3bd10) 00:22:35.501 [2024-12-03 00:56:47.937892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:10784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.501 [2024-12-03 00:56:47.937916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:35.501 [2024-12-03 00:56:47.941041] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a3bd10) 00:22:35.501 [2024-12-03 00:56:47.941073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:22464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.501 [2024-12-03 00:56:47.941096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:35.501 [2024-12-03 00:56:47.944545] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a3bd10) 00:22:35.501 [2024-12-03 00:56:47.944577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:10368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.501 [2024-12-03 00:56:47.944587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:35.501 [2024-12-03 00:56:47.948635] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a3bd10) 00:22:35.501 [2024-12-03 00:56:47.948668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:21504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.501 [2024-12-03 00:56:47.948688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:35.501 [2024-12-03 00:56:47.952457] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a3bd10) 00:22:35.501 [2024-12-03 00:56:47.952488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.501 [2024-12-03 00:56:47.952511] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:35.501 [2024-12-03 00:56:47.956053] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a3bd10) 00:22:35.501 [2024-12-03 00:56:47.956084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:1920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.501 [2024-12-03 00:56:47.956106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:35.501 [2024-12-03 00:56:47.959325] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a3bd10) 00:22:35.501 [2024-12-03 00:56:47.959356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:9888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.501 [2024-12-03 00:56:47.959376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:35.501 [2024-12-03 00:56:47.962404] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a3bd10) 00:22:35.501 [2024-12-03 00:56:47.962442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:4448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.501 [2024-12-03 00:56:47.962454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:35.501 [2024-12-03 00:56:47.965959] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a3bd10) 00:22:35.501 [2024-12-03 00:56:47.965990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:4384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.501 [2024-12-03 00:56:47.966000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:35.501 [2024-12-03 00:56:47.970195] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a3bd10) 00:22:35.501 [2024-12-03 00:56:47.970235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:12512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.501 [2024-12-03 00:56:47.970246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:35.501 [2024-12-03 00:56:47.974537] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a3bd10) 00:22:35.501 [2024-12-03 00:56:47.974581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:1312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.502 [2024-12-03 00:56:47.974601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:35.502 [2024-12-03 00:56:47.977710] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a3bd10) 00:22:35.502 [2024-12-03 00:56:47.977741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:7680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:22:35.502 [2024-12-03 00:56:47.977763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:35.502 [2024-12-03 00:56:47.981013] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a3bd10) 00:22:35.502 [2024-12-03 00:56:47.981044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.502 [2024-12-03 00:56:47.981066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:35.502 [2024-12-03 00:56:47.985286] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a3bd10) 00:22:35.502 [2024-12-03 00:56:47.985317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.502 [2024-12-03 00:56:47.985327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:35.502 [2024-12-03 00:56:47.988510] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a3bd10) 00:22:35.502 [2024-12-03 00:56:47.988542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:11328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.502 [2024-12-03 00:56:47.988563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:35.502 [2024-12-03 00:56:47.992273] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a3bd10) 00:22:35.502 [2024-12-03 00:56:47.992305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:20992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.502 [2024-12-03 00:56:47.992326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:35.502 [2024-12-03 00:56:47.995259] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a3bd10) 00:22:35.502 [2024-12-03 00:56:47.995290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:15648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.502 [2024-12-03 00:56:47.995312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:35.502 [2024-12-03 00:56:47.999460] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a3bd10) 00:22:35.502 [2024-12-03 00:56:47.999491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:24096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.502 [2024-12-03 00:56:47.999513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:35.502 [2024-12-03 00:56:48.002568] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a3bd10) 00:22:35.502 [2024-12-03 00:56:48.002610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:7840 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.502 [2024-12-03 00:56:48.002630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:35.502 [2024-12-03 00:56:48.006272] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a3bd10) 00:22:35.502 [2024-12-03 00:56:48.006303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:12064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.502 [2024-12-03 00:56:48.006324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:35.502 [2024-12-03 00:56:48.010525] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a3bd10) 00:22:35.502 [2024-12-03 00:56:48.010574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.502 [2024-12-03 00:56:48.010597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:35.763 [2024-12-03 00:56:48.015373] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a3bd10) 00:22:35.763 [2024-12-03 00:56:48.015403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:7680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.763 [2024-12-03 00:56:48.015434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:35.763 [2024-12-03 00:56:48.019801] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a3bd10) 00:22:35.763 [2024-12-03 00:56:48.019831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:7296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.763 [2024-12-03 00:56:48.019854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:35.763 [2024-12-03 00:56:48.023374] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a3bd10) 00:22:35.763 [2024-12-03 00:56:48.023407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.763 [2024-12-03 00:56:48.023439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:35.763 [2024-12-03 00:56:48.026385] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a3bd10) 00:22:35.763 [2024-12-03 00:56:48.026430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:21920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.763 [2024-12-03 00:56:48.026449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:35.763 [2024-12-03 00:56:48.030852] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a3bd10) 00:22:35.763 [2024-12-03 00:56:48.030884] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:4 nsid:1 lba:21280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.763 [2024-12-03 00:56:48.030895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:35.763 [2024-12-03 00:56:48.034193] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a3bd10) 00:22:35.763 [2024-12-03 00:56:48.034235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.763 [2024-12-03 00:56:48.034255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:35.763 [2024-12-03 00:56:48.038015] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a3bd10) 00:22:35.763 [2024-12-03 00:56:48.038047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:12928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.763 [2024-12-03 00:56:48.038070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:35.763 [2024-12-03 00:56:48.041145] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a3bd10) 00:22:35.763 [2024-12-03 00:56:48.041177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:3136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.763 [2024-12-03 00:56:48.041200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:35.763 [2024-12-03 00:56:48.045156] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a3bd10) 00:22:35.763 [2024-12-03 00:56:48.045188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.763 [2024-12-03 00:56:48.045211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:35.764 [2024-12-03 00:56:48.048439] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a3bd10) 00:22:35.764 [2024-12-03 00:56:48.048477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:20160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.764 [2024-12-03 00:56:48.048487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:35.764 [2024-12-03 00:56:48.051741] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a3bd10) 00:22:35.764 [2024-12-03 00:56:48.051774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:2784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.764 [2024-12-03 00:56:48.051793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:35.764 [2024-12-03 00:56:48.055558] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a3bd10) 00:22:35.764 [2024-12-03 00:56:48.055591] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:7648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.764 [2024-12-03 00:56:48.055602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:35.764 [2024-12-03 00:56:48.059071] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a3bd10) 00:22:35.764 [2024-12-03 00:56:48.059102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.764 [2024-12-03 00:56:48.059124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:35.764 [2024-12-03 00:56:48.061914] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a3bd10) 00:22:35.764 [2024-12-03 00:56:48.061945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:6688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.764 [2024-12-03 00:56:48.061956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:35.764 [2024-12-03 00:56:48.065499] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a3bd10) 00:22:35.764 [2024-12-03 00:56:48.065531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:19488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.764 [2024-12-03 00:56:48.065554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:35.764 [2024-12-03 00:56:48.069280] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a3bd10) 00:22:35.764 [2024-12-03 00:56:48.069313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:18432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.764 [2024-12-03 00:56:48.069334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:35.764 [2024-12-03 00:56:48.072778] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a3bd10) 00:22:35.764 [2024-12-03 00:56:48.072810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:9728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.764 [2024-12-03 00:56:48.072830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:35.764 [2024-12-03 00:56:48.075971] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a3bd10) 00:22:35.764 [2024-12-03 00:56:48.076004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:5824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.764 [2024-12-03 00:56:48.076025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:35.764 [2024-12-03 00:56:48.079618] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a3bd10) 
00:22:35.764 [2024-12-03 00:56:48.079648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:23104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.764 [2024-12-03 00:56:48.079671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:35.764 [2024-12-03 00:56:48.083209] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a3bd10) 00:22:35.764 [2024-12-03 00:56:48.083242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:13792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.764 [2024-12-03 00:56:48.083266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:35.764 [2024-12-03 00:56:48.087180] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a3bd10) 00:22:35.764 [2024-12-03 00:56:48.087212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:21984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.764 [2024-12-03 00:56:48.087234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:35.764 [2024-12-03 00:56:48.090193] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a3bd10) 00:22:35.764 [2024-12-03 00:56:48.090224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.764 [2024-12-03 00:56:48.090244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:35.764 [2024-12-03 00:56:48.093771] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a3bd10) 00:22:35.764 [2024-12-03 00:56:48.093802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:12096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.764 [2024-12-03 00:56:48.093823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:35.764 [2024-12-03 00:56:48.097045] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a3bd10) 00:22:35.764 [2024-12-03 00:56:48.097077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.764 [2024-12-03 00:56:48.097088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:35.764 [2024-12-03 00:56:48.100527] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a3bd10) 00:22:35.764 [2024-12-03 00:56:48.100558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:7136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.764 [2024-12-03 00:56:48.100569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:35.764 [2024-12-03 00:56:48.103861] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: 
*ERROR*: data digest error on tqpair=(0x1a3bd10) 00:22:35.764 [2024-12-03 00:56:48.103892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:5024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.764 [2024-12-03 00:56:48.103915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:35.764 [2024-12-03 00:56:48.107315] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a3bd10) 00:22:35.764 [2024-12-03 00:56:48.107348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:5888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.764 [2024-12-03 00:56:48.107369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:35.764 [2024-12-03 00:56:48.111205] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a3bd10) 00:22:35.764 [2024-12-03 00:56:48.111237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:15520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.764 [2024-12-03 00:56:48.111261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:35.764 [2024-12-03 00:56:48.114558] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a3bd10) 00:22:35.764 [2024-12-03 00:56:48.114589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:23264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.764 [2024-12-03 00:56:48.114604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:35.764 [2024-12-03 00:56:48.117538] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a3bd10) 00:22:35.764 [2024-12-03 00:56:48.117567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.764 [2024-12-03 00:56:48.117590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:35.764 [2024-12-03 00:56:48.121318] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a3bd10) 00:22:35.764 [2024-12-03 00:56:48.121349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:2912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.764 [2024-12-03 00:56:48.121370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:35.764 [2024-12-03 00:56:48.124983] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a3bd10) 00:22:35.764 [2024-12-03 00:56:48.125014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:4736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.764 [2024-12-03 00:56:48.125025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:35.764 [2024-12-03 00:56:48.128952] 
nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a3bd10) 00:22:35.764 [2024-12-03 00:56:48.128983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.764 [2024-12-03 00:56:48.129006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:35.764 [2024-12-03 00:56:48.133003] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a3bd10) 00:22:35.764 [2024-12-03 00:56:48.133034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.764 [2024-12-03 00:56:48.133058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:35.764 [2024-12-03 00:56:48.136688] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a3bd10) 00:22:35.764 [2024-12-03 00:56:48.136717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:18528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.764 [2024-12-03 00:56:48.136739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:35.764 [2024-12-03 00:56:48.141192] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a3bd10) 00:22:35.764 [2024-12-03 00:56:48.141222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:23040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.765 [2024-12-03 00:56:48.141245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:35.765 [2024-12-03 00:56:48.144989] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a3bd10) 00:22:35.765 [2024-12-03 00:56:48.145020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:23904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.765 [2024-12-03 00:56:48.145043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:35.765 [2024-12-03 00:56:48.148684] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a3bd10) 00:22:35.765 [2024-12-03 00:56:48.148717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:12576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.765 [2024-12-03 00:56:48.148738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:35.765 [2024-12-03 00:56:48.152163] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a3bd10) 00:22:35.765 [2024-12-03 00:56:48.152195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:2048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.765 [2024-12-03 00:56:48.152218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 
00:22:35.765 [2024-12-03 00:56:48.156024] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a3bd10) 00:22:35.765 [2024-12-03 00:56:48.156056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.765 [2024-12-03 00:56:48.156077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:35.765 [2024-12-03 00:56:48.159470] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a3bd10) 00:22:35.765 [2024-12-03 00:56:48.159501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:21728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.765 [2024-12-03 00:56:48.159520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:35.765 [2024-12-03 00:56:48.162927] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a3bd10) 00:22:35.765 [2024-12-03 00:56:48.162959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.765 [2024-12-03 00:56:48.162970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:35.765 [2024-12-03 00:56:48.166181] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a3bd10) 00:22:35.765 [2024-12-03 00:56:48.166212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.765 [2024-12-03 00:56:48.166223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:35.765 [2024-12-03 00:56:48.169547] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a3bd10) 00:22:35.765 [2024-12-03 00:56:48.169579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.765 [2024-12-03 00:56:48.169601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:35.765 [2024-12-03 00:56:48.173072] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a3bd10) 00:22:35.765 [2024-12-03 00:56:48.173103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:11520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.765 [2024-12-03 00:56:48.173125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:35.765 [2024-12-03 00:56:48.176923] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a3bd10) 00:22:35.765 [2024-12-03 00:56:48.176955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.765 [2024-12-03 00:56:48.176965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:35.765 [2024-12-03 00:56:48.180358] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a3bd10) 00:22:35.765 [2024-12-03 00:56:48.180391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:21664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.765 [2024-12-03 00:56:48.180422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:35.765 [2024-12-03 00:56:48.183909] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a3bd10) 00:22:35.765 [2024-12-03 00:56:48.183942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.765 [2024-12-03 00:56:48.183952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:35.765 [2024-12-03 00:56:48.187515] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a3bd10) 00:22:35.765 [2024-12-03 00:56:48.187548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:18176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.765 [2024-12-03 00:56:48.187568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:35.765 [2024-12-03 00:56:48.190642] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a3bd10) 00:22:35.765 [2024-12-03 00:56:48.190674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.765 [2024-12-03 00:56:48.190694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:35.765 [2024-12-03 00:56:48.194606] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a3bd10) 00:22:35.765 [2024-12-03 00:56:48.194637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:20384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.765 [2024-12-03 00:56:48.194656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:35.765 [2024-12-03 00:56:48.198069] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a3bd10) 00:22:35.765 [2024-12-03 00:56:48.198100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:20960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.765 [2024-12-03 00:56:48.198122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:35.765 [2024-12-03 00:56:48.202002] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a3bd10) 00:22:35.765 [2024-12-03 00:56:48.202033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:11616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.765 [2024-12-03 00:56:48.202043] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:35.765 [2024-12-03 00:56:48.205802] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a3bd10) 00:22:35.765 [2024-12-03 00:56:48.205833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:7808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.765 [2024-12-03 00:56:48.205843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:35.765 [2024-12-03 00:56:48.208582] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a3bd10) 00:22:35.765 [2024-12-03 00:56:48.208613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:22752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.765 [2024-12-03 00:56:48.208636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:35.765 [2024-12-03 00:56:48.212982] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a3bd10) 00:22:35.765 [2024-12-03 00:56:48.213014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.765 [2024-12-03 00:56:48.213024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:35.765 [2024-12-03 00:56:48.215979] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a3bd10) 00:22:35.765 [2024-12-03 00:56:48.216024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:11648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.765 [2024-12-03 00:56:48.216035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:35.765 [2024-12-03 00:56:48.219777] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a3bd10) 00:22:35.765 [2024-12-03 00:56:48.219823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:6304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.765 [2024-12-03 00:56:48.219835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:35.765 [2024-12-03 00:56:48.223063] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a3bd10) 00:22:35.765 [2024-12-03 00:56:48.223094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:24640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.765 [2024-12-03 00:56:48.223104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:35.765 [2024-12-03 00:56:48.227760] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a3bd10) 00:22:35.765 [2024-12-03 00:56:48.227802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:7040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.765 
[2024-12-03 00:56:48.227822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:35.765 [2024-12-03 00:56:48.231979] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a3bd10) 00:22:35.765 [2024-12-03 00:56:48.232021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:18912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.765 [2024-12-03 00:56:48.232044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:35.765 [2024-12-03 00:56:48.235602] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a3bd10) 00:22:35.765 [2024-12-03 00:56:48.235634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:2208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.765 [2024-12-03 00:56:48.235654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:35.765 [2024-12-03 00:56:48.239664] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a3bd10) 00:22:35.765 [2024-12-03 00:56:48.239696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:9216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.766 [2024-12-03 00:56:48.239707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:35.766 [2024-12-03 00:56:48.243731] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a3bd10) 00:22:35.766 [2024-12-03 00:56:48.243771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:6656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.766 [2024-12-03 00:56:48.243781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:35.766 [2024-12-03 00:56:48.247831] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a3bd10) 00:22:35.766 [2024-12-03 00:56:48.247862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:12256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.766 [2024-12-03 00:56:48.247873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:35.766 [2024-12-03 00:56:48.251784] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a3bd10) 00:22:35.766 [2024-12-03 00:56:48.251815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:20320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.766 [2024-12-03 00:56:48.251826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:35.766 [2024-12-03 00:56:48.255217] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a3bd10) 00:22:35.766 [2024-12-03 00:56:48.255248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:19424 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.766 [2024-12-03 00:56:48.255271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:35.766 [2024-12-03 00:56:48.258983] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a3bd10) 00:22:35.766 [2024-12-03 00:56:48.259014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:7584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.766 [2024-12-03 00:56:48.259025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:35.766 [2024-12-03 00:56:48.262663] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a3bd10) 00:22:35.766 [2024-12-03 00:56:48.262695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:5664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.766 [2024-12-03 00:56:48.262715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:35.766 [2024-12-03 00:56:48.266049] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a3bd10) 00:22:35.766 [2024-12-03 00:56:48.266079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.766 [2024-12-03 00:56:48.266102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:35.766 [2024-12-03 00:56:48.269533] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a3bd10) 00:22:35.766 [2024-12-03 00:56:48.269565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:22976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.766 [2024-12-03 00:56:48.269587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:35.766 [2024-12-03 00:56:48.273380] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a3bd10) 00:22:35.766 [2024-12-03 00:56:48.273434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:15296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.766 [2024-12-03 00:56:48.273457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:36.027 [2024-12-03 00:56:48.276697] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a3bd10) 00:22:36.027 [2024-12-03 00:56:48.276737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:24992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.027 [2024-12-03 00:56:48.276761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:36.027 [2024-12-03 00:56:48.280780] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a3bd10) 00:22:36.027 [2024-12-03 00:56:48.280812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:6 nsid:1 lba:4224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.027 [2024-12-03 00:56:48.280822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:36.027 [2024-12-03 00:56:48.284663] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a3bd10) 00:22:36.027 [2024-12-03 00:56:48.284694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:8608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.027 [2024-12-03 00:56:48.284717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:36.027 [2024-12-03 00:56:48.288247] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a3bd10) 00:22:36.027 [2024-12-03 00:56:48.288278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.027 [2024-12-03 00:56:48.288288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:36.027 [2024-12-03 00:56:48.291522] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a3bd10) 00:22:36.027 [2024-12-03 00:56:48.291554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:13632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.028 [2024-12-03 00:56:48.291577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:36.028 [2024-12-03 00:56:48.294724] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a3bd10) 00:22:36.028 [2024-12-03 00:56:48.294756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.028 [2024-12-03 00:56:48.294767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:36.028 [2024-12-03 00:56:48.298697] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a3bd10) 00:22:36.028 [2024-12-03 00:56:48.298728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:19840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.028 [2024-12-03 00:56:48.298750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:36.028 [2024-12-03 00:56:48.301815] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a3bd10) 00:22:36.028 [2024-12-03 00:56:48.301846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.028 [2024-12-03 00:56:48.301856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:36.028 [2024-12-03 00:56:48.305381] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a3bd10) 00:22:36.028 [2024-12-03 00:56:48.305431] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:13888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.028 [2024-12-03 00:56:48.305443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:36.028 [2024-12-03 00:56:48.308890] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a3bd10) 00:22:36.028 [2024-12-03 00:56:48.308921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:23328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.028 [2024-12-03 00:56:48.308945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:36.028 [2024-12-03 00:56:48.312780] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a3bd10) 00:22:36.028 [2024-12-03 00:56:48.312813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:12608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.028 [2024-12-03 00:56:48.312824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:36.028 [2024-12-03 00:56:48.316191] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a3bd10) 00:22:36.028 [2024-12-03 00:56:48.316223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:15680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.028 [2024-12-03 00:56:48.316234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:36.028 [2024-12-03 00:56:48.319605] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a3bd10) 00:22:36.028 [2024-12-03 00:56:48.319654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.028 [2024-12-03 00:56:48.319676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:36.028 [2024-12-03 00:56:48.323082] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a3bd10) 00:22:36.028 [2024-12-03 00:56:48.323113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.028 [2024-12-03 00:56:48.323137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:36.028 [2024-12-03 00:56:48.326347] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a3bd10) 00:22:36.028 [2024-12-03 00:56:48.326380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:14016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.028 [2024-12-03 00:56:48.326401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:36.028 [2024-12-03 00:56:48.329759] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a3bd10) 
00:22:36.028 [2024-12-03 00:56:48.329790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:17312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.028 [2024-12-03 00:56:48.329813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:36.028 [2024-12-03 00:56:48.333522] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a3bd10) 00:22:36.028 [2024-12-03 00:56:48.333554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:8512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.028 [2024-12-03 00:56:48.333579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:36.028 [2024-12-03 00:56:48.336782] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a3bd10) 00:22:36.028 [2024-12-03 00:56:48.336814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.028 [2024-12-03 00:56:48.336836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:36.028 [2024-12-03 00:56:48.340763] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a3bd10) 00:22:36.028 [2024-12-03 00:56:48.340796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:10784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.028 [2024-12-03 00:56:48.340807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:36.028 [2024-12-03 00:56:48.344002] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a3bd10) 00:22:36.028 [2024-12-03 00:56:48.344034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.028 [2024-12-03 00:56:48.344055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:36.028 [2024-12-03 00:56:48.347685] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a3bd10) 00:22:36.028 [2024-12-03 00:56:48.347718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:8032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.028 [2024-12-03 00:56:48.347737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:36.028 [2024-12-03 00:56:48.350788] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a3bd10) 00:22:36.028 [2024-12-03 00:56:48.350819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:19424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.028 [2024-12-03 00:56:48.350830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:36.028 [2024-12-03 00:56:48.354561] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: 
*ERROR*: data digest error on tqpair=(0x1a3bd10) 00:22:36.028 [2024-12-03 00:56:48.354593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:10528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.028 [2024-12-03 00:56:48.354604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:36.028 [2024-12-03 00:56:48.358369] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a3bd10) 00:22:36.028 [2024-12-03 00:56:48.358402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:1248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.028 [2024-12-03 00:56:48.358424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:36.028 [2024-12-03 00:56:48.361608] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a3bd10) 00:22:36.028 [2024-12-03 00:56:48.361639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:9792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.028 [2024-12-03 00:56:48.361661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:36.028 [2024-12-03 00:56:48.364767] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a3bd10) 00:22:36.028 [2024-12-03 00:56:48.364800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:21280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.028 [2024-12-03 00:56:48.364819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:36.028 [2024-12-03 00:56:48.368801] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a3bd10) 00:22:36.028 [2024-12-03 00:56:48.368832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:23456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.028 [2024-12-03 00:56:48.368843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:36.028 [2024-12-03 00:56:48.372459] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a3bd10) 00:22:36.028 [2024-12-03 00:56:48.372489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:11424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.028 [2024-12-03 00:56:48.372509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:36.028 [2024-12-03 00:56:48.375775] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a3bd10) 00:22:36.028 [2024-12-03 00:56:48.375806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:19744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.028 [2024-12-03 00:56:48.375829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:36.028 [2024-12-03 00:56:48.379727] 
nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a3bd10) 00:22:36.028 [2024-12-03 00:56:48.379759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:9920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.028 [2024-12-03 00:56:48.379780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:36.028 [2024-12-03 00:56:48.383301] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a3bd10) 00:22:36.028 [2024-12-03 00:56:48.383333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:2016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.028 [2024-12-03 00:56:48.383343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:36.029 [2024-12-03 00:56:48.387664] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a3bd10) 00:22:36.029 [2024-12-03 00:56:48.387697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:10848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.029 [2024-12-03 00:56:48.387720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:36.029 [2024-12-03 00:56:48.390732] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a3bd10) 00:22:36.029 [2024-12-03 00:56:48.390764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.029 [2024-12-03 00:56:48.390787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:36.029 [2024-12-03 00:56:48.394035] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a3bd10) 00:22:36.029 [2024-12-03 00:56:48.394066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.029 [2024-12-03 00:56:48.394089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:36.029 [2024-12-03 00:56:48.397710] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a3bd10) 00:22:36.029 [2024-12-03 00:56:48.397741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.029 [2024-12-03 00:56:48.397762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:36.029 [2024-12-03 00:56:48.401308] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a3bd10) 00:22:36.029 [2024-12-03 00:56:48.401341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:24960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.029 [2024-12-03 00:56:48.401362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 
dnr:0 00:22:36.029 [2024-12-03 00:56:48.404989] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a3bd10) 00:22:36.029 [2024-12-03 00:56:48.405020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:21504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.029 [2024-12-03 00:56:48.405043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:36.029 [2024-12-03 00:56:48.408162] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a3bd10) 00:22:36.029 [2024-12-03 00:56:48.408193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.029 [2024-12-03 00:56:48.408217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:36.029 [2024-12-03 00:56:48.411224] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a3bd10) 00:22:36.029 [2024-12-03 00:56:48.411255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:24256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.029 [2024-12-03 00:56:48.411278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:36.029 [2024-12-03 00:56:48.415260] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a3bd10) 00:22:36.029 [2024-12-03 00:56:48.415291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.029 [2024-12-03 00:56:48.415314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:36.029 [2024-12-03 00:56:48.418689] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a3bd10) 00:22:36.029 [2024-12-03 00:56:48.418722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:12256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.029 [2024-12-03 00:56:48.418742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:36.029 [2024-12-03 00:56:48.421995] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a3bd10) 00:22:36.029 [2024-12-03 00:56:48.422026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.029 [2024-12-03 00:56:48.422037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:36.029 [2024-12-03 00:56:48.425505] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a3bd10) 00:22:36.029 [2024-12-03 00:56:48.425536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:1856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.029 [2024-12-03 00:56:48.425559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:36.029 [2024-12-03 00:56:48.429358] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a3bd10) 00:22:36.029 [2024-12-03 00:56:48.429390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:18432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.029 [2024-12-03 00:56:48.429401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:36.029 [2024-12-03 00:56:48.432586] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a3bd10) 00:22:36.029 [2024-12-03 00:56:48.432618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:12160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.029 [2024-12-03 00:56:48.432641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:36.029 [2024-12-03 00:56:48.436490] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a3bd10) 00:22:36.029 [2024-12-03 00:56:48.436519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.029 [2024-12-03 00:56:48.436542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:36.029 [2024-12-03 00:56:48.440068] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a3bd10) 00:22:36.029 [2024-12-03 00:56:48.440100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:14624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.029 [2024-12-03 00:56:48.440121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:36.029 [2024-12-03 00:56:48.444273] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a3bd10) 00:22:36.029 [2024-12-03 00:56:48.444315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:12320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.029 [2024-12-03 00:56:48.444335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:36.029 [2024-12-03 00:56:48.447804] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a3bd10) 00:22:36.029 [2024-12-03 00:56:48.447836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:24480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.029 [2024-12-03 00:56:48.447846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:36.029 [2024-12-03 00:56:48.450826] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a3bd10) 00:22:36.029 [2024-12-03 00:56:48.450857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:3680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.029 [2024-12-03 00:56:48.450868] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:36.029 [2024-12-03 00:56:48.454932] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a3bd10) 00:22:36.029 [2024-12-03 00:56:48.454963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:4160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.029 [2024-12-03 00:56:48.454986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:36.029 [2024-12-03 00:56:48.458378] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a3bd10) 00:22:36.029 [2024-12-03 00:56:48.458421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:22368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.029 [2024-12-03 00:56:48.458434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:36.029 [2024-12-03 00:56:48.462272] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a3bd10) 00:22:36.029 [2024-12-03 00:56:48.462303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:11040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.029 [2024-12-03 00:56:48.462323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:36.029 [2024-12-03 00:56:48.465742] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a3bd10) 00:22:36.029 [2024-12-03 00:56:48.465772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:9280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.029 [2024-12-03 00:56:48.465792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:36.029 [2024-12-03 00:56:48.470015] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a3bd10) 00:22:36.029 [2024-12-03 00:56:48.470046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:5760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.029 [2024-12-03 00:56:48.470067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:36.029 [2024-12-03 00:56:48.473629] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a3bd10) 00:22:36.029 [2024-12-03 00:56:48.473660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:15136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.029 [2024-12-03 00:56:48.473681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:36.029 [2024-12-03 00:56:48.476454] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a3bd10) 00:22:36.029 [2024-12-03 00:56:48.476483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:15136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.029 [2024-12-03 00:56:48.476493] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:36.029 [2024-12-03 00:56:48.480385] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a3bd10) 00:22:36.029 [2024-12-03 00:56:48.480425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:11264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.030 [2024-12-03 00:56:48.480446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:36.030 [2024-12-03 00:56:48.484236] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a3bd10) 00:22:36.030 [2024-12-03 00:56:48.484266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:7136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.030 [2024-12-03 00:56:48.484277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:36.030 [2024-12-03 00:56:48.487989] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a3bd10) 00:22:36.030 [2024-12-03 00:56:48.488021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:23328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.030 [2024-12-03 00:56:48.488044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:36.030 [2024-12-03 00:56:48.491561] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a3bd10) 00:22:36.030 [2024-12-03 00:56:48.491593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:5760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.030 [2024-12-03 00:56:48.491614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:36.030 [2024-12-03 00:56:48.495176] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a3bd10) 00:22:36.030 [2024-12-03 00:56:48.495208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:15232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.030 [2024-12-03 00:56:48.495229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:36.030 [2024-12-03 00:56:48.498784] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a3bd10) 00:22:36.030 [2024-12-03 00:56:48.498815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:15968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.030 [2024-12-03 00:56:48.498839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:36.030 [2024-12-03 00:56:48.501488] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a3bd10) 00:22:36.030 [2024-12-03 00:56:48.501518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:9344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:22:36.030 [2024-12-03 00:56:48.501538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:36.030 [2024-12-03 00:56:48.505771] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a3bd10) 00:22:36.030 [2024-12-03 00:56:48.505802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:11808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.030 [2024-12-03 00:56:48.505825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:36.030 [2024-12-03 00:56:48.509144] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a3bd10) 00:22:36.030 [2024-12-03 00:56:48.509175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:14240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.030 [2024-12-03 00:56:48.509199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:36.030 [2024-12-03 00:56:48.513076] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a3bd10) 00:22:36.030 [2024-12-03 00:56:48.513108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:11744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.030 [2024-12-03 00:56:48.513129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:36.030 [2024-12-03 00:56:48.516672] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a3bd10) 00:22:36.030 [2024-12-03 00:56:48.516703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:1792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.030 [2024-12-03 00:56:48.516725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:36.030 [2024-12-03 00:56:48.520535] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a3bd10) 00:22:36.030 [2024-12-03 00:56:48.520566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:23808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.030 [2024-12-03 00:56:48.520588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:36.030 [2024-12-03 00:56:48.524109] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a3bd10) 00:22:36.030 [2024-12-03 00:56:48.524142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.030 [2024-12-03 00:56:48.524164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:36.030 [2024-12-03 00:56:48.528052] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a3bd10) 00:22:36.030 [2024-12-03 00:56:48.528083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:20576 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.030 [2024-12-03 00:56:48.528094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:36.030 [2024-12-03 00:56:48.531370] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a3bd10) 00:22:36.030 [2024-12-03 00:56:48.531401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:22400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.030 [2024-12-03 00:56:48.531432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:36.030 [2024-12-03 00:56:48.534974] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a3bd10) 00:22:36.030 [2024-12-03 00:56:48.535018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:13312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.030 [2024-12-03 00:56:48.535038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:36.030 [2024-12-03 00:56:48.539068] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a3bd10) 00:22:36.030 [2024-12-03 00:56:48.539098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:7392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.030 [2024-12-03 00:56:48.539121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:36.291 [2024-12-03 00:56:48.542083] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a3bd10) 00:22:36.291 [2024-12-03 00:56:48.542112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:20896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.291 [2024-12-03 00:56:48.542136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:36.291 [2024-12-03 00:56:48.546191] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a3bd10) 00:22:36.291 [2024-12-03 00:56:48.546227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:10784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.291 [2024-12-03 00:56:48.546246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:36.291 [2024-12-03 00:56:48.549811] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a3bd10) 00:22:36.291 [2024-12-03 00:56:48.549842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.291 [2024-12-03 00:56:48.549863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:36.291 [2024-12-03 00:56:48.553531] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a3bd10) 00:22:36.291 [2024-12-03 00:56:48.553563] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:23808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.291 [2024-12-03 00:56:48.553586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:36.291 [2024-12-03 00:56:48.557296] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a3bd10) 00:22:36.291 [2024-12-03 00:56:48.557328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:11200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.291 [2024-12-03 00:56:48.557349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:36.291 [2024-12-03 00:56:48.560545] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a3bd10) 00:22:36.291 [2024-12-03 00:56:48.560576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:13280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.291 [2024-12-03 00:56:48.560596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:36.291 [2024-12-03 00:56:48.563934] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a3bd10) 00:22:36.291 [2024-12-03 00:56:48.563965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:11328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.291 [2024-12-03 00:56:48.563987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:36.291 [2024-12-03 00:56:48.567389] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a3bd10) 00:22:36.291 [2024-12-03 00:56:48.567432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.291 [2024-12-03 00:56:48.567452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:36.291 [2024-12-03 00:56:48.571114] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a3bd10) 00:22:36.291 [2024-12-03 00:56:48.571146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.291 [2024-12-03 00:56:48.571168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:36.291 [2024-12-03 00:56:48.574914] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a3bd10) 00:22:36.291 [2024-12-03 00:56:48.574946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.291 [2024-12-03 00:56:48.574969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:36.291 [2024-12-03 00:56:48.578769] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a3bd10) 
00:22:36.291 [2024-12-03 00:56:48.578800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:6496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.291 [2024-12-03 00:56:48.578823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:36.291 [2024-12-03 00:56:48.581535] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a3bd10) 00:22:36.291 [2024-12-03 00:56:48.581565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:12864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.291 [2024-12-03 00:56:48.581587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:36.291 [2024-12-03 00:56:48.585379] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a3bd10) 00:22:36.291 [2024-12-03 00:56:48.585434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:11328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.291 [2024-12-03 00:56:48.585447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:36.291 [2024-12-03 00:56:48.588420] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a3bd10) 00:22:36.291 [2024-12-03 00:56:48.588448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:1248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.291 [2024-12-03 00:56:48.588458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:36.291 [2024-12-03 00:56:48.592241] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a3bd10) 00:22:36.291 [2024-12-03 00:56:48.592271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:9856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.291 [2024-12-03 00:56:48.592294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:36.292 [2024-12-03 00:56:48.596189] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a3bd10) 00:22:36.292 [2024-12-03 00:56:48.596220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:3680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.292 [2024-12-03 00:56:48.596242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:36.292 [2024-12-03 00:56:48.600159] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a3bd10) 00:22:36.292 [2024-12-03 00:56:48.600191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:19552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.292 [2024-12-03 00:56:48.600214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:36.292 [2024-12-03 00:56:48.603748] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: 
data digest error on tqpair=(0x1a3bd10) 00:22:36.292 [2024-12-03 00:56:48.603780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.292 [2024-12-03 00:56:48.603804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:36.292 [2024-12-03 00:56:48.607288] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a3bd10) 00:22:36.292 [2024-12-03 00:56:48.607320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:20032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.292 [2024-12-03 00:56:48.607342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:36.292 [2024-12-03 00:56:48.610898] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a3bd10) 00:22:36.292 [2024-12-03 00:56:48.610929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:25056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.292 [2024-12-03 00:56:48.610940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:36.292 [2024-12-03 00:56:48.614019] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a3bd10) 00:22:36.292 [2024-12-03 00:56:48.614049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:19776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.292 [2024-12-03 00:56:48.614071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:36.292 [2024-12-03 00:56:48.617555] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a3bd10) 00:22:36.292 [2024-12-03 00:56:48.617587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.292 [2024-12-03 00:56:48.617609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:36.292 [2024-12-03 00:56:48.621285] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a3bd10) 00:22:36.292 [2024-12-03 00:56:48.621317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.292 [2024-12-03 00:56:48.621341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:36.292 [2024-12-03 00:56:48.624944] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a3bd10) 00:22:36.292 [2024-12-03 00:56:48.624977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.292 [2024-12-03 00:56:48.624998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:36.292 [2024-12-03 00:56:48.628908] 
nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a3bd10) 00:22:36.292 [2024-12-03 00:56:48.628939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:23200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.292 [2024-12-03 00:56:48.628949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:36.292 [2024-12-03 00:56:48.632666] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a3bd10) 00:22:36.292 [2024-12-03 00:56:48.632697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:4992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.292 [2024-12-03 00:56:48.632719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:36.292 [2024-12-03 00:56:48.636587] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a3bd10) 00:22:36.292 [2024-12-03 00:56:48.636617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:7264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.292 [2024-12-03 00:56:48.636627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:36.292 [2024-12-03 00:56:48.639344] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a3bd10) 00:22:36.292 [2024-12-03 00:56:48.639375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:10048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.292 [2024-12-03 00:56:48.639396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:36.292 [2024-12-03 00:56:48.643256] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a3bd10) 00:22:36.292 [2024-12-03 00:56:48.643286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:20032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.292 [2024-12-03 00:56:48.643309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:36.292 [2024-12-03 00:56:48.647397] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a3bd10) 00:22:36.292 [2024-12-03 00:56:48.647435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:20192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.292 [2024-12-03 00:56:48.647455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:36.292 [2024-12-03 00:56:48.651333] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a3bd10) 00:22:36.292 [2024-12-03 00:56:48.651364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:10848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.292 [2024-12-03 00:56:48.651385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 
00:22:36.292 [2024-12-03 00:56:48.655342] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a3bd10) 00:22:36.292 [2024-12-03 00:56:48.655374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:20448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.292 [2024-12-03 00:56:48.655397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:36.292 [2024-12-03 00:56:48.658969] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a3bd10) 00:22:36.292 [2024-12-03 00:56:48.659001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:5216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.292 [2024-12-03 00:56:48.659024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:36.292 [2024-12-03 00:56:48.662307] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a3bd10) 00:22:36.292 [2024-12-03 00:56:48.662338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:17312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.292 [2024-12-03 00:56:48.662358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:36.292 [2024-12-03 00:56:48.665173] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a3bd10) 00:22:36.292 [2024-12-03 00:56:48.665203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:13088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.292 [2024-12-03 00:56:48.665224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:36.292 [2024-12-03 00:56:48.668867] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a3bd10) 00:22:36.292 [2024-12-03 00:56:48.668899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.292 [2024-12-03 00:56:48.668920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:36.292 [2024-12-03 00:56:48.672765] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a3bd10) 00:22:36.292 [2024-12-03 00:56:48.672797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.292 [2024-12-03 00:56:48.672807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:36.292 [2024-12-03 00:56:48.676487] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a3bd10) 00:22:36.292 [2024-12-03 00:56:48.676518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.292 [2024-12-03 00:56:48.676529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:36.292 [2024-12-03 00:56:48.680554] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a3bd10) 00:22:36.292 [2024-12-03 00:56:48.680586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.292 [2024-12-03 00:56:48.680607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:36.292 [2024-12-03 00:56:48.683851] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a3bd10) 00:22:36.292 [2024-12-03 00:56:48.683883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.292 [2024-12-03 00:56:48.683893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:36.292 [2024-12-03 00:56:48.686967] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a3bd10) 00:22:36.293 [2024-12-03 00:56:48.686997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:14720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.293 [2024-12-03 00:56:48.687020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:36.293 [2024-12-03 00:56:48.690491] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a3bd10) 00:22:36.293 [2024-12-03 00:56:48.690539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:9408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.293 [2024-12-03 00:56:48.690560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:36.293 [2024-12-03 00:56:48.694002] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a3bd10) 00:22:36.293 [2024-12-03 00:56:48.694032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:7648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.293 [2024-12-03 00:56:48.694043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:36.293 [2024-12-03 00:56:48.697500] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a3bd10) 00:22:36.293 [2024-12-03 00:56:48.697532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:20864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.293 [2024-12-03 00:56:48.697553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:36.293 [2024-12-03 00:56:48.700740] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a3bd10) 00:22:36.293 [2024-12-03 00:56:48.700771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:22112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.293 [2024-12-03 00:56:48.700793] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:36.293 [2024-12-03 00:56:48.704354] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a3bd10) 00:22:36.293 [2024-12-03 00:56:48.704386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:17888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.293 [2024-12-03 00:56:48.704421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:36.293 [2024-12-03 00:56:48.707833] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a3bd10) 00:22:36.293 [2024-12-03 00:56:48.707864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:3072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.293 [2024-12-03 00:56:48.707886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:36.293 [2024-12-03 00:56:48.711128] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a3bd10) 00:22:36.293 [2024-12-03 00:56:48.711160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:13792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.293 [2024-12-03 00:56:48.711182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:36.293 [2024-12-03 00:56:48.714510] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a3bd10) 00:22:36.293 [2024-12-03 00:56:48.714551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:13696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.293 [2024-12-03 00:56:48.714569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:36.293 [2024-12-03 00:56:48.718045] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a3bd10) 00:22:36.293 [2024-12-03 00:56:48.718076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:15936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.293 [2024-12-03 00:56:48.718096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:36.293 [2024-12-03 00:56:48.721715] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a3bd10) 00:22:36.293 [2024-12-03 00:56:48.721746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:10368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.293 [2024-12-03 00:56:48.721767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:36.293 [2024-12-03 00:56:48.725020] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a3bd10) 00:22:36.293 [2024-12-03 00:56:48.725051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:7520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.293 [2024-12-03 00:56:48.725074] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:36.293 [2024-12-03 00:56:48.728794] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a3bd10) 00:22:36.293 [2024-12-03 00:56:48.728826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:14464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.293 [2024-12-03 00:56:48.728837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:36.293 [2024-12-03 00:56:48.731740] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a3bd10) 00:22:36.293 [2024-12-03 00:56:48.731771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:11008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.293 [2024-12-03 00:56:48.731782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:36.293 [2024-12-03 00:56:48.735874] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a3bd10) 00:22:36.293 [2024-12-03 00:56:48.735905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.293 [2024-12-03 00:56:48.735929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:36.293 [2024-12-03 00:56:48.738899] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a3bd10) 00:22:36.293 [2024-12-03 00:56:48.738931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:13408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.293 [2024-12-03 00:56:48.738953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:36.293 [2024-12-03 00:56:48.742365] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a3bd10) 00:22:36.293 [2024-12-03 00:56:48.742397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:6048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.293 [2024-12-03 00:56:48.742428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:36.293 [2024-12-03 00:56:48.745634] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a3bd10) 00:22:36.293 [2024-12-03 00:56:48.745665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.293 [2024-12-03 00:56:48.745686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:36.293 [2024-12-03 00:56:48.749559] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a3bd10) 00:22:36.293 [2024-12-03 00:56:48.749592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:2592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:22:36.293 [2024-12-03 00:56:48.749611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:36.293 [2024-12-03 00:56:48.752577] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a3bd10) 00:22:36.293 [2024-12-03 00:56:48.752607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.293 [2024-12-03 00:56:48.752630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:36.293 [2024-12-03 00:56:48.756342] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a3bd10) 00:22:36.293 [2024-12-03 00:56:48.756373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:6304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.293 [2024-12-03 00:56:48.756396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:36.293 [2024-12-03 00:56:48.760357] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a3bd10) 00:22:36.293 [2024-12-03 00:56:48.760389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:10656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.293 [2024-12-03 00:56:48.760423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:36.293 [2024-12-03 00:56:48.764441] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a3bd10) 00:22:36.293 [2024-12-03 00:56:48.764472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.293 [2024-12-03 00:56:48.764495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:36.293 [2024-12-03 00:56:48.767656] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a3bd10) 00:22:36.293 [2024-12-03 00:56:48.767689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:14400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.293 [2024-12-03 00:56:48.767699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:36.293 [2024-12-03 00:56:48.770814] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a3bd10) 00:22:36.293 [2024-12-03 00:56:48.770846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:17280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.293 [2024-12-03 00:56:48.770869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:36.293 [2024-12-03 00:56:48.774597] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a3bd10) 00:22:36.293 [2024-12-03 00:56:48.774629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:3232 len:32 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.293 [2024-12-03 00:56:48.774640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:36.293 [2024-12-03 00:56:48.777630] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a3bd10) 00:22:36.293 [2024-12-03 00:56:48.777661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:23168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.293 [2024-12-03 00:56:48.777683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:36.293 [2024-12-03 00:56:48.781393] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a3bd10) 00:22:36.294 [2024-12-03 00:56:48.781435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:6208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.294 [2024-12-03 00:56:48.781455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:36.294 [2024-12-03 00:56:48.784951] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a3bd10) 00:22:36.294 [2024-12-03 00:56:48.784984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.294 [2024-12-03 00:56:48.785005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:36.294 [2024-12-03 00:56:48.788559] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a3bd10) 00:22:36.294 [2024-12-03 00:56:48.788591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:5536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.294 [2024-12-03 00:56:48.788612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:36.294 [2024-12-03 00:56:48.792466] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a3bd10) 00:22:36.294 [2024-12-03 00:56:48.792507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:16544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.294 [2024-12-03 00:56:48.792529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:36.294 [2024-12-03 00:56:48.795686] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a3bd10) 00:22:36.294 [2024-12-03 00:56:48.795716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:6496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.294 [2024-12-03 00:56:48.795737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:36.294 [2024-12-03 00:56:48.800597] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a3bd10) 00:22:36.294 [2024-12-03 00:56:48.800637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:4 nsid:1 lba:192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.294 [2024-12-03 00:56:48.800660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:36.555 [2024-12-03 00:56:48.805118] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a3bd10) 00:22:36.555 [2024-12-03 00:56:48.805148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:3168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.555 [2024-12-03 00:56:48.805159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:36.555 [2024-12-03 00:56:48.808479] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a3bd10) 00:22:36.555 [2024-12-03 00:56:48.808510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.555 [2024-12-03 00:56:48.808530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:36.555 [2024-12-03 00:56:48.812773] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a3bd10) 00:22:36.555 [2024-12-03 00:56:48.812811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:5664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.555 [2024-12-03 00:56:48.812821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:36.555 [2024-12-03 00:56:48.817441] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a3bd10) 00:22:36.555 [2024-12-03 00:56:48.817469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:17632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.555 [2024-12-03 00:56:48.817480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:36.555 [2024-12-03 00:56:48.820944] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a3bd10) 00:22:36.555 [2024-12-03 00:56:48.820975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.555 [2024-12-03 00:56:48.820986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:36.555 [2024-12-03 00:56:48.824987] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a3bd10) 00:22:36.555 [2024-12-03 00:56:48.825018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:10240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.555 [2024-12-03 00:56:48.825029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:36.555 [2024-12-03 00:56:48.828287] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a3bd10) 00:22:36.555 [2024-12-03 00:56:48.828331] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.555 [2024-12-03 00:56:48.828351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:36.555 [2024-12-03 00:56:48.832090] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a3bd10) 00:22:36.555 [2024-12-03 00:56:48.832134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.555 [2024-12-03 00:56:48.832154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:36.555 [2024-12-03 00:56:48.835403] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a3bd10) 00:22:36.555 [2024-12-03 00:56:48.835461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:20160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.555 [2024-12-03 00:56:48.835479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:36.555 [2024-12-03 00:56:48.839461] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a3bd10) 00:22:36.555 [2024-12-03 00:56:48.839491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:22784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.555 [2024-12-03 00:56:48.839502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:36.555 [2024-12-03 00:56:48.842920] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a3bd10) 00:22:36.555 [2024-12-03 00:56:48.842964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:8800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.555 [2024-12-03 00:56:48.842983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:36.555 [2024-12-03 00:56:48.846763] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a3bd10) 00:22:36.555 [2024-12-03 00:56:48.846795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:7200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.555 [2024-12-03 00:56:48.846813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:36.555 [2024-12-03 00:56:48.850751] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a3bd10) 00:22:36.555 [2024-12-03 00:56:48.850782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:7968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.555 [2024-12-03 00:56:48.850804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:36.555 [2024-12-03 00:56:48.854883] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a3bd10) 
00:22:36.555 [2024-12-03 00:56:48.854918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:1216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.555 [2024-12-03 00:56:48.854944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:36.555 [2024-12-03 00:56:48.858576] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a3bd10) 00:22:36.555 [2024-12-03 00:56:48.858609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:16576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.555 [2024-12-03 00:56:48.858629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:36.555 [2024-12-03 00:56:48.862015] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a3bd10) 00:22:36.555 [2024-12-03 00:56:48.862059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:3328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.555 [2024-12-03 00:56:48.862079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:36.556 [2024-12-03 00:56:48.865848] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a3bd10) 00:22:36.556 [2024-12-03 00:56:48.865881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.556 [2024-12-03 00:56:48.865892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:36.556 [2024-12-03 00:56:48.869421] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a3bd10) 00:22:36.556 [2024-12-03 00:56:48.869462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:12864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.556 [2024-12-03 00:56:48.869482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:36.556 [2024-12-03 00:56:48.872341] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a3bd10) 00:22:36.556 [2024-12-03 00:56:48.872383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:12416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.556 [2024-12-03 00:56:48.872404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:36.556 [2024-12-03 00:56:48.876549] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a3bd10) 00:22:36.556 [2024-12-03 00:56:48.876592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:12128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.556 [2024-12-03 00:56:48.876611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:36.556 [2024-12-03 00:56:48.879965] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: 
*ERROR*: data digest error on tqpair=(0x1a3bd10) 00:22:36.556 [2024-12-03 00:56:48.879997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:17344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.556 [2024-12-03 00:56:48.880007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:36.556 [2024-12-03 00:56:48.884154] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a3bd10) 00:22:36.556 [2024-12-03 00:56:48.884184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:1632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.556 [2024-12-03 00:56:48.884195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:36.556 [2024-12-03 00:56:48.888112] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a3bd10) 00:22:36.556 [2024-12-03 00:56:48.888143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:11776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.556 [2024-12-03 00:56:48.888153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:36.556 [2024-12-03 00:56:48.891919] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a3bd10) 00:22:36.556 [2024-12-03 00:56:48.891950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:22272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.556 [2024-12-03 00:56:48.891964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:36.556 [2024-12-03 00:56:48.895615] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a3bd10) 00:22:36.556 [2024-12-03 00:56:48.895647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.556 [2024-12-03 00:56:48.895667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:36.556 [2024-12-03 00:56:48.899522] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a3bd10) 00:22:36.556 [2024-12-03 00:56:48.899554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:16448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.556 [2024-12-03 00:56:48.899575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:36.556 [2024-12-03 00:56:48.902306] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a3bd10) 00:22:36.556 [2024-12-03 00:56:48.902338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:17376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.556 [2024-12-03 00:56:48.902349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:36.556 [2024-12-03 00:56:48.906133] 
nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a3bd10) 00:22:36.556 [2024-12-03 00:56:48.906181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:18944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.556 [2024-12-03 00:56:48.906197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:36.556 [2024-12-03 00:56:48.910483] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a3bd10) 00:22:36.556 [2024-12-03 00:56:48.910526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:5024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.556 [2024-12-03 00:56:48.910537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:36.556 [2024-12-03 00:56:48.914210] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a3bd10) 00:22:36.556 [2024-12-03 00:56:48.914241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.556 [2024-12-03 00:56:48.914262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:36.556 [2024-12-03 00:56:48.917070] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a3bd10) 00:22:36.556 [2024-12-03 00:56:48.917100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:5760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.556 [2024-12-03 00:56:48.917111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:36.556 [2024-12-03 00:56:48.920614] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a3bd10) 00:22:36.556 [2024-12-03 00:56:48.920648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:18528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.556 [2024-12-03 00:56:48.920669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:36.556 [2024-12-03 00:56:48.924259] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a3bd10) 00:22:36.556 [2024-12-03 00:56:48.924303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.556 [2024-12-03 00:56:48.924324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:36.556 [2024-12-03 00:56:48.928027] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a3bd10) 00:22:36.556 [2024-12-03 00:56:48.928066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.556 [2024-12-03 00:56:48.928090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 
dnr:0 00:22:36.556 [2024-12-03 00:56:48.932422] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a3bd10) 00:22:36.556 [2024-12-03 00:56:48.932451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:11296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.556 [2024-12-03 00:56:48.932461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:36.556 [2024-12-03 00:56:48.936049] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a3bd10) 00:22:36.556 [2024-12-03 00:56:48.936093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:19392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.556 [2024-12-03 00:56:48.936113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:36.556 [2024-12-03 00:56:48.939660] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a3bd10) 00:22:36.556 [2024-12-03 00:56:48.939692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:14848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.557 [2024-12-03 00:56:48.939703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:36.557 [2024-12-03 00:56:48.943565] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a3bd10) 00:22:36.557 [2024-12-03 00:56:48.943608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:19040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.557 [2024-12-03 00:56:48.943628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:36.557 [2024-12-03 00:56:48.947018] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a3bd10) 00:22:36.557 [2024-12-03 00:56:48.947056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.557 [2024-12-03 00:56:48.947067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:36.557 [2024-12-03 00:56:48.950718] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a3bd10) 00:22:36.557 [2024-12-03 00:56:48.950750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:13408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.557 [2024-12-03 00:56:48.950771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:36.557 [2024-12-03 00:56:48.954741] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a3bd10) 00:22:36.557 [2024-12-03 00:56:48.954774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:21184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.557 [2024-12-03 00:56:48.954795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:36.557 [2024-12-03 00:56:48.958283] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a3bd10) 00:22:36.557 [2024-12-03 00:56:48.958314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.557 [2024-12-03 00:56:48.958336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:36.557 [2024-12-03 00:56:48.962602] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a3bd10) 00:22:36.557 [2024-12-03 00:56:48.962635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:23712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.557 [2024-12-03 00:56:48.962653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:36.557 [2024-12-03 00:56:48.966836] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a3bd10) 00:22:36.557 [2024-12-03 00:56:48.966875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:5152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.557 [2024-12-03 00:56:48.966886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:36.557 [2024-12-03 00:56:48.970474] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a3bd10) 00:22:36.557 [2024-12-03 00:56:48.970505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:23232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.557 [2024-12-03 00:56:48.970515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:36.557 [2024-12-03 00:56:48.973545] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a3bd10) 00:22:36.557 [2024-12-03 00:56:48.973574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:21440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.557 [2024-12-03 00:56:48.973584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:36.557 [2024-12-03 00:56:48.976959] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a3bd10) 00:22:36.557 [2024-12-03 00:56:48.976997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:22144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.557 [2024-12-03 00:56:48.977018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:36.557 [2024-12-03 00:56:48.981143] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a3bd10) 00:22:36.557 [2024-12-03 00:56:48.981183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:23264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.557 [2024-12-03 00:56:48.981194] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:36.557 [2024-12-03 00:56:48.985070] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a3bd10) 00:22:36.557 [2024-12-03 00:56:48.985111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:7648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.557 [2024-12-03 00:56:48.985121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:36.557 [2024-12-03 00:56:48.988490] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a3bd10) 00:22:36.557 [2024-12-03 00:56:48.988519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:11456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.557 [2024-12-03 00:56:48.988540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:36.557 [2024-12-03 00:56:48.992274] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a3bd10) 00:22:36.557 [2024-12-03 00:56:48.992317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:21792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.557 [2024-12-03 00:56:48.992337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:36.557 [2024-12-03 00:56:48.995767] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a3bd10) 00:22:36.557 [2024-12-03 00:56:48.995800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:19072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.557 [2024-12-03 00:56:48.995810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:36.557 [2024-12-03 00:56:48.999200] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a3bd10) 00:22:36.557 [2024-12-03 00:56:48.999231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.557 [2024-12-03 00:56:48.999254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:36.557 [2024-12-03 00:56:49.002376] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a3bd10) 00:22:36.557 [2024-12-03 00:56:49.002408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.557 [2024-12-03 00:56:49.002432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:36.557 [2024-12-03 00:56:49.006344] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a3bd10) 00:22:36.557 [2024-12-03 00:56:49.006377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:22:36.557 [2024-12-03 00:56:49.006387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:36.557 [2024-12-03 00:56:49.010381] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a3bd10) 00:22:36.557 [2024-12-03 00:56:49.010423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.557 [2024-12-03 00:56:49.010435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:36.557 [2024-12-03 00:56:49.014702] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a3bd10) 00:22:36.557 [2024-12-03 00:56:49.014733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:21632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.557 [2024-12-03 00:56:49.014757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:36.557 [2024-12-03 00:56:49.018398] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a3bd10) 00:22:36.557 [2024-12-03 00:56:49.018439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.557 [2024-12-03 00:56:49.018460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:36.557 [2024-12-03 00:56:49.021779] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a3bd10) 00:22:36.558 [2024-12-03 00:56:49.021811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.558 [2024-12-03 00:56:49.021832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:36.558 [2024-12-03 00:56:49.025239] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a3bd10) 00:22:36.558 [2024-12-03 00:56:49.025271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.558 [2024-12-03 00:56:49.025295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:36.558 [2024-12-03 00:56:49.028156] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a3bd10) 00:22:36.558 [2024-12-03 00:56:49.028187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.558 [2024-12-03 00:56:49.028211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:36.558 [2024-12-03 00:56:49.032034] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a3bd10) 00:22:36.558 [2024-12-03 00:56:49.032065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 
lba:3744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.558 [2024-12-03 00:56:49.032086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:36.558 [2024-12-03 00:56:49.036326] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a3bd10) 00:22:36.558 [2024-12-03 00:56:49.036359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.558 [2024-12-03 00:56:49.036381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:36.558 [2024-12-03 00:56:49.039782] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a3bd10) 00:22:36.558 [2024-12-03 00:56:49.039812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:10336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.558 [2024-12-03 00:56:49.039834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:36.558 [2024-12-03 00:56:49.042704] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a3bd10) 00:22:36.558 [2024-12-03 00:56:49.042736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:5632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.558 [2024-12-03 00:56:49.042757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:36.558 [2024-12-03 00:56:49.046429] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a3bd10) 00:22:36.558 [2024-12-03 00:56:49.046459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:23008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.558 [2024-12-03 00:56:49.046479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:36.558 [2024-12-03 00:56:49.050629] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a3bd10) 00:22:36.558 [2024-12-03 00:56:49.050659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:22400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.558 [2024-12-03 00:56:49.050680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:36.558 [2024-12-03 00:56:49.054297] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a3bd10) 00:22:36.558 [2024-12-03 00:56:49.054328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:17184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.558 [2024-12-03 00:56:49.054348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:36.558 [2024-12-03 00:56:49.057586] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a3bd10) 00:22:36.558 [2024-12-03 00:56:49.057617] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:6464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.558 [2024-12-03 00:56:49.057641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:36.558 [2024-12-03 00:56:49.061477] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a3bd10) 00:22:36.558 [2024-12-03 00:56:49.061509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:6944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.558 [2024-12-03 00:56:49.061533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:36.558 [2024-12-03 00:56:49.065064] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a3bd10) 00:22:36.558 [2024-12-03 00:56:49.065096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:13632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.558 [2024-12-03 00:56:49.065110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:36.819 [2024-12-03 00:56:49.068488] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a3bd10) 00:22:36.819 [2024-12-03 00:56:49.068532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:4736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.819 [2024-12-03 00:56:49.068552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:36.819 [2024-12-03 00:56:49.072474] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a3bd10) 00:22:36.819 [2024-12-03 00:56:49.072505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:8736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.819 [2024-12-03 00:56:49.072529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:36.819 [2024-12-03 00:56:49.075892] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a3bd10) 00:22:36.819 [2024-12-03 00:56:49.075924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:16704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.819 [2024-12-03 00:56:49.075935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:36.819 [2024-12-03 00:56:49.080073] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a3bd10) 00:22:36.819 [2024-12-03 00:56:49.080105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:11392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.819 [2024-12-03 00:56:49.080128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:36.819 [2024-12-03 00:56:49.083570] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a3bd10) 00:22:36.819 
[2024-12-03 00:56:49.083603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:8352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.819 [2024-12-03 00:56:49.083624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:36.819 [2024-12-03 00:56:49.086971] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a3bd10) 00:22:36.819 [2024-12-03 00:56:49.087002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:15840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.819 [2024-12-03 00:56:49.087013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:36.819 [2024-12-03 00:56:49.089755] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a3bd10) 00:22:36.819 [2024-12-03 00:56:49.089786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:8480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.819 [2024-12-03 00:56:49.089808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:36.820 [2024-12-03 00:56:49.093304] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a3bd10) 00:22:36.820 [2024-12-03 00:56:49.093336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:20896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.820 [2024-12-03 00:56:49.093357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:36.820 [2024-12-03 00:56:49.097159] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a3bd10) 00:22:36.820 [2024-12-03 00:56:49.097192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:4352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.820 [2024-12-03 00:56:49.097215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:36.820 [2024-12-03 00:56:49.101120] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a3bd10) 00:22:36.820 [2024-12-03 00:56:49.101152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:1024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.820 [2024-12-03 00:56:49.101176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:36.820 [2024-12-03 00:56:49.105187] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a3bd10) 00:22:36.820 [2024-12-03 00:56:49.105218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:2496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.820 [2024-12-03 00:56:49.105229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:36.820 [2024-12-03 00:56:49.108717] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest 
error on tqpair=(0x1a3bd10) 00:22:36.820 [2024-12-03 00:56:49.108748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:14144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.820 [2024-12-03 00:56:49.108769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:36.820 [2024-12-03 00:56:49.111881] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a3bd10) 00:22:36.820 [2024-12-03 00:56:49.111913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:7648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.820 [2024-12-03 00:56:49.111934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:36.820 [2024-12-03 00:56:49.115600] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a3bd10) 00:22:36.820 [2024-12-03 00:56:49.115632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:10784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.820 [2024-12-03 00:56:49.115654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:36.820 [2024-12-03 00:56:49.119463] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a3bd10) 00:22:36.820 [2024-12-03 00:56:49.119493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:18400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.820 [2024-12-03 00:56:49.119516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:36.820 [2024-12-03 00:56:49.122870] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a3bd10) 00:22:36.820 [2024-12-03 00:56:49.122903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.820 [2024-12-03 00:56:49.122924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:36.820 [2024-12-03 00:56:49.126111] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a3bd10) 00:22:36.820 [2024-12-03 00:56:49.126142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:3872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.820 [2024-12-03 00:56:49.126165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:36.820 [2024-12-03 00:56:49.130007] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a3bd10) 00:22:36.820 [2024-12-03 00:56:49.130038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:3968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.820 [2024-12-03 00:56:49.130061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:36.820 [2024-12-03 00:56:49.133448] 
nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a3bd10) 00:22:36.820 [2024-12-03 00:56:49.133476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:24256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.820 [2024-12-03 00:56:49.133497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:36.820 [2024-12-03 00:56:49.137240] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a3bd10) 00:22:36.820 [2024-12-03 00:56:49.137270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:11104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.820 [2024-12-03 00:56:49.137292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:36.820 [2024-12-03 00:56:49.141297] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a3bd10) 00:22:36.820 [2024-12-03 00:56:49.141329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:2784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.820 [2024-12-03 00:56:49.141353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:36.820 [2024-12-03 00:56:49.145124] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a3bd10) 00:22:36.820 [2024-12-03 00:56:49.145155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:14752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.820 [2024-12-03 00:56:49.145178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:36.820 [2024-12-03 00:56:49.148423] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a3bd10) 00:22:36.820 [2024-12-03 00:56:49.148453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:21280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.820 [2024-12-03 00:56:49.148464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:36.820 [2024-12-03 00:56:49.151982] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a3bd10) 00:22:36.820 [2024-12-03 00:56:49.152015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:17600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.820 [2024-12-03 00:56:49.152037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:36.820 [2024-12-03 00:56:49.155858] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a3bd10) 00:22:36.820 [2024-12-03 00:56:49.155889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:1216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.820 [2024-12-03 00:56:49.155899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 
dnr:0 00:22:36.820 [2024-12-03 00:56:49.159253] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a3bd10) 00:22:36.820 [2024-12-03 00:56:49.159285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.820 [2024-12-03 00:56:49.159309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:36.820 [2024-12-03 00:56:49.162731] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a3bd10) 00:22:36.820 [2024-12-03 00:56:49.162763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:8192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.820 [2024-12-03 00:56:49.162773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:36.820 [2024-12-03 00:56:49.166264] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a3bd10) 00:22:36.820 [2024-12-03 00:56:49.166297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:7232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.820 [2024-12-03 00:56:49.166316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:36.821 [2024-12-03 00:56:49.169604] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a3bd10) 00:22:36.821 [2024-12-03 00:56:49.169634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:24288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.821 [2024-12-03 00:56:49.169644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:36.821 [2024-12-03 00:56:49.173292] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a3bd10) 00:22:36.821 [2024-12-03 00:56:49.173323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:25536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.821 [2024-12-03 00:56:49.173345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:36.821 [2024-12-03 00:56:49.176788] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a3bd10) 00:22:36.821 [2024-12-03 00:56:49.176820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.821 [2024-12-03 00:56:49.176831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:36.821 [2024-12-03 00:56:49.180238] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a3bd10) 00:22:36.821 [2024-12-03 00:56:49.180269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:12896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.821 [2024-12-03 00:56:49.180292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:36.821 [2024-12-03 00:56:49.184119] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a3bd10) 00:22:36.821 [2024-12-03 00:56:49.184152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:10496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.821 [2024-12-03 00:56:49.184174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:36.821 [2024-12-03 00:56:49.187471] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a3bd10) 00:22:36.821 [2024-12-03 00:56:49.187515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:6336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.821 [2024-12-03 00:56:49.187535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:36.821 [2024-12-03 00:56:49.191084] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a3bd10) 00:22:36.821 [2024-12-03 00:56:49.191115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.821 [2024-12-03 00:56:49.191138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:36.821 [2024-12-03 00:56:49.194253] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a3bd10) 00:22:36.821 [2024-12-03 00:56:49.194284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:21920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.821 [2024-12-03 00:56:49.194304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:36.821 [2024-12-03 00:56:49.197975] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a3bd10) 00:22:36.821 [2024-12-03 00:56:49.198005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:1344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.821 [2024-12-03 00:56:49.198028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:36.821 [2024-12-03 00:56:49.201795] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a3bd10) 00:22:36.821 [2024-12-03 00:56:49.201825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:11296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.821 [2024-12-03 00:56:49.201835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:36.821 [2024-12-03 00:56:49.205765] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a3bd10) 00:22:36.821 [2024-12-03 00:56:49.205795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:7360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.821 [2024-12-03 00:56:49.205806] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:36.821 [2024-12-03 00:56:49.209269] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a3bd10) 00:22:36.821 [2024-12-03 00:56:49.209300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:4640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.821 [2024-12-03 00:56:49.209311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:36.821 [2024-12-03 00:56:49.213056] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a3bd10) 00:22:36.821 [2024-12-03 00:56:49.213087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:15232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.821 [2024-12-03 00:56:49.213098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:36.821 [2024-12-03 00:56:49.216312] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a3bd10) 00:22:36.821 [2024-12-03 00:56:49.216343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:6528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.821 [2024-12-03 00:56:49.216366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:36.821 [2024-12-03 00:56:49.219645] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a3bd10) 00:22:36.821 [2024-12-03 00:56:49.219676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:3328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.821 [2024-12-03 00:56:49.219695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:36.821 [2024-12-03 00:56:49.223219] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a3bd10) 00:22:36.821 [2024-12-03 00:56:49.223252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:19104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.821 [2024-12-03 00:56:49.223274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:36.821 [2024-12-03 00:56:49.227081] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a3bd10) 00:22:36.821 [2024-12-03 00:56:49.227113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:7616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.821 [2024-12-03 00:56:49.227124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:36.821 [2024-12-03 00:56:49.230373] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a3bd10) 00:22:36.821 [2024-12-03 00:56:49.230404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:2656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.821 
[2024-12-03 00:56:49.230426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:36.821 [2024-12-03 00:56:49.234514] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a3bd10) 00:22:36.821 [2024-12-03 00:56:49.234557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:1568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.821 [2024-12-03 00:56:49.234567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:36.821 [2024-12-03 00:56:49.237366] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a3bd10) 00:22:36.821 [2024-12-03 00:56:49.237397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.821 [2024-12-03 00:56:49.237407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:36.821 [2024-12-03 00:56:49.241259] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a3bd10) 00:22:36.822 [2024-12-03 00:56:49.241290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.822 [2024-12-03 00:56:49.241300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:36.822 [2024-12-03 00:56:49.245493] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a3bd10) 00:22:36.822 [2024-12-03 00:56:49.245532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.822 [2024-12-03 00:56:49.245556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:36.822 [2024-12-03 00:56:49.250145] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a3bd10) 00:22:36.822 [2024-12-03 00:56:49.250187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:13216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.822 [2024-12-03 00:56:49.250208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:36.822 [2024-12-03 00:56:49.254972] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a3bd10) 00:22:36.822 [2024-12-03 00:56:49.255004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:4896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.822 [2024-12-03 00:56:49.255023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:36.822 [2024-12-03 00:56:49.259062] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a3bd10) 00:22:36.822 [2024-12-03 00:56:49.259094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24512 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.822 [2024-12-03 00:56:49.259118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:36.822 [2024-12-03 00:56:49.262606] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a3bd10) 00:22:36.822 [2024-12-03 00:56:49.262637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.822 [2024-12-03 00:56:49.262658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:36.822 [2024-12-03 00:56:49.266539] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a3bd10) 00:22:36.822 [2024-12-03 00:56:49.266579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.822 [2024-12-03 00:56:49.266600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:36.822 [2024-12-03 00:56:49.270424] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a3bd10) 00:22:36.822 [2024-12-03 00:56:49.270453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.822 [2024-12-03 00:56:49.270474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:36.822 [2024-12-03 00:56:49.273872] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a3bd10) 00:22:36.822 [2024-12-03 00:56:49.273903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:2432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.822 [2024-12-03 00:56:49.273915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:36.822 [2024-12-03 00:56:49.277527] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a3bd10) 00:22:36.822 [2024-12-03 00:56:49.277557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:1312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.822 [2024-12-03 00:56:49.277568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:36.822 [2024-12-03 00:56:49.280825] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a3bd10) 00:22:36.822 [2024-12-03 00:56:49.280856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:8320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.822 [2024-12-03 00:56:49.280868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:36.822 [2024-12-03 00:56:49.284899] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a3bd10) 00:22:36.822 [2024-12-03 00:56:49.284931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:1 nsid:1 lba:9824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.822 [2024-12-03 00:56:49.284954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:36.822 [2024-12-03 00:56:49.288462] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a3bd10) 00:22:36.822 [2024-12-03 00:56:49.288493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:14016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.822 [2024-12-03 00:56:49.288516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:36.822 [2024-12-03 00:56:49.292243] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a3bd10) 00:22:36.822 [2024-12-03 00:56:49.292275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:9632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.822 [2024-12-03 00:56:49.292296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:36.822 [2024-12-03 00:56:49.296314] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a3bd10) 00:22:36.822 [2024-12-03 00:56:49.296344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:9152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.822 [2024-12-03 00:56:49.296368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:36.822 [2024-12-03 00:56:49.300610] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a3bd10) 00:22:36.822 [2024-12-03 00:56:49.300641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:14592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.822 [2024-12-03 00:56:49.300651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:36.822 [2024-12-03 00:56:49.304267] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a3bd10) 00:22:36.822 [2024-12-03 00:56:49.304297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.822 [2024-12-03 00:56:49.304321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:36.822 [2024-12-03 00:56:49.308620] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a3bd10) 00:22:36.822 [2024-12-03 00:56:49.308652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:19136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.822 [2024-12-03 00:56:49.308675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:36.822 [2024-12-03 00:56:49.312313] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a3bd10) 00:22:36.822 [2024-12-03 00:56:49.312343] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:25344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.822 [2024-12-03 00:56:49.312365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:36.822 [2024-12-03 00:56:49.315993] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a3bd10) 00:22:36.822 [2024-12-03 00:56:49.316023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:11264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.822 [2024-12-03 00:56:49.316033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:36.822 [2024-12-03 00:56:49.318906] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a3bd10) 00:22:36.822 [2024-12-03 00:56:49.318939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:5728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.822 [2024-12-03 00:56:49.318949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:36.822 [2024-12-03 00:56:49.322681] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a3bd10) 00:22:36.822 [2024-12-03 00:56:49.322727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:23040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.822 [2024-12-03 00:56:49.322738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:36.823 [2024-12-03 00:56:49.326226] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a3bd10) 00:22:36.823 [2024-12-03 00:56:49.326258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:15072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.823 [2024-12-03 00:56:49.326278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:36.823 [2024-12-03 00:56:49.330460] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a3bd10) 00:22:36.823 [2024-12-03 00:56:49.330492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:21536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.823 [2024-12-03 00:56:49.330503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:37.085 [2024-12-03 00:56:49.334141] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a3bd10) 00:22:37.085 [2024-12-03 00:56:49.334177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:11904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.085 [2024-12-03 00:56:49.334195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:37.085 [2024-12-03 00:56:49.338292] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a3bd10) 
00:22:37.085 [2024-12-03 00:56:49.338324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:23168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.085 [2024-12-03 00:56:49.338344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:37.085 [2024-12-03 00:56:49.341465] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a3bd10) 00:22:37.085 [2024-12-03 00:56:49.341494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:8864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.085 [2024-12-03 00:56:49.341504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:37.085 [2024-12-03 00:56:49.345491] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a3bd10) 00:22:37.085 [2024-12-03 00:56:49.345522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:9088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.085 [2024-12-03 00:56:49.345544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:37.085 [2024-12-03 00:56:49.348548] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a3bd10) 00:22:37.085 [2024-12-03 00:56:49.348579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.085 [2024-12-03 00:56:49.348598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:37.085 [2024-12-03 00:56:49.353162] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a3bd10) 00:22:37.085 [2024-12-03 00:56:49.353192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:19936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.085 [2024-12-03 00:56:49.353216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:37.085 [2024-12-03 00:56:49.356560] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a3bd10) 00:22:37.085 [2024-12-03 00:56:49.356591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:16096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.085 [2024-12-03 00:56:49.356612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:37.085 [2024-12-03 00:56:49.360099] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a3bd10) 00:22:37.085 [2024-12-03 00:56:49.360130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:1728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.085 [2024-12-03 00:56:49.360153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:37.085 [2024-12-03 00:56:49.363717] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: 
*ERROR*: data digest error on tqpair=(0x1a3bd10) 00:22:37.085 [2024-12-03 00:56:49.363748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:13536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.085 [2024-12-03 00:56:49.363771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:37.085 [2024-12-03 00:56:49.367009] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a3bd10) 00:22:37.085 [2024-12-03 00:56:49.367040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:18336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.085 [2024-12-03 00:56:49.367062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:37.085 [2024-12-03 00:56:49.370537] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a3bd10) 00:22:37.085 [2024-12-03 00:56:49.370577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:23264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.085 [2024-12-03 00:56:49.370601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:37.085 [2024-12-03 00:56:49.374095] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a3bd10) 00:22:37.085 [2024-12-03 00:56:49.374126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:7264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.085 [2024-12-03 00:56:49.374149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:37.085 [2024-12-03 00:56:49.378127] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a3bd10) 00:22:37.085 [2024-12-03 00:56:49.378160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:21792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.085 [2024-12-03 00:56:49.378190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:37.085 [2024-12-03 00:56:49.381313] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a3bd10) 00:22:37.085 [2024-12-03 00:56:49.381345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:2304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.085 [2024-12-03 00:56:49.381368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:37.085 [2024-12-03 00:56:49.384646] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a3bd10) 00:22:37.085 [2024-12-03 00:56:49.384690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:23392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.085 [2024-12-03 00:56:49.384701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:37.085 [2024-12-03 00:56:49.388488] 
nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a3bd10) 00:22:37.085 [2024-12-03 00:56:49.388520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:8352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.085 [2024-12-03 00:56:49.388542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:37.085 [2024-12-03 00:56:49.391374] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a3bd10) 00:22:37.085 [2024-12-03 00:56:49.391406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:22176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.085 [2024-12-03 00:56:49.391439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:37.085 [2024-12-03 00:56:49.395508] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a3bd10) 00:22:37.086 [2024-12-03 00:56:49.395536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:18848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.086 [2024-12-03 00:56:49.395555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:37.086 [2024-12-03 00:56:49.398989] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a3bd10) 00:22:37.086 [2024-12-03 00:56:49.399020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:15168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.086 [2024-12-03 00:56:49.399031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:37.086 [2024-12-03 00:56:49.402678] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a3bd10) 00:22:37.086 [2024-12-03 00:56:49.402710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:14816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.086 [2024-12-03 00:56:49.402732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:37.086 [2024-12-03 00:56:49.406641] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a3bd10) 00:22:37.086 [2024-12-03 00:56:49.406671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:23360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.086 [2024-12-03 00:56:49.406682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:37.086 [2024-12-03 00:56:49.410184] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a3bd10) 00:22:37.086 [2024-12-03 00:56:49.410226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:18144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.086 [2024-12-03 00:56:49.410238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 
dnr:0 00:22:37.086 [2024-12-03 00:56:49.413336] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a3bd10) 00:22:37.086 [2024-12-03 00:56:49.413367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:18336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.086 [2024-12-03 00:56:49.413390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:37.086 [2024-12-03 00:56:49.417029] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a3bd10) 00:22:37.086 [2024-12-03 00:56:49.417060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.086 [2024-12-03 00:56:49.417082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:37.086 [2024-12-03 00:56:49.419980] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a3bd10) 00:22:37.086 [2024-12-03 00:56:49.420012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:16128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.086 [2024-12-03 00:56:49.420022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:37.086 [2024-12-03 00:56:49.423986] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a3bd10) 00:22:37.086 [2024-12-03 00:56:49.424019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:18496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.086 [2024-12-03 00:56:49.424043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:37.086 [2024-12-03 00:56:49.427866] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a3bd10) 00:22:37.086 [2024-12-03 00:56:49.427899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:11104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.086 [2024-12-03 00:56:49.427910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:37.086 [2024-12-03 00:56:49.431320] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a3bd10) 00:22:37.086 [2024-12-03 00:56:49.431351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:5088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.086 [2024-12-03 00:56:49.431362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:37.086 [2024-12-03 00:56:49.435182] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a3bd10) 00:22:37.086 [2024-12-03 00:56:49.435213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.086 [2024-12-03 00:56:49.435223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:37.086 [2024-12-03 00:56:49.438802] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a3bd10) 00:22:37.086 [2024-12-03 00:56:49.438834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:15104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.086 [2024-12-03 00:56:49.438844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:37.086 [2024-12-03 00:56:49.441606] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a3bd10) 00:22:37.086 [2024-12-03 00:56:49.441637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:9024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.086 [2024-12-03 00:56:49.441657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:37.086 [2024-12-03 00:56:49.445173] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a3bd10) 00:22:37.086 [2024-12-03 00:56:49.445204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:5984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.086 [2024-12-03 00:56:49.445226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:37.086 [2024-12-03 00:56:49.449159] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a3bd10) 00:22:37.086 [2024-12-03 00:56:49.449189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.086 [2024-12-03 00:56:49.449211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:37.086 [2024-12-03 00:56:49.453176] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a3bd10) 00:22:37.086 [2024-12-03 00:56:49.453206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.086 [2024-12-03 00:56:49.453228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:37.086 [2024-12-03 00:56:49.457084] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a3bd10) 00:22:37.086 [2024-12-03 00:56:49.457116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:8864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.086 [2024-12-03 00:56:49.457137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:37.086 [2024-12-03 00:56:49.460625] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a3bd10) 00:22:37.086 [2024-12-03 00:56:49.460656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:12704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.086 [2024-12-03 00:56:49.460677] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:37.086 [2024-12-03 00:56:49.463404] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a3bd10) 00:22:37.086 [2024-12-03 00:56:49.463448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.086 [2024-12-03 00:56:49.463470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:37.086 [2024-12-03 00:56:49.467133] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a3bd10) 00:22:37.086 [2024-12-03 00:56:49.467165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:6368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.086 [2024-12-03 00:56:49.467175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:37.086 [2024-12-03 00:56:49.470961] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a3bd10) 00:22:37.087 [2024-12-03 00:56:49.470992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.087 [2024-12-03 00:56:49.471004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:37.087 [2024-12-03 00:56:49.473814] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a3bd10) 00:22:37.087 [2024-12-03 00:56:49.473845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:12128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.087 [2024-12-03 00:56:49.473855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:37.087 [2024-12-03 00:56:49.477542] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a3bd10) 00:22:37.087 [2024-12-03 00:56:49.477573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:7744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.087 [2024-12-03 00:56:49.477583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:37.087 [2024-12-03 00:56:49.481400] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a3bd10) 00:22:37.087 [2024-12-03 00:56:49.481441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:14528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.087 [2024-12-03 00:56:49.481463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:37.087 [2024-12-03 00:56:49.484377] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a3bd10) 00:22:37.087 [2024-12-03 00:56:49.484406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:22656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.087 [2024-12-03 00:56:49.484430] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:37.087 [2024-12-03 00:56:49.488334] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a3bd10) 00:22:37.087 [2024-12-03 00:56:49.488365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:14048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.087 [2024-12-03 00:56:49.488387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:37.087 [2024-12-03 00:56:49.492288] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a3bd10) 00:22:37.087 [2024-12-03 00:56:49.492319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:15040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.087 [2024-12-03 00:56:49.492340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:37.087 [2024-12-03 00:56:49.496062] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a3bd10) 00:22:37.087 [2024-12-03 00:56:49.496093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:13280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.087 [2024-12-03 00:56:49.496115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:37.087 [2024-12-03 00:56:49.499908] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a3bd10) 00:22:37.087 [2024-12-03 00:56:49.499940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:3840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.087 [2024-12-03 00:56:49.499950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:37.087 [2024-12-03 00:56:49.502988] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a3bd10) 00:22:37.087 [2024-12-03 00:56:49.503020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.087 [2024-12-03 00:56:49.503044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:37.087 [2024-12-03 00:56:49.506706] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a3bd10) 00:22:37.087 [2024-12-03 00:56:49.506738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:6176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.087 [2024-12-03 00:56:49.506749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:37.087 [2024-12-03 00:56:49.510335] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a3bd10) 00:22:37.087 [2024-12-03 00:56:49.510368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:15808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:22:37.087 [2024-12-03 00:56:49.510388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:37.087 [2024-12-03 00:56:49.513885] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a3bd10) 00:22:37.087 [2024-12-03 00:56:49.513916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:13440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.087 [2024-12-03 00:56:49.513926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:37.087 [2024-12-03 00:56:49.517341] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a3bd10) 00:22:37.087 [2024-12-03 00:56:49.517373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:10944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.087 [2024-12-03 00:56:49.517384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:37.087 [2024-12-03 00:56:49.520686] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a3bd10) 00:22:37.087 [2024-12-03 00:56:49.520718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:1024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.087 [2024-12-03 00:56:49.520739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:37.087 [2024-12-03 00:56:49.524579] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a3bd10) 00:22:37.087 [2024-12-03 00:56:49.524620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:11168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.087 [2024-12-03 00:56:49.524641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:37.087 [2024-12-03 00:56:49.528301] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a3bd10) 00:22:37.087 [2024-12-03 00:56:49.528332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.087 [2024-12-03 00:56:49.528355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:37.087 [2024-12-03 00:56:49.531193] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a3bd10) 00:22:37.087 [2024-12-03 00:56:49.531226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:9760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.087 [2024-12-03 00:56:49.531247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:37.087 [2024-12-03 00:56:49.534594] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a3bd10) 00:22:37.087 [2024-12-03 00:56:49.534627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:12960 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.087 [2024-12-03 00:56:49.534647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:37.087 [2024-12-03 00:56:49.537888] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a3bd10) 00:22:37.087 [2024-12-03 00:56:49.537918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:10528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.087 [2024-12-03 00:56:49.537929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:37.087 [2024-12-03 00:56:49.541839] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a3bd10) 00:22:37.087 [2024-12-03 00:56:49.541872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:16928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.087 [2024-12-03 00:56:49.541883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:37.087 [2024-12-03 00:56:49.544948] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a3bd10) 00:22:37.088 [2024-12-03 00:56:49.544981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:24864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.088 [2024-12-03 00:56:49.544991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:37.088 [2024-12-03 00:56:49.548586] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a3bd10) 00:22:37.088 [2024-12-03 00:56:49.548619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.088 [2024-12-03 00:56:49.548641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:37.088 [2024-12-03 00:56:49.552042] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a3bd10) 00:22:37.088 [2024-12-03 00:56:49.552073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.088 [2024-12-03 00:56:49.552096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:37.088 [2024-12-03 00:56:49.555785] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a3bd10) 00:22:37.088 [2024-12-03 00:56:49.555818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:15904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.088 [2024-12-03 00:56:49.555839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:37.088 [2024-12-03 00:56:49.559514] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a3bd10) 00:22:37.088 [2024-12-03 00:56:49.559552] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:8 nsid:1 lba:8192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.088 [2024-12-03 00:56:49.559571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:37.088 [2024-12-03 00:56:49.563467] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a3bd10) 00:22:37.088 [2024-12-03 00:56:49.563497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.088 [2024-12-03 00:56:49.563518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:37.088 [2024-12-03 00:56:49.565990] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a3bd10) 00:22:37.088 [2024-12-03 00:56:49.566019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:12928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.088 [2024-12-03 00:56:49.566042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:37.088 [2024-12-03 00:56:49.570641] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a3bd10) 00:22:37.088 [2024-12-03 00:56:49.570673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:2272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.088 [2024-12-03 00:56:49.570683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:37.088 [2024-12-03 00:56:49.573982] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a3bd10) 00:22:37.088 [2024-12-03 00:56:49.574013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:3392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.088 [2024-12-03 00:56:49.574034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:37.088 [2024-12-03 00:56:49.577517] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a3bd10) 00:22:37.088 [2024-12-03 00:56:49.577547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:13184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.088 [2024-12-03 00:56:49.577569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:37.088 [2024-12-03 00:56:49.580938] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a3bd10) 00:22:37.088 [2024-12-03 00:56:49.580969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.088 [2024-12-03 00:56:49.580992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:37.088 [2024-12-03 00:56:49.584752] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a3bd10) 00:22:37.088 [2024-12-03 
00:56:49.584784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:14176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.088 [2024-12-03 00:56:49.584806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:37.088 [2024-12-03 00:56:49.588373] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a3bd10) 00:22:37.088 [2024-12-03 00:56:49.588404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:19808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.088 [2024-12-03 00:56:49.588437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:37.088 [2024-12-03 00:56:49.591847] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a3bd10) 00:22:37.088 [2024-12-03 00:56:49.591890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.088 [2024-12-03 00:56:49.591913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:37.088 [2024-12-03 00:56:49.595784] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a3bd10) 00:22:37.088 [2024-12-03 00:56:49.595824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.088 [2024-12-03 00:56:49.595847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:37.350 [2024-12-03 00:56:49.599530] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a3bd10) 00:22:37.350 [2024-12-03 00:56:49.599561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.350 [2024-12-03 00:56:49.599582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:37.350 [2024-12-03 00:56:49.603163] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a3bd10) 00:22:37.350 [2024-12-03 00:56:49.603195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:11968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.350 [2024-12-03 00:56:49.603217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:37.350 [2024-12-03 00:56:49.606856] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a3bd10) 00:22:37.350 [2024-12-03 00:56:49.606889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:18656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.350 [2024-12-03 00:56:49.606899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:37.350 [2024-12-03 00:56:49.610541] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest 
error on tqpair=(0x1a3bd10) 00:22:37.350 [2024-12-03 00:56:49.610583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:14880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.350 [2024-12-03 00:56:49.610603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:37.350 [2024-12-03 00:56:49.614823] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a3bd10) 00:22:37.350 [2024-12-03 00:56:49.614852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:10496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.350 [2024-12-03 00:56:49.614863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:37.350 [2024-12-03 00:56:49.618466] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a3bd10) 00:22:37.350 [2024-12-03 00:56:49.618498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:15840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.350 [2024-12-03 00:56:49.618518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:37.350 [2024-12-03 00:56:49.622058] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a3bd10) 00:22:37.350 [2024-12-03 00:56:49.622089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.350 [2024-12-03 00:56:49.622113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:37.350 [2024-12-03 00:56:49.625563] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a3bd10) 00:22:37.350 [2024-12-03 00:56:49.625594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:5376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.350 [2024-12-03 00:56:49.625614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:37.350 [2024-12-03 00:56:49.628859] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a3bd10) 00:22:37.350 [2024-12-03 00:56:49.628891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:19488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.350 [2024-12-03 00:56:49.628902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:37.350 [2024-12-03 00:56:49.632734] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a3bd10) 00:22:37.350 [2024-12-03 00:56:49.632765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:13568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.350 [2024-12-03 00:56:49.632776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:37.350 [2024-12-03 00:56:49.636312] 
nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a3bd10) 00:22:37.350 [2024-12-03 00:56:49.636344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.350 [2024-12-03 00:56:49.636365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:37.350 [2024-12-03 00:56:49.640023] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a3bd10) 00:22:37.350 [2024-12-03 00:56:49.640054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:21408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.350 [2024-12-03 00:56:49.640077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:37.350 [2024-12-03 00:56:49.643639] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a3bd10) 00:22:37.350 [2024-12-03 00:56:49.643670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:11904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.350 [2024-12-03 00:56:49.643691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:37.350 [2024-12-03 00:56:49.647226] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a3bd10) 00:22:37.350 [2024-12-03 00:56:49.647257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:22752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.350 [2024-12-03 00:56:49.647281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:37.350 [2024-12-03 00:56:49.650303] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a3bd10) 00:22:37.350 [2024-12-03 00:56:49.650335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:3712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.350 [2024-12-03 00:56:49.650356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:37.350 [2024-12-03 00:56:49.654328] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a3bd10) 00:22:37.350 [2024-12-03 00:56:49.654361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:24896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.350 [2024-12-03 00:56:49.654381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:37.351 [2024-12-03 00:56:49.658305] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a3bd10) 00:22:37.351 [2024-12-03 00:56:49.658337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.351 [2024-12-03 00:56:49.658356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 
dnr:0 00:22:37.351 [2024-12-03 00:56:49.662201] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a3bd10) 00:22:37.351 [2024-12-03 00:56:49.662238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:23488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.351 [2024-12-03 00:56:49.662259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:37.351 [2024-12-03 00:56:49.665814] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a3bd10) 00:22:37.351 [2024-12-03 00:56:49.665844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:15360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.351 [2024-12-03 00:56:49.665855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:37.351 [2024-12-03 00:56:49.669688] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a3bd10) 00:22:37.351 [2024-12-03 00:56:49.669717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:5984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.351 [2024-12-03 00:56:49.669728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:37.351 [2024-12-03 00:56:49.672805] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a3bd10) 00:22:37.351 [2024-12-03 00:56:49.672836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.351 [2024-12-03 00:56:49.672846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:37.351 [2024-12-03 00:56:49.676378] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a3bd10) 00:22:37.351 [2024-12-03 00:56:49.676409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:22016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.351 [2024-12-03 00:56:49.676443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:37.351 [2024-12-03 00:56:49.679936] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a3bd10) 00:22:37.351 [2024-12-03 00:56:49.679968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:23936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.351 [2024-12-03 00:56:49.679990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:37.351 [2024-12-03 00:56:49.683083] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a3bd10) 00:22:37.351 [2024-12-03 00:56:49.683114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:19840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.351 [2024-12-03 00:56:49.683135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:37.351 [2024-12-03 00:56:49.686855] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a3bd10) 00:22:37.351 [2024-12-03 00:56:49.686885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:21952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.351 [2024-12-03 00:56:49.686895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:37.351 [2024-12-03 00:56:49.690206] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a3bd10) 00:22:37.351 [2024-12-03 00:56:49.690240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:3616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.351 [2024-12-03 00:56:49.690255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:37.351 [2024-12-03 00:56:49.693220] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a3bd10) 00:22:37.351 [2024-12-03 00:56:49.693249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:12544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.351 [2024-12-03 00:56:49.693272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:37.351 [2024-12-03 00:56:49.697521] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a3bd10) 00:22:37.351 [2024-12-03 00:56:49.697551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:24800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.351 [2024-12-03 00:56:49.697574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:37.351 [2024-12-03 00:56:49.701552] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a3bd10) 00:22:37.351 [2024-12-03 00:56:49.701583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:6944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.351 [2024-12-03 00:56:49.701605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:37.351 [2024-12-03 00:56:49.704585] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a3bd10) 00:22:37.351 [2024-12-03 00:56:49.704616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:14528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.351 [2024-12-03 00:56:49.704638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:37.351 [2024-12-03 00:56:49.708500] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a3bd10) 00:22:37.351 [2024-12-03 00:56:49.708530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:3616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.351 [2024-12-03 00:56:49.708553] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:37.351 [2024-12-03 00:56:49.712487] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a3bd10) 00:22:37.351 [2024-12-03 00:56:49.712518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:16704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.351 [2024-12-03 00:56:49.712540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:37.351 [2024-12-03 00:56:49.716273] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a3bd10) 00:22:37.351 [2024-12-03 00:56:49.716305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:6848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.351 [2024-12-03 00:56:49.716329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:37.351 [2024-12-03 00:56:49.719922] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a3bd10) 00:22:37.351 [2024-12-03 00:56:49.719954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:21504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.351 [2024-12-03 00:56:49.719964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:37.351 [2024-12-03 00:56:49.723894] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a3bd10) 00:22:37.351 [2024-12-03 00:56:49.723924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:14176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.351 [2024-12-03 00:56:49.723934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:37.351 [2024-12-03 00:56:49.727171] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a3bd10) 00:22:37.351 [2024-12-03 00:56:49.727203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:23456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.351 [2024-12-03 00:56:49.727226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:37.351 [2024-12-03 00:56:49.730278] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a3bd10) 00:22:37.352 [2024-12-03 00:56:49.730311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:1376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.352 [2024-12-03 00:56:49.730322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:37.352 [2024-12-03 00:56:49.733738] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a3bd10) 00:22:37.352 [2024-12-03 00:56:49.733769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:13440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:22:37.352 [2024-12-03 00:56:49.733789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:37.352 [2024-12-03 00:56:49.737257] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a3bd10) 00:22:37.352 [2024-12-03 00:56:49.737288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:22592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.352 [2024-12-03 00:56:49.737312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:37.352 [2024-12-03 00:56:49.740603] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a3bd10) 00:22:37.352 [2024-12-03 00:56:49.740647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:5056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.352 [2024-12-03 00:56:49.740667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:37.352 [2024-12-03 00:56:49.744230] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a3bd10) 00:22:37.352 [2024-12-03 00:56:49.744261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.352 [2024-12-03 00:56:49.744282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:37.352 [2024-12-03 00:56:49.747653] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a3bd10) 00:22:37.352 [2024-12-03 00:56:49.747685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:5280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.352 [2024-12-03 00:56:49.747708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:37.352 [2024-12-03 00:56:49.751060] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a3bd10) 00:22:37.352 [2024-12-03 00:56:49.751092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:14016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.352 [2024-12-03 00:56:49.751115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:37.352 [2024-12-03 00:56:49.754860] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a3bd10) 00:22:37.352 [2024-12-03 00:56:49.754892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:19232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.352 [2024-12-03 00:56:49.754903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:37.352 [2024-12-03 00:56:49.758199] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a3bd10) 00:22:37.352 [2024-12-03 00:56:49.758238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 
lba:15744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.352 [2024-12-03 00:56:49.758259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:37.352 [2024-12-03 00:56:49.762032] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a3bd10) 00:22:37.352 [2024-12-03 00:56:49.762075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:1248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.352 [2024-12-03 00:56:49.762085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:37.352 [2024-12-03 00:56:49.765785] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a3bd10) 00:22:37.352 [2024-12-03 00:56:49.765817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.352 [2024-12-03 00:56:49.765839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:37.352 [2024-12-03 00:56:49.770009] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a3bd10) 00:22:37.352 [2024-12-03 00:56:49.770040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:9440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.352 [2024-12-03 00:56:49.770061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:37.352 [2024-12-03 00:56:49.773764] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a3bd10) 00:22:37.352 [2024-12-03 00:56:49.773795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:7904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.352 [2024-12-03 00:56:49.773817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:37.352 [2024-12-03 00:56:49.777357] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a3bd10) 00:22:37.352 [2024-12-03 00:56:49.777386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:23488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.352 [2024-12-03 00:56:49.777409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:37.352 [2024-12-03 00:56:49.781087] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a3bd10) 00:22:37.352 [2024-12-03 00:56:49.781118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:24224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.352 [2024-12-03 00:56:49.781140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:37.352 [2024-12-03 00:56:49.784006] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a3bd10) 00:22:37.352 [2024-12-03 00:56:49.784037] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.352 [2024-12-03 00:56:49.784060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:37.352 [2024-12-03 00:56:49.788169] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a3bd10) 00:22:37.352 [2024-12-03 00:56:49.788201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:9984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.352 [2024-12-03 00:56:49.788222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:37.352 [2024-12-03 00:56:49.791589] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a3bd10) 00:22:37.352 [2024-12-03 00:56:49.791621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.352 [2024-12-03 00:56:49.791642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:37.352 [2024-12-03 00:56:49.794698] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a3bd10) 00:22:37.352 [2024-12-03 00:56:49.794740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:6624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.352 [2024-12-03 00:56:49.794750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:37.352 [2024-12-03 00:56:49.798518] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a3bd10) 00:22:37.352 [2024-12-03 00:56:49.798562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:5088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.352 [2024-12-03 00:56:49.798573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:37.352 [2024-12-03 00:56:49.801708] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a3bd10) 00:22:37.352 [2024-12-03 00:56:49.801738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:15584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.352 [2024-12-03 00:56:49.801761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:37.352 [2024-12-03 00:56:49.805677] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a3bd10) 00:22:37.352 [2024-12-03 00:56:49.805708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:4224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.353 [2024-12-03 00:56:49.805729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:37.353 [2024-12-03 00:56:49.809024] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a3bd10) 
00:22:37.353 [2024-12-03 00:56:49.809053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:23200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.353 [2024-12-03 00:56:49.809064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:37.353 [2024-12-03 00:56:49.811985] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a3bd10) 00:22:37.353 [2024-12-03 00:56:49.812016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:4608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.353 [2024-12-03 00:56:49.812037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:37.353 [2024-12-03 00:56:49.815401] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a3bd10) 00:22:37.353 [2024-12-03 00:56:49.815443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:11616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.353 [2024-12-03 00:56:49.815466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:37.353 [2024-12-03 00:56:49.819059] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a3bd10) 00:22:37.353 [2024-12-03 00:56:49.819090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.353 [2024-12-03 00:56:49.819112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:37.353 [2024-12-03 00:56:49.822921] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a3bd10) 00:22:37.353 [2024-12-03 00:56:49.822953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:24768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.353 [2024-12-03 00:56:49.822976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:37.353 [2024-12-03 00:56:49.826189] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a3bd10) 00:22:37.353 [2024-12-03 00:56:49.826229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:1120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.353 [2024-12-03 00:56:49.826240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:37.353 [2024-12-03 00:56:49.829967] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a3bd10) 00:22:37.353 [2024-12-03 00:56:49.829998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.353 [2024-12-03 00:56:49.830021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:37.353 [2024-12-03 00:56:49.833245] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: 
*ERROR*: data digest error on tqpair=(0x1a3bd10) 00:22:37.353 [2024-12-03 00:56:49.833277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.353 [2024-12-03 00:56:49.833298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:37.353 [2024-12-03 00:56:49.837431] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a3bd10) 00:22:37.353 [2024-12-03 00:56:49.837462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.353 [2024-12-03 00:56:49.837483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:37.353 [2024-12-03 00:56:49.841133] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a3bd10) 00:22:37.353 [2024-12-03 00:56:49.841164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:21664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.353 [2024-12-03 00:56:49.841186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:37.353 [2024-12-03 00:56:49.845306] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a3bd10) 00:22:37.353 [2024-12-03 00:56:49.845337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.353 [2024-12-03 00:56:49.845358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:37.353 [2024-12-03 00:56:49.849093] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a3bd10) 00:22:37.353 [2024-12-03 00:56:49.849123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:10400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.353 [2024-12-03 00:56:49.849144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:37.353 [2024-12-03 00:56:49.852249] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a3bd10) 00:22:37.353 [2024-12-03 00:56:49.852279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:10944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.353 [2024-12-03 00:56:49.852302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:37.353 [2024-12-03 00:56:49.855581] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a3bd10) 00:22:37.353 [2024-12-03 00:56:49.855614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:7648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.353 [2024-12-03 00:56:49.855633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:37.353 [2024-12-03 00:56:49.858952] 
nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a3bd10) 00:22:37.353 [2024-12-03 00:56:49.858984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:9824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.353 [2024-12-03 00:56:49.859007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:37.613 [2024-12-03 00:56:49.863841] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a3bd10) 00:22:37.613 [2024-12-03 00:56:49.863872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:4544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.613 [2024-12-03 00:56:49.863883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:37.613 [2024-12-03 00:56:49.866755] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a3bd10) 00:22:37.613 [2024-12-03 00:56:49.866787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:1184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.613 [2024-12-03 00:56:49.866798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:37.613 [2024-12-03 00:56:49.870740] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a3bd10) 00:22:37.613 [2024-12-03 00:56:49.870772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:4544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.613 [2024-12-03 00:56:49.870782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:37.613 [2024-12-03 00:56:49.874139] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a3bd10) 00:22:37.613 [2024-12-03 00:56:49.874178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:8928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.613 [2024-12-03 00:56:49.874189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:37.613 00:22:37.613 Latency(us) 00:22:37.613 [2024-12-03T00:56:50.128Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:37.613 [2024-12-03T00:56:50.128Z] Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:22:37.613 nvme0n1 : 2.00 8524.78 1065.60 0.00 0.00 1873.80 692.60 5898.24 00:22:37.613 [2024-12-03T00:56:50.128Z] =================================================================================================================== 00:22:37.613 [2024-12-03T00:56:50.128Z] Total : 8524.78 1065.60 0.00 0.00 1873.80 692.60 5898.24 00:22:37.613 0 00:22:37.613 00:56:49 -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:22:37.613 00:56:49 -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:22:37.613 00:56:49 -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:22:37.613 | .driver_specific 00:22:37.613 | .nvme_error 00:22:37.613 | .status_code 00:22:37.613 | .command_transient_transport_error' 00:22:37.613 00:56:49 -- host/digest.sh@18 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:22:37.872 00:56:50 -- host/digest.sh@71 -- # (( 550 > 0 )) 00:22:37.872 00:56:50 -- host/digest.sh@73 -- # killprocess 97929 00:22:37.872 00:56:50 -- common/autotest_common.sh@936 -- # '[' -z 97929 ']' 00:22:37.872 00:56:50 -- common/autotest_common.sh@940 -- # kill -0 97929 00:22:37.872 00:56:50 -- common/autotest_common.sh@941 -- # uname 00:22:37.872 00:56:50 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:22:37.872 00:56:50 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 97929 00:22:37.872 00:56:50 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:22:37.872 killing process with pid 97929 00:22:37.872 00:56:50 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:22:37.872 00:56:50 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 97929' 00:22:37.873 Received shutdown signal, test time was about 2.000000 seconds 00:22:37.873 00:22:37.873 Latency(us) 00:22:37.873 [2024-12-03T00:56:50.388Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:37.873 [2024-12-03T00:56:50.388Z] =================================================================================================================== 00:22:37.873 [2024-12-03T00:56:50.388Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:22:37.873 00:56:50 -- common/autotest_common.sh@955 -- # kill 97929 00:22:37.873 00:56:50 -- common/autotest_common.sh@960 -- # wait 97929 00:22:38.132 00:56:50 -- host/digest.sh@113 -- # run_bperf_err randwrite 4096 128 00:22:38.132 00:56:50 -- host/digest.sh@54 -- # local rw bs qd 00:22:38.132 00:56:50 -- host/digest.sh@56 -- # rw=randwrite 00:22:38.132 00:56:50 -- host/digest.sh@56 -- # bs=4096 00:22:38.132 00:56:50 -- host/digest.sh@56 -- # qd=128 00:22:38.132 00:56:50 -- host/digest.sh@58 -- # bperfpid=98019 00:22:38.132 00:56:50 -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z 00:22:38.132 00:56:50 -- host/digest.sh@60 -- # waitforlisten 98019 /var/tmp/bperf.sock 00:22:38.132 00:56:50 -- common/autotest_common.sh@829 -- # '[' -z 98019 ']' 00:22:38.132 00:56:50 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:22:38.132 00:56:50 -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:38.132 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:22:38.132 00:56:50 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:22:38.132 00:56:50 -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:38.132 00:56:50 -- common/autotest_common.sh@10 -- # set +x 00:22:38.132 [2024-12-03 00:56:50.480606] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
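The host/digest.sh@27/28 trace in this window is how the test decides pass/fail: bdev_get_iostat is issued over the bdevperf RPC socket and piped through jq to pull .driver_specific.nvme_error.status_code.command_transient_transport_error, and digest.sh@71 passes when that count is greater than zero (here 550). A minimal sketch of the same query, assuming the socket path and bdev name used in this run:

  # Sketch only: count the COMMAND TRANSIENT TRANSPORT ERROR completions recorded for nvme0n1,
  # using the same RPC and jq filter traced above (socket path and bdev name taken from this run).
  count=$(/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 \
          | jq -r '.bdevs[0] | .driver_specific | .nvme_error | .status_code | .command_transient_transport_error')
  (( count > 0 ))   # digest.sh@71: any transient transport errors means the digest-error path was exercised

The per-command error counters are only populated because the controller was created with --nvme-error-stat enabled, which is why the next run repeats that option before attaching.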
00:22:38.132 [2024-12-03 00:56:50.480692] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid98019 ] 00:22:38.132 [2024-12-03 00:56:50.610595] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:38.390 [2024-12-03 00:56:50.670437] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:22:38.958 00:56:51 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:38.958 00:56:51 -- common/autotest_common.sh@862 -- # return 0 00:22:38.958 00:56:51 -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:22:38.958 00:56:51 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:22:39.217 00:56:51 -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:22:39.217 00:56:51 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:39.217 00:56:51 -- common/autotest_common.sh@10 -- # set +x 00:22:39.217 00:56:51 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:39.217 00:56:51 -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:22:39.217 00:56:51 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:22:39.476 nvme0n1 00:22:39.477 00:56:51 -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:22:39.477 00:56:51 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:39.477 00:56:51 -- common/autotest_common.sh@10 -- # set +x 00:22:39.736 00:56:51 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:39.737 00:56:51 -- host/digest.sh@69 -- # bperf_py perform_tests 00:22:39.737 00:56:51 -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:22:39.737 Running I/O for 2 seconds... 
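The trace above is the complete setup for the randwrite 4096/128 error case: bdevperf is started against /var/tmp/bperf.sock, NVMe error statistics and unlimited bdev retries are enabled, the TCP controller is attached with data digest (--ddgst) so payload CRC32C is verified, crc32c corruption is injected every 256 operations, and perform_tests drives I/O for 2 seconds. A condensed sketch of that RPC sequence, assuming the sockets, address, and NQN seen in this run (the accel_error_inject_error calls go through rpc_cmd, i.e. the default RPC socket, presumably the NVMe-oF target side; the bdev_nvme_* calls go to the bdevperf socket):

  # Sketch of the digest.sh error-path setup traced above (paths, address, and NQN from this run).
  BPERF=/var/tmp/bperf.sock
  RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

  # bdevperf side: keep per-command NVMe error counters and retry transient errors indefinitely
  $RPC -s $BPERF bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1

  # default-socket side (rpc_cmd in the trace): make sure crc32c injection is off while connecting
  $RPC accel_error_inject_error -o crc32c -t disable

  # attach the TCP controller with data digest enabled so data CRC32C mismatches are detected
  $RPC -s $BPERF bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
       -n nqn.2016-06.io.spdk:cnode1 -b nvme0

  # corrupt every 256th crc32c computation, then run the 2-second randwrite workload
  $RPC accel_error_inject_error -o crc32c -t corrupt -i 256
  /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s $BPERF perform_tests

With corruption armed, each affected write completes with the data digest error and transient transport completion seen in the records that follow, which is what the later get_transient_errcount check counts.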
00:22:39.737 [2024-12-03 00:56:52.091020] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd60e0) with pdu=0x2000190eea00 00:22:39.737 [2024-12-03 00:56:52.091951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:21688 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:39.737 [2024-12-03 00:56:52.092015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:39.737 [2024-12-03 00:56:52.100665] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd60e0) with pdu=0x2000190e2c28 00:22:39.737 [2024-12-03 00:56:52.100955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:4842 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:39.737 [2024-12-03 00:56:52.100987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:39.737 [2024-12-03 00:56:52.112371] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd60e0) with pdu=0x2000190edd58 00:22:39.737 [2024-12-03 00:56:52.113499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:18508 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:39.737 [2024-12-03 00:56:52.113540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:22:39.737 [2024-12-03 00:56:52.119758] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd60e0) with pdu=0x2000190eee38 00:22:39.737 [2024-12-03 00:56:52.120629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:1718 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:39.737 [2024-12-03 00:56:52.120659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:22:39.737 [2024-12-03 00:56:52.129210] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd60e0) with pdu=0x2000190ddc00 00:22:39.737 [2024-12-03 00:56:52.130606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:11733 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:39.737 [2024-12-03 00:56:52.130653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:22:39.737 [2024-12-03 00:56:52.139290] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd60e0) with pdu=0x2000190f3a28 00:22:39.737 [2024-12-03 00:56:52.140142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:2318 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:39.737 [2024-12-03 00:56:52.140172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:22:39.737 [2024-12-03 00:56:52.148620] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd60e0) with pdu=0x2000190f3a28 00:22:39.737 [2024-12-03 00:56:52.149232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:18241 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:39.737 [2024-12-03 00:56:52.149272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 
sqhd:0071 p:0 m:0 dnr:0 00:22:39.737 [2024-12-03 00:56:52.157960] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd60e0) with pdu=0x2000190de8a8 00:22:39.737 [2024-12-03 00:56:52.158599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:20905 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:39.737 [2024-12-03 00:56:52.158651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:22:39.737 [2024-12-03 00:56:52.167278] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd60e0) with pdu=0x2000190f46d0 00:22:39.737 [2024-12-03 00:56:52.167860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:18001 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:39.737 [2024-12-03 00:56:52.167899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:22:39.737 [2024-12-03 00:56:52.176612] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd60e0) with pdu=0x2000190fac10 00:22:39.737 [2024-12-03 00:56:52.177226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:365 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:39.737 [2024-12-03 00:56:52.177265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:22:39.737 [2024-12-03 00:56:52.185991] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd60e0) with pdu=0x2000190f5378 00:22:39.737 [2024-12-03 00:56:52.186658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:10249 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:39.737 [2024-12-03 00:56:52.186701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:22:39.737 [2024-12-03 00:56:52.195602] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd60e0) with pdu=0x2000190e99d8 00:22:39.737 [2024-12-03 00:56:52.196584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:12633 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:39.737 [2024-12-03 00:56:52.196613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:22:39.737 [2024-12-03 00:56:52.204777] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd60e0) with pdu=0x2000190f7100 00:22:39.737 [2024-12-03 00:56:52.205789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:17184 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:39.737 [2024-12-03 00:56:52.205818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:22:39.737 [2024-12-03 00:56:52.214162] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd60e0) with pdu=0x2000190f1ca0 00:22:39.737 [2024-12-03 00:56:52.214529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:13432 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:39.737 [2024-12-03 00:56:52.214558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:38 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:22:39.737 [2024-12-03 00:56:52.223610] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd60e0) with pdu=0x2000190e5220 00:22:39.737 [2024-12-03 00:56:52.224340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:367 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:39.737 [2024-12-03 00:56:52.224380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:22:39.737 [2024-12-03 00:56:52.233209] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd60e0) with pdu=0x2000190f5be8 00:22:39.737 [2024-12-03 00:56:52.233861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:206 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:39.737 [2024-12-03 00:56:52.233901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:22:39.737 [2024-12-03 00:56:52.243938] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd60e0) with pdu=0x2000190ea680 00:22:39.737 [2024-12-03 00:56:52.245057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:11365 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:39.737 [2024-12-03 00:56:52.245087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:22:39.997 [2024-12-03 00:56:52.252832] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd60e0) with pdu=0x2000190ef6a8 00:22:39.997 [2024-12-03 00:56:52.253855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:25421 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:39.997 [2024-12-03 00:56:52.253896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:22:39.997 [2024-12-03 00:56:52.262849] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd60e0) with pdu=0x2000190f6cc8 00:22:39.997 [2024-12-03 00:56:52.263960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:4208 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:39.997 [2024-12-03 00:56:52.263989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:22:39.997 [2024-12-03 00:56:52.272462] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd60e0) with pdu=0x2000190f7100 00:22:39.997 [2024-12-03 00:56:52.272945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:3775 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:39.997 [2024-12-03 00:56:52.272974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:22:39.997 [2024-12-03 00:56:52.283091] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd60e0) with pdu=0x2000190ecc78 00:22:39.997 [2024-12-03 00:56:52.284599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:11123 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:39.997 [2024-12-03 00:56:52.284640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:22:39.997 [2024-12-03 00:56:52.292680] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd60e0) with pdu=0x2000190e6fa8 00:22:39.997 [2024-12-03 00:56:52.293634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:18876 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:39.997 [2024-12-03 00:56:52.293674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:22:39.997 [2024-12-03 00:56:52.301489] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd60e0) with pdu=0x2000190e8088 00:22:39.997 [2024-12-03 00:56:52.302264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:12854 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:39.997 [2024-12-03 00:56:52.302294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:22:39.997 [2024-12-03 00:56:52.311646] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd60e0) with pdu=0x2000190ee5c8 00:22:39.997 [2024-12-03 00:56:52.313074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:3988 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:39.997 [2024-12-03 00:56:52.313115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:22:39.997 [2024-12-03 00:56:52.322850] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd60e0) with pdu=0x2000190e88f8 00:22:39.997 [2024-12-03 00:56:52.323614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:12495 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:39.997 [2024-12-03 00:56:52.323656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:22:39.997 [2024-12-03 00:56:52.334345] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd60e0) with pdu=0x2000190ef270 00:22:39.997 [2024-12-03 00:56:52.335864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:13380 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:39.997 [2024-12-03 00:56:52.335893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:22:39.997 [2024-12-03 00:56:52.343649] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd60e0) with pdu=0x2000190e73e0 00:22:39.997 [2024-12-03 00:56:52.344607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:1565 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:39.997 [2024-12-03 00:56:52.344638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:22:39.997 [2024-12-03 00:56:52.352580] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd60e0) with pdu=0x2000190ea680 00:22:39.997 [2024-12-03 00:56:52.352875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:23396 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:39.997 [2024-12-03 00:56:52.352904] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:22:39.997 [2024-12-03 00:56:52.361967] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd60e0) with pdu=0x2000190f81e0 00:22:39.998 [2024-12-03 00:56:52.363014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:22059 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:39.998 [2024-12-03 00:56:52.363043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:22:39.998 [2024-12-03 00:56:52.371377] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd60e0) with pdu=0x2000190fdeb0 00:22:39.998 [2024-12-03 00:56:52.371714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:18966 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:39.998 [2024-12-03 00:56:52.371744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:22:39.998 [2024-12-03 00:56:52.381036] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd60e0) with pdu=0x2000190e9e10 00:22:39.998 [2024-12-03 00:56:52.381919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:7088 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:39.998 [2024-12-03 00:56:52.381947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:22:39.998 [2024-12-03 00:56:52.390587] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd60e0) with pdu=0x2000190e7818 00:22:39.998 [2024-12-03 00:56:52.391687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:20471 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:39.998 [2024-12-03 00:56:52.391716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:22:39.998 [2024-12-03 00:56:52.400416] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd60e0) with pdu=0x2000190e6fa8 00:22:39.998 [2024-12-03 00:56:52.401143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:20589 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:39.998 [2024-12-03 00:56:52.401185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:22:39.998 [2024-12-03 00:56:52.412930] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd60e0) with pdu=0x2000190ee190 00:22:39.998 [2024-12-03 00:56:52.414117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:4880 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:39.998 [2024-12-03 00:56:52.414158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:22:39.998 [2024-12-03 00:56:52.421860] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd60e0) with pdu=0x2000190fa3a0 00:22:39.998 [2024-12-03 00:56:52.422385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:11435 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:39.998 [2024-12-03 
00:56:52.422426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:22:39.998 [2024-12-03 00:56:52.432119] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd60e0) with pdu=0x2000190ebfd0 00:22:39.998 [2024-12-03 00:56:52.432510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:708 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:39.998 [2024-12-03 00:56:52.432539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:22:39.998 [2024-12-03 00:56:52.441048] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd60e0) with pdu=0x2000190e7c50 00:22:39.998 [2024-12-03 00:56:52.441336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:24997 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:39.998 [2024-12-03 00:56:52.441364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:22:39.998 [2024-12-03 00:56:52.451419] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd60e0) with pdu=0x2000190e01f8 00:22:39.998 [2024-12-03 00:56:52.452246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:1474 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:39.998 [2024-12-03 00:56:52.452283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:22:39.998 [2024-12-03 00:56:52.461035] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd60e0) with pdu=0x2000190eea00 00:22:39.998 [2024-12-03 00:56:52.461161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:24313 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:39.998 [2024-12-03 00:56:52.461180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:22:39.998 [2024-12-03 00:56:52.471985] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd60e0) with pdu=0x2000190f7100 00:22:39.998 [2024-12-03 00:56:52.473180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:2804 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:39.998 [2024-12-03 00:56:52.473221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:22:39.998 [2024-12-03 00:56:52.482238] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd60e0) with pdu=0x2000190df988 00:22:39.998 [2024-12-03 00:56:52.483595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:793 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:39.998 [2024-12-03 00:56:52.483636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:22:39.998 [2024-12-03 00:56:52.492535] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd60e0) with pdu=0x2000190e0ea0 00:22:39.998 [2024-12-03 00:56:52.493323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18796 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:22:39.998 [2024-12-03 00:56:52.493363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:22:39.998 [2024-12-03 00:56:52.501135] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd60e0) with pdu=0x2000190ed0b0 00:22:39.998 [2024-12-03 00:56:52.501778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:11959 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:39.998 [2024-12-03 00:56:52.501813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:22:40.257 [2024-12-03 00:56:52.512862] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd60e0) with pdu=0x2000190ebb98 00:22:40.257 [2024-12-03 00:56:52.513471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:14171 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:40.257 [2024-12-03 00:56:52.513510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:22:40.257 [2024-12-03 00:56:52.521285] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd60e0) with pdu=0x2000190f9f68 00:22:40.257 [2024-12-03 00:56:52.522061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:18235 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:40.257 [2024-12-03 00:56:52.522090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:22:40.257 [2024-12-03 00:56:52.531518] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd60e0) with pdu=0x2000190e5a90 00:22:40.257 [2024-12-03 00:56:52.531616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:25597 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:40.257 [2024-12-03 00:56:52.531634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:22:40.257 [2024-12-03 00:56:52.541619] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd60e0) with pdu=0x2000190f6458 00:22:40.257 [2024-12-03 00:56:52.542242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:5166 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:40.258 [2024-12-03 00:56:52.542282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:22:40.258 [2024-12-03 00:56:52.551611] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd60e0) with pdu=0x2000190e3498 00:22:40.258 [2024-12-03 00:56:52.552238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:19589 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:40.258 [2024-12-03 00:56:52.552274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:22:40.258 [2024-12-03 00:56:52.562158] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd60e0) with pdu=0x2000190e3498 00:22:40.258 [2024-12-03 00:56:52.563567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:16304 len:1 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 00:22:40.258 [2024-12-03 00:56:52.563609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:22:40.258 [2024-12-03 00:56:52.572326] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd60e0) with pdu=0x2000190f8e88 00:22:40.258 [2024-12-03 00:56:52.573623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:20 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:40.258 [2024-12-03 00:56:52.573663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:22:40.258 [2024-12-03 00:56:52.582422] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd60e0) with pdu=0x2000190f4298 00:22:40.258 [2024-12-03 00:56:52.583190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:20957 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:40.258 [2024-12-03 00:56:52.583224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:22:40.258 [2024-12-03 00:56:52.591967] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd60e0) with pdu=0x2000190f92c0 00:22:40.258 [2024-12-03 00:56:52.593038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20134 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:40.258 [2024-12-03 00:56:52.593066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:22:40.258 [2024-12-03 00:56:52.601454] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd60e0) with pdu=0x2000190e01f8 00:22:40.258 [2024-12-03 00:56:52.602523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:5962 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:40.258 [2024-12-03 00:56:52.602562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:22:40.258 [2024-12-03 00:56:52.611049] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd60e0) with pdu=0x2000190e5220 00:22:40.258 [2024-12-03 00:56:52.611859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:9985 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:40.258 [2024-12-03 00:56:52.611887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:22:40.258 [2024-12-03 00:56:52.619927] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd60e0) with pdu=0x2000190e0ea0 00:22:40.258 [2024-12-03 00:56:52.620546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:21099 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:40.258 [2024-12-03 00:56:52.620584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:22:40.258 [2024-12-03 00:56:52.629157] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd60e0) with pdu=0x2000190e3498 00:22:40.258 [2024-12-03 00:56:52.630317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:10152 
len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:40.258 [2024-12-03 00:56:52.630358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:22:40.258 [2024-12-03 00:56:52.638743] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd60e0) with pdu=0x2000190e5220 00:22:40.258 [2024-12-03 00:56:52.639201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:18444 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:40.258 [2024-12-03 00:56:52.639231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:22:40.258 [2024-12-03 00:56:52.648205] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd60e0) with pdu=0x2000190e7c50 00:22:40.258 [2024-12-03 00:56:52.648672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:7877 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:40.258 [2024-12-03 00:56:52.648701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:22:40.258 [2024-12-03 00:56:52.657399] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd60e0) with pdu=0x2000190e7818 00:22:40.258 [2024-12-03 00:56:52.658313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:6840 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:40.258 [2024-12-03 00:56:52.658353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:22:40.258 [2024-12-03 00:56:52.666991] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd60e0) with pdu=0x2000190fef90 00:22:40.258 [2024-12-03 00:56:52.667430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:11552 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:40.258 [2024-12-03 00:56:52.667458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:22:40.258 [2024-12-03 00:56:52.676492] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd60e0) with pdu=0x2000190e6b70 00:22:40.258 [2024-12-03 00:56:52.677326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:24876 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:40.258 [2024-12-03 00:56:52.677353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:22:40.258 [2024-12-03 00:56:52.686057] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd60e0) with pdu=0x2000190eaab8 00:22:40.258 [2024-12-03 00:56:52.686982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:5133 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:40.258 [2024-12-03 00:56:52.687012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:22:40.258 [2024-12-03 00:56:52.695579] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd60e0) with pdu=0x2000190eea00 00:22:40.258 [2024-12-03 00:56:52.696766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:79 nsid:1 lba:972 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:40.258 [2024-12-03 00:56:52.696794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:22:40.258 [2024-12-03 00:56:52.703965] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd60e0) with pdu=0x2000190f6020 00:22:40.258 [2024-12-03 00:56:52.704268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:20056 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:40.258 [2024-12-03 00:56:52.704295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:22:40.258 [2024-12-03 00:56:52.714453] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd60e0) with pdu=0x2000190e3d08 00:22:40.258 [2024-12-03 00:56:52.715308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:24844 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:40.258 [2024-12-03 00:56:52.715336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:22:40.258 [2024-12-03 00:56:52.722954] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd60e0) with pdu=0x2000190fd640 00:22:40.258 [2024-12-03 00:56:52.723489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:19513 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:40.258 [2024-12-03 00:56:52.723518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:22:40.258 [2024-12-03 00:56:52.732545] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd60e0) with pdu=0x2000190de038 00:22:40.258 [2024-12-03 00:56:52.733227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:23781 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:40.258 [2024-12-03 00:56:52.733267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:22:40.258 [2024-12-03 00:56:52.742202] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd60e0) with pdu=0x2000190fe720 00:22:40.258 [2024-12-03 00:56:52.742917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:23110 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:40.258 [2024-12-03 00:56:52.742956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:22:40.258 [2024-12-03 00:56:52.751762] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd60e0) with pdu=0x2000190f2510 00:22:40.258 [2024-12-03 00:56:52.752799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:9025 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:40.258 [2024-12-03 00:56:52.752828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:22:40.258 [2024-12-03 00:56:52.761882] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd60e0) with pdu=0x2000190fc998 00:22:40.258 [2024-12-03 00:56:52.763311] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:858 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:40.258 [2024-12-03 00:56:52.763340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:22:40.518 [2024-12-03 00:56:52.772851] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd60e0) with pdu=0x2000190f57b0 00:22:40.518 [2024-12-03 00:56:52.773802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:9581 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:40.518 [2024-12-03 00:56:52.773831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:22:40.518 [2024-12-03 00:56:52.781593] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd60e0) with pdu=0x2000190f35f0 00:22:40.518 [2024-12-03 00:56:52.781908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:6545 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:40.518 [2024-12-03 00:56:52.781937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:22:40.518 [2024-12-03 00:56:52.791055] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd60e0) with pdu=0x2000190e3d08 00:22:40.518 [2024-12-03 00:56:52.791745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:5771 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:40.518 [2024-12-03 00:56:52.791785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:22:40.518 [2024-12-03 00:56:52.800459] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd60e0) with pdu=0x2000190e7818 00:22:40.518 [2024-12-03 00:56:52.800738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:1582 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:40.518 [2024-12-03 00:56:52.800767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:22:40.518 [2024-12-03 00:56:52.810010] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd60e0) with pdu=0x2000190f6020 00:22:40.518 [2024-12-03 00:56:52.810863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:16128 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:40.518 [2024-12-03 00:56:52.810893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:22:40.518 [2024-12-03 00:56:52.819444] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd60e0) with pdu=0x2000190f8a50 00:22:40.518 [2024-12-03 00:56:52.820558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:11410 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:40.518 [2024-12-03 00:56:52.820587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:22:40.518 [2024-12-03 00:56:52.828532] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd60e0) with pdu=0x2000190f81e0 00:22:40.518 [2024-12-03 
00:56:52.828798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:24289 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:40.518 [2024-12-03 00:56:52.828826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:22:40.518 [2024-12-03 00:56:52.838064] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd60e0) with pdu=0x2000190f4b08 00:22:40.518 [2024-12-03 00:56:52.838960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:16421 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:40.518 [2024-12-03 00:56:52.838988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:22:40.518 [2024-12-03 00:56:52.847428] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd60e0) with pdu=0x2000190f5be8 00:22:40.518 [2024-12-03 00:56:52.847747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:4969 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:40.518 [2024-12-03 00:56:52.847776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:22:40.518 [2024-12-03 00:56:52.856843] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd60e0) with pdu=0x2000190eb760 00:22:40.518 [2024-12-03 00:56:52.857687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:7503 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:40.518 [2024-12-03 00:56:52.857716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:22:40.518 [2024-12-03 00:56:52.866161] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd60e0) with pdu=0x2000190e8088 00:22:40.518 [2024-12-03 00:56:52.866991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:1893 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:40.518 [2024-12-03 00:56:52.867019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:22:40.518 [2024-12-03 00:56:52.875611] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd60e0) with pdu=0x2000190eea00 00:22:40.518 [2024-12-03 00:56:52.876026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:22879 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:40.518 [2024-12-03 00:56:52.876055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:22:40.518 [2024-12-03 00:56:52.885106] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd60e0) with pdu=0x2000190f7970 00:22:40.518 [2024-12-03 00:56:52.885644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:3748 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:40.518 [2024-12-03 00:56:52.885684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:22:40.518 [2024-12-03 00:56:52.894708] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd60e0) with pdu=0x2000190dece0 
00:22:40.518 [2024-12-03 00:56:52.895335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:16769 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:40.518 [2024-12-03 00:56:52.895375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:22:40.518 [2024-12-03 00:56:52.904110] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd60e0) with pdu=0x2000190e9e10 00:22:40.518 [2024-12-03 00:56:52.904671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:21610 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:40.518 [2024-12-03 00:56:52.904711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:22:40.518 [2024-12-03 00:56:52.913374] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd60e0) with pdu=0x2000190e88f8 00:22:40.518 [2024-12-03 00:56:52.913902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:4267 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:40.518 [2024-12-03 00:56:52.913930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:22:40.518 [2024-12-03 00:56:52.922675] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd60e0) with pdu=0x2000190e8d30 00:22:40.518 [2024-12-03 00:56:52.923209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:13533 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:40.518 [2024-12-03 00:56:52.923247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:22:40.518 [2024-12-03 00:56:52.932032] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd60e0) with pdu=0x2000190df550 00:22:40.518 [2024-12-03 00:56:52.932506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:6286 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:40.518 [2024-12-03 00:56:52.932534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:22:40.518 [2024-12-03 00:56:52.941276] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd60e0) with pdu=0x2000190dece0 00:22:40.518 [2024-12-03 00:56:52.941728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:13685 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:40.518 [2024-12-03 00:56:52.941756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:40.518 [2024-12-03 00:56:52.950595] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd60e0) with pdu=0x2000190f20d8 00:22:40.518 [2024-12-03 00:56:52.951017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:2441 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:40.518 [2024-12-03 00:56:52.951045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:40.518 [2024-12-03 00:56:52.959816] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x1bd60e0) with pdu=0x2000190fa3a0 00:22:40.518 [2024-12-03 00:56:52.960204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:10350 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:40.518 [2024-12-03 00:56:52.960233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:22:40.518 [2024-12-03 00:56:52.969156] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd60e0) with pdu=0x2000190dfdc0 00:22:40.518 [2024-12-03 00:56:52.969869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:8592 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:40.518 [2024-12-03 00:56:52.969907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:22:40.519 [2024-12-03 00:56:52.978055] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd60e0) with pdu=0x2000190e12d8 00:22:40.519 [2024-12-03 00:56:52.978734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:22328 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:40.519 [2024-12-03 00:56:52.978775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:22:40.519 [2024-12-03 00:56:52.987838] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd60e0) with pdu=0x2000190ef270 00:22:40.519 [2024-12-03 00:56:52.988942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:22055 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:40.519 [2024-12-03 00:56:52.988970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:22:40.519 [2024-12-03 00:56:52.996811] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd60e0) with pdu=0x2000190e3498 00:22:40.519 [2024-12-03 00:56:52.997285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:22501 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:40.519 [2024-12-03 00:56:52.997313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:22:40.519 [2024-12-03 00:56:53.006487] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd60e0) with pdu=0x2000190ec840 00:22:40.519 [2024-12-03 00:56:53.007354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:2379 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:40.519 [2024-12-03 00:56:53.007382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:22:40.519 [2024-12-03 00:56:53.016872] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd60e0) with pdu=0x2000190f81e0 00:22:40.519 [2024-12-03 00:56:53.017565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:8599 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:40.519 [2024-12-03 00:56:53.017606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:40.519 [2024-12-03 00:56:53.026172] tcp.c:2036:data_crc32_calc_done: *ERROR*: 
Data digest error on tqpair=(0x1bd60e0) with pdu=0x2000190e27f0 00:22:40.519 [2024-12-03 00:56:53.026873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:10598 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:40.519 [2024-12-03 00:56:53.026920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:40.778 [2024-12-03 00:56:53.036724] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd60e0) with pdu=0x2000190eb760 00:22:40.778 [2024-12-03 00:56:53.037470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:7162 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:40.778 [2024-12-03 00:56:53.037509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:22:40.778 [2024-12-03 00:56:53.045336] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd60e0) with pdu=0x2000190e27f0 00:22:40.778 [2024-12-03 00:56:53.046107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:11321 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:40.778 [2024-12-03 00:56:53.046148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:22:40.778 [2024-12-03 00:56:53.054902] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd60e0) with pdu=0x2000190f1430 00:22:40.778 [2024-12-03 00:56:53.055475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:12711 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:40.778 [2024-12-03 00:56:53.055514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:22:40.778 [2024-12-03 00:56:53.065099] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd60e0) with pdu=0x2000190e27f0 00:22:40.778 [2024-12-03 00:56:53.065851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:13524 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:40.778 [2024-12-03 00:56:53.065891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:22:40.778 [2024-12-03 00:56:53.073193] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd60e0) with pdu=0x2000190ddc00 00:22:40.778 [2024-12-03 00:56:53.074161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:23963 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:40.778 [2024-12-03 00:56:53.074209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:22:40.778 [2024-12-03 00:56:53.082658] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd60e0) with pdu=0x2000190e95a0 00:22:40.778 [2024-12-03 00:56:53.082961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:4225 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:40.778 [2024-12-03 00:56:53.082990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:22:40.778 [2024-12-03 00:56:53.092263] 
tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd60e0) with pdu=0x2000190ed920 00:22:40.778 [2024-12-03 00:56:53.092714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:1961 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:40.778 [2024-12-03 00:56:53.092739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:22:40.778 [2024-12-03 00:56:53.100438] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd60e0) with pdu=0x2000190ff3c8 00:22:40.778 [2024-12-03 00:56:53.100563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:17075 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:40.778 [2024-12-03 00:56:53.100581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:22:40.778 [2024-12-03 00:56:53.111532] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd60e0) with pdu=0x2000190fbcf0 00:22:40.778 [2024-12-03 00:56:53.112156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:13283 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:40.778 [2024-12-03 00:56:53.112197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:22:40.778 [2024-12-03 00:56:53.120897] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd60e0) with pdu=0x2000190dece0 00:22:40.778 [2024-12-03 00:56:53.121868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:15354 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:40.778 [2024-12-03 00:56:53.121896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:22:40.778 [2024-12-03 00:56:53.130162] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd60e0) with pdu=0x2000190ef270 00:22:40.778 [2024-12-03 00:56:53.130976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:24850 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:40.778 [2024-12-03 00:56:53.131004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:22:40.778 [2024-12-03 00:56:53.139475] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd60e0) with pdu=0x2000190e38d0 00:22:40.778 [2024-12-03 00:56:53.140156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:22513 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:40.778 [2024-12-03 00:56:53.140196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:22:40.778 [2024-12-03 00:56:53.148739] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd60e0) with pdu=0x2000190e6b70 00:22:40.778 [2024-12-03 00:56:53.149405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:14878 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:40.779 [2024-12-03 00:56:53.149452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:22:40.779 
[2024-12-03 00:56:53.157971] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd60e0) with pdu=0x2000190f46d0 00:22:40.779 [2024-12-03 00:56:53.158683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:6700 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:40.779 [2024-12-03 00:56:53.158723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:22:40.779 [2024-12-03 00:56:53.167358] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd60e0) with pdu=0x2000190ef270 00:22:40.779 [2024-12-03 00:56:53.167970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:24197 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:40.779 [2024-12-03 00:56:53.168010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:22:40.779 [2024-12-03 00:56:53.176620] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd60e0) with pdu=0x2000190fc998 00:22:40.779 [2024-12-03 00:56:53.177237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:5677 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:40.779 [2024-12-03 00:56:53.177278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:22:40.779 [2024-12-03 00:56:53.185855] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd60e0) with pdu=0x2000190ddc00 00:22:40.779 [2024-12-03 00:56:53.186460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:13767 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:40.779 [2024-12-03 00:56:53.186503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:22:40.779 [2024-12-03 00:56:53.195149] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd60e0) with pdu=0x2000190e88f8 00:22:40.779 [2024-12-03 00:56:53.195703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:21070 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:40.779 [2024-12-03 00:56:53.195733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:22:40.779 [2024-12-03 00:56:53.204347] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd60e0) with pdu=0x2000190fc998 00:22:40.779 [2024-12-03 00:56:53.205207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:470 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:40.779 [2024-12-03 00:56:53.205235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:22:40.779 [2024-12-03 00:56:53.213940] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd60e0) with pdu=0x2000190fbcf0 00:22:40.779 [2024-12-03 00:56:53.215338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:16823 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:40.779 [2024-12-03 00:56:53.215367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0070 p:0 
m:0 dnr:0 00:22:40.779 [2024-12-03 00:56:53.223375] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd60e0) with pdu=0x2000190fc998 00:22:40.779 [2024-12-03 00:56:53.224579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:136 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:40.779 [2024-12-03 00:56:53.224607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:22:40.779 [2024-12-03 00:56:53.232390] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd60e0) with pdu=0x2000190fc560 00:22:40.779 [2024-12-03 00:56:53.233458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:4424 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:40.779 [2024-12-03 00:56:53.233486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:22:40.779 [2024-12-03 00:56:53.242482] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd60e0) with pdu=0x2000190fef90 00:22:40.779 [2024-12-03 00:56:53.243381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:2056 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:40.779 [2024-12-03 00:56:53.243407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:22:40.779 [2024-12-03 00:56:53.250668] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd60e0) with pdu=0x2000190e6b70 00:22:40.779 [2024-12-03 00:56:53.251575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:2958 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:40.779 [2024-12-03 00:56:53.251602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:40.779 [2024-12-03 00:56:53.259946] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd60e0) with pdu=0x2000190eaef0 00:22:40.779 [2024-12-03 00:56:53.260222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:25152 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:40.779 [2024-12-03 00:56:53.260250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:22:40.779 [2024-12-03 00:56:53.269097] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd60e0) with pdu=0x2000190e23b8 00:22:40.779 [2024-12-03 00:56:53.270041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:22449 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:40.779 [2024-12-03 00:56:53.270069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:22:40.779 [2024-12-03 00:56:53.278551] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd60e0) with pdu=0x2000190e99d8 00:22:40.779 [2024-12-03 00:56:53.278816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:16206 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:40.779 [2024-12-03 00:56:53.278841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 
cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:40.779 [2024-12-03 00:56:53.288267] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd60e0) with pdu=0x2000190e5658 00:22:40.779 [2024-12-03 00:56:53.288698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:25443 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:40.779 [2024-12-03 00:56:53.288735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:22:41.039 [2024-12-03 00:56:53.298359] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd60e0) with pdu=0x2000190f4f40 00:22:41.039 [2024-12-03 00:56:53.298844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:12023 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:41.039 [2024-12-03 00:56:53.298874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:41.039 [2024-12-03 00:56:53.307614] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd60e0) with pdu=0x2000190f5be8 00:22:41.039 [2024-12-03 00:56:53.308000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:8145 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:41.039 [2024-12-03 00:56:53.308029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:41.039 [2024-12-03 00:56:53.318487] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd60e0) with pdu=0x2000190eff18 00:22:41.039 [2024-12-03 00:56:53.319776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:21774 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:41.039 [2024-12-03 00:56:53.319817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:41.039 [2024-12-03 00:56:53.327319] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd60e0) with pdu=0x2000190edd58 00:22:41.039 [2024-12-03 00:56:53.328148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:5525 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:41.039 [2024-12-03 00:56:53.328192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:22:41.039 [2024-12-03 00:56:53.338822] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd60e0) with pdu=0x2000190e23b8 00:22:41.039 [2024-12-03 00:56:53.339402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:8426 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:41.039 [2024-12-03 00:56:53.339464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:22:41.039 [2024-12-03 00:56:53.349265] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd60e0) with pdu=0x2000190fda78 00:22:41.039 [2024-12-03 00:56:53.349840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:25504 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:41.039 [2024-12-03 00:56:53.349879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:56 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:22:41.039 [2024-12-03 00:56:53.359231] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd60e0) with pdu=0x2000190ec408 00:22:41.039 [2024-12-03 00:56:53.359463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:5080 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:41.039 [2024-12-03 00:56:53.359481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:22:41.039 [2024-12-03 00:56:53.368826] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd60e0) with pdu=0x2000190f6458 00:22:41.039 [2024-12-03 00:56:53.369857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:22834 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:41.039 [2024-12-03 00:56:53.369896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:22:41.039 [2024-12-03 00:56:53.378470] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd60e0) with pdu=0x2000190fac10 00:22:41.039 [2024-12-03 00:56:53.378770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:8837 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:41.039 [2024-12-03 00:56:53.378800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:22:41.039 [2024-12-03 00:56:53.387757] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd60e0) with pdu=0x2000190e9e10 00:22:41.040 [2024-12-03 00:56:53.388000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:19140 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:41.040 [2024-12-03 00:56:53.388030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:22:41.040 [2024-12-03 00:56:53.397011] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd60e0) with pdu=0x2000190e3d08 00:22:41.040 [2024-12-03 00:56:53.397250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:22898 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:41.040 [2024-12-03 00:56:53.397270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:22:41.040 [2024-12-03 00:56:53.406382] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd60e0) with pdu=0x2000190f6890 00:22:41.040 [2024-12-03 00:56:53.406646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:19963 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:41.040 [2024-12-03 00:56:53.406676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:22:41.040 [2024-12-03 00:56:53.415669] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd60e0) with pdu=0x2000190f9b30 00:22:41.040 [2024-12-03 00:56:53.415916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:1123 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:41.040 [2024-12-03 00:56:53.415970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:22:41.040 [2024-12-03 00:56:53.426648] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd60e0) with pdu=0x2000190eff18 00:22:41.040 [2024-12-03 00:56:53.428138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:8341 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:41.040 [2024-12-03 00:56:53.428167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:41.040 [2024-12-03 00:56:53.436193] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd60e0) with pdu=0x2000190ec408 00:22:41.040 [2024-12-03 00:56:53.437872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:10446 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:41.040 [2024-12-03 00:56:53.437900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:22:41.040 [2024-12-03 00:56:53.445580] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd60e0) with pdu=0x2000190df550 00:22:41.040 [2024-12-03 00:56:53.447004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:1631 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:41.040 [2024-12-03 00:56:53.447034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:22:41.040 [2024-12-03 00:56:53.455036] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd60e0) with pdu=0x2000190f6890 00:22:41.040 [2024-12-03 00:56:53.456385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:21326 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:41.040 [2024-12-03 00:56:53.456421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:22:41.040 [2024-12-03 00:56:53.464458] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd60e0) with pdu=0x2000190f4f40 00:22:41.040 [2024-12-03 00:56:53.465315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:16924 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:41.040 [2024-12-03 00:56:53.465343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:22:41.040 [2024-12-03 00:56:53.472977] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd60e0) with pdu=0x2000190f4f40 00:22:41.040 [2024-12-03 00:56:53.473609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:4113 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:41.040 [2024-12-03 00:56:53.473659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:22:41.040 [2024-12-03 00:56:53.482348] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd60e0) with pdu=0x2000190df550 00:22:41.040 [2024-12-03 00:56:53.482883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:17790 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:41.040 [2024-12-03 00:56:53.482924] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:22:41.040 [2024-12-03 00:56:53.491647] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd60e0) with pdu=0x2000190f1ca0 00:22:41.040 [2024-12-03 00:56:53.492172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:9796 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:41.040 [2024-12-03 00:56:53.492202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:22:41.040 [2024-12-03 00:56:53.500893] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd60e0) with pdu=0x2000190fa3a0 00:22:41.040 [2024-12-03 00:56:53.501462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:20494 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:41.040 [2024-12-03 00:56:53.501491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:22:41.040 [2024-12-03 00:56:53.510105] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd60e0) with pdu=0x2000190ec408 00:22:41.040 [2024-12-03 00:56:53.510693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:20108 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:41.040 [2024-12-03 00:56:53.510726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:22:41.040 [2024-12-03 00:56:53.519525] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd60e0) with pdu=0x2000190f1868 00:22:41.040 [2024-12-03 00:56:53.520014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:16678 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:41.040 [2024-12-03 00:56:53.520043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:22:41.040 [2024-12-03 00:56:53.529151] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd60e0) with pdu=0x2000190df118 00:22:41.040 [2024-12-03 00:56:53.529703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:6256 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:41.040 [2024-12-03 00:56:53.529732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:22:41.040 [2024-12-03 00:56:53.542464] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd60e0) with pdu=0x2000190f2948 00:22:41.040 [2024-12-03 00:56:53.543738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:1237 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:41.040 [2024-12-03 00:56:53.543778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:22:41.040 [2024-12-03 00:56:53.552348] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd60e0) with pdu=0x2000190e12d8 00:22:41.300 [2024-12-03 00:56:53.553019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:3782 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:41.300 [2024-12-03 
00:56:53.553069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:22:41.300 [2024-12-03 00:56:53.561770] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd60e0) with pdu=0x2000190e99d8 00:22:41.300 [2024-12-03 00:56:53.562436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:17368 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:41.300 [2024-12-03 00:56:53.562478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:22:41.300 [2024-12-03 00:56:53.571652] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd60e0) with pdu=0x2000190e99d8 00:22:41.300 [2024-12-03 00:56:53.572540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:10028 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:41.300 [2024-12-03 00:56:53.572578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:22:41.300 [2024-12-03 00:56:53.582724] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd60e0) with pdu=0x2000190eff18 00:22:41.300 [2024-12-03 00:56:53.583438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:7815 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:41.300 [2024-12-03 00:56:53.583491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:22:41.300 [2024-12-03 00:56:53.593430] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd60e0) with pdu=0x2000190e4140 00:22:41.300 [2024-12-03 00:56:53.594164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:112 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:41.300 [2024-12-03 00:56:53.594212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:22:41.300 [2024-12-03 00:56:53.603965] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd60e0) with pdu=0x2000190e3d08 00:22:41.300 [2024-12-03 00:56:53.604685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:16067 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:41.300 [2024-12-03 00:56:53.604725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:22:41.300 [2024-12-03 00:56:53.614391] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd60e0) with pdu=0x2000190ebb98 00:22:41.300 [2024-12-03 00:56:53.615143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:4 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:41.300 [2024-12-03 00:56:53.615171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:22:41.300 [2024-12-03 00:56:53.624278] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd60e0) with pdu=0x2000190f6890 00:22:41.300 [2024-12-03 00:56:53.625171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:17643 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:22:41.300 [2024-12-03 00:56:53.625212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:22:41.300 [2024-12-03 00:56:53.634519] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd60e0) with pdu=0x2000190df550 00:22:41.300 [2024-12-03 00:56:53.635251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:20078 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:41.300 [2024-12-03 00:56:53.635290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:22:41.300 [2024-12-03 00:56:53.644777] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd60e0) with pdu=0x2000190f0350 00:22:41.300 [2024-12-03 00:56:53.645872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:20423 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:41.300 [2024-12-03 00:56:53.645910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:22:41.300 [2024-12-03 00:56:53.654481] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd60e0) with pdu=0x2000190e8088 00:22:41.300 [2024-12-03 00:56:53.654959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:1718 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:41.300 [2024-12-03 00:56:53.654987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:22:41.300 [2024-12-03 00:56:53.663890] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd60e0) with pdu=0x2000190f3a28 00:22:41.300 [2024-12-03 00:56:53.664928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:16249 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:41.301 [2024-12-03 00:56:53.664965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:22:41.301 [2024-12-03 00:56:53.673820] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd60e0) with pdu=0x2000190de8a8 00:22:41.301 [2024-12-03 00:56:53.674275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:14548 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:41.301 [2024-12-03 00:56:53.674304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:22:41.301 [2024-12-03 00:56:53.683478] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd60e0) with pdu=0x2000190e0ea0 00:22:41.301 [2024-12-03 00:56:53.684523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:3955 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:41.301 [2024-12-03 00:56:53.684563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:22:41.301 [2024-12-03 00:56:53.693253] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd60e0) with pdu=0x2000190e3d08 00:22:41.301 [2024-12-03 00:56:53.693685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:15146 len:1 SGL DATA 
BLOCK OFFSET 0x0 len:0x1000 00:22:41.301 [2024-12-03 00:56:53.693716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:22:41.301 [2024-12-03 00:56:53.703023] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd60e0) with pdu=0x2000190f92c0 00:22:41.301 [2024-12-03 00:56:53.703947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:23387 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:41.301 [2024-12-03 00:56:53.703976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:22:41.301 [2024-12-03 00:56:53.712818] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd60e0) with pdu=0x2000190f2510 00:22:41.301 [2024-12-03 00:56:53.714068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:19174 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:41.301 [2024-12-03 00:56:53.714108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:22:41.301 [2024-12-03 00:56:53.722661] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd60e0) with pdu=0x2000190e1710 00:22:41.301 [2024-12-03 00:56:53.723810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:5893 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:41.301 [2024-12-03 00:56:53.723851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:22:41.301 [2024-12-03 00:56:53.732219] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd60e0) with pdu=0x2000190fa7d8 00:22:41.301 [2024-12-03 00:56:53.732464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:10108 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:41.301 [2024-12-03 00:56:53.732495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:22:41.301 [2024-12-03 00:56:53.741750] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd60e0) with pdu=0x2000190e3d08 00:22:41.301 [2024-12-03 00:56:53.742899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:24933 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:41.301 [2024-12-03 00:56:53.742940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:22:41.301 [2024-12-03 00:56:53.753105] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd60e0) with pdu=0x2000190e73e0 00:22:41.301 [2024-12-03 00:56:53.754693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:278 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:41.301 [2024-12-03 00:56:53.754734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:22:41.301 [2024-12-03 00:56:53.763412] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd60e0) with pdu=0x2000190f6890 00:22:41.301 [2024-12-03 00:56:53.764504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 
lba:19056 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:41.301 [2024-12-03 00:56:53.764544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:22:41.301 [2024-12-03 00:56:53.772220] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd60e0) with pdu=0x2000190fef90 00:22:41.301 [2024-12-03 00:56:53.773140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:19962 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:41.301 [2024-12-03 00:56:53.773179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:41.301 [2024-12-03 00:56:53.781966] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd60e0) with pdu=0x2000190f1430 00:22:41.301 [2024-12-03 00:56:53.782956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:5151 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:41.301 [2024-12-03 00:56:53.782999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:41.301 [2024-12-03 00:56:53.792114] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd60e0) with pdu=0x2000190f35f0 00:22:41.301 [2024-12-03 00:56:53.792958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:21255 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:41.301 [2024-12-03 00:56:53.792999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:22:41.301 [2024-12-03 00:56:53.801101] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd60e0) with pdu=0x2000190eff18 00:22:41.301 [2024-12-03 00:56:53.801906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:13905 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:41.301 [2024-12-03 00:56:53.801936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:22:41.301 [2024-12-03 00:56:53.810298] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd60e0) with pdu=0x2000190f6458 00:22:41.301 [2024-12-03 00:56:53.810517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:16157 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:41.301 [2024-12-03 00:56:53.810536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:22:41.561 [2024-12-03 00:56:53.821695] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd60e0) with pdu=0x2000190dece0 00:22:41.561 [2024-12-03 00:56:53.822571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:449 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:41.561 [2024-12-03 00:56:53.822614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:22:41.561 [2024-12-03 00:56:53.832005] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd60e0) with pdu=0x2000190ed920 00:22:41.561 [2024-12-03 00:56:53.833015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:26 nsid:1 lba:2447 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:41.561 [2024-12-03 00:56:53.833043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:22:41.561 [2024-12-03 00:56:53.840425] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd60e0) with pdu=0x2000190f1868 00:22:41.561 [2024-12-03 00:56:53.841197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:22007 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:41.561 [2024-12-03 00:56:53.841237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:22:41.561 [2024-12-03 00:56:53.851273] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd60e0) with pdu=0x2000190ea680 00:22:41.561 [2024-12-03 00:56:53.852778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:13201 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:41.561 [2024-12-03 00:56:53.852828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:22:41.561 [2024-12-03 00:56:53.860617] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd60e0) with pdu=0x2000190dece0 00:22:41.561 [2024-12-03 00:56:53.862123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:25141 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:41.561 [2024-12-03 00:56:53.862152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:22:41.561 [2024-12-03 00:56:53.868822] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd60e0) with pdu=0x2000190f81e0 00:22:41.561 [2024-12-03 00:56:53.869464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:16560 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:41.561 [2024-12-03 00:56:53.869502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:22:41.561 [2024-12-03 00:56:53.878016] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd60e0) with pdu=0x2000190f9b30 00:22:41.561 [2024-12-03 00:56:53.879175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:21860 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:41.561 [2024-12-03 00:56:53.879206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:22:41.561 [2024-12-03 00:56:53.887712] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd60e0) with pdu=0x2000190eee38 00:22:41.561 [2024-12-03 00:56:53.888167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:6707 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:41.561 [2024-12-03 00:56:53.888196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:22:41.561 [2024-12-03 00:56:53.896973] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd60e0) with pdu=0x2000190f3a28 00:22:41.561 [2024-12-03 00:56:53.898052] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:1295 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:41.561 [2024-12-03 00:56:53.898081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:22:41.561 [2024-12-03 00:56:53.906560] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd60e0) with pdu=0x2000190e1f80 00:22:41.561 [2024-12-03 00:56:53.907044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:9629 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:41.561 [2024-12-03 00:56:53.907074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:22:41.561 [2024-12-03 00:56:53.916176] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd60e0) with pdu=0x2000190f35f0 00:22:41.561 [2024-12-03 00:56:53.917160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:23365 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:41.561 [2024-12-03 00:56:53.917189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:22:41.561 [2024-12-03 00:56:53.924650] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd60e0) with pdu=0x2000190fe2e8 00:22:41.561 [2024-12-03 00:56:53.924932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:10766 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:41.561 [2024-12-03 00:56:53.924967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:22:41.561 [2024-12-03 00:56:53.934783] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd60e0) with pdu=0x2000190f46d0 00:22:41.561 [2024-12-03 00:56:53.935909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:14232 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:41.561 [2024-12-03 00:56:53.935939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:22:41.561 [2024-12-03 00:56:53.944321] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd60e0) with pdu=0x2000190f31b8 00:22:41.561 [2024-12-03 00:56:53.945141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:6331 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:41.561 [2024-12-03 00:56:53.945169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:22:41.561 [2024-12-03 00:56:53.953761] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd60e0) with pdu=0x2000190f7da8 00:22:41.561 [2024-12-03 00:56:53.954426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:16124 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:41.561 [2024-12-03 00:56:53.954478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:22:41.561 [2024-12-03 00:56:53.963194] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd60e0) with pdu=0x2000190fd640 00:22:41.561 [2024-12-03 
00:56:53.963796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:24667 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:41.561 [2024-12-03 00:56:53.963834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:22:41.561 [2024-12-03 00:56:53.972440] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd60e0) with pdu=0x2000190f7538 00:22:41.561 [2024-12-03 00:56:53.972966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:21191 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:41.561 [2024-12-03 00:56:53.972996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:22:41.561 [2024-12-03 00:56:53.981639] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd60e0) with pdu=0x2000190eea00 00:22:41.561 [2024-12-03 00:56:53.982135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:9115 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:41.561 [2024-12-03 00:56:53.982164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:22:41.561 [2024-12-03 00:56:53.991088] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd60e0) with pdu=0x2000190e1b48 00:22:41.561 [2024-12-03 00:56:53.991577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:22166 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:41.561 [2024-12-03 00:56:53.991607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:22:41.562 [2024-12-03 00:56:54.000210] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd60e0) with pdu=0x2000190f7da8 00:22:41.562 [2024-12-03 00:56:54.000685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:14985 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:41.562 [2024-12-03 00:56:54.000715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:22:41.562 [2024-12-03 00:56:54.009430] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd60e0) with pdu=0x2000190ee5c8 00:22:41.562 [2024-12-03 00:56:54.009850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:16708 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:41.562 [2024-12-03 00:56:54.009879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:41.562 [2024-12-03 00:56:54.018653] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd60e0) with pdu=0x2000190f96f8 00:22:41.562 [2024-12-03 00:56:54.019091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:6916 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:41.562 [2024-12-03 00:56:54.019120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:41.562 [2024-12-03 00:56:54.027765] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd60e0) with pdu=0x2000190eee38 
00:22:41.562 [2024-12-03 00:56:54.028481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16596 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:41.562 [2024-12-03 00:56:54.028521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:22:41.562 [2024-12-03 00:56:54.038599] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd60e0) with pdu=0x2000190ea680 00:22:41.562 [2024-12-03 00:56:54.039194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:23601 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:41.562 [2024-12-03 00:56:54.039219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:41.562 [2024-12-03 00:56:54.050884] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd60e0) with pdu=0x2000190f2510 00:22:41.562 [2024-12-03 00:56:54.051258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:358 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:41.562 [2024-12-03 00:56:54.051281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:22:41.562 [2024-12-03 00:56:54.061862] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd60e0) with pdu=0x2000190f7970 00:22:41.562 [2024-12-03 00:56:54.062471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:15592 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:41.562 [2024-12-03 00:56:54.062509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:22:41.562 [2024-12-03 00:56:54.071493] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd60e0) with pdu=0x2000190e84c0 00:22:41.562 [2024-12-03 00:56:54.072703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:24357 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:41.562 [2024-12-03 00:56:54.072745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:22:41.821 00:22:41.821 Latency(us) 00:22:41.821 [2024-12-03T00:56:54.336Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:41.821 [2024-12-03T00:56:54.336Z] Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:22:41.821 nvme0n1 : 2.00 26272.36 102.63 0.00 0.00 4867.14 1861.82 16086.11 00:22:41.821 [2024-12-03T00:56:54.336Z] =================================================================================================================== 00:22:41.821 [2024-12-03T00:56:54.336Z] Total : 26272.36 102.63 0.00 0.00 4867.14 1861.82 16086.11 00:22:41.821 0 00:22:41.821 00:56:54 -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:22:41.821 00:56:54 -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:22:41.821 00:56:54 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:22:41.821 00:56:54 -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:22:41.821 | .driver_specific 00:22:41.821 | .nvme_error 00:22:41.821 | .status_code 00:22:41.821 | .command_transient_transport_error' 00:22:42.080 00:56:54 -- 
host/digest.sh@71 -- # (( 206 > 0 )) 00:22:42.080 00:56:54 -- host/digest.sh@73 -- # killprocess 98019 00:22:42.080 00:56:54 -- common/autotest_common.sh@936 -- # '[' -z 98019 ']' 00:22:42.080 00:56:54 -- common/autotest_common.sh@940 -- # kill -0 98019 00:22:42.080 00:56:54 -- common/autotest_common.sh@941 -- # uname 00:22:42.080 00:56:54 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:22:42.080 00:56:54 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 98019 00:22:42.080 00:56:54 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:22:42.080 00:56:54 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:22:42.080 killing process with pid 98019 00:22:42.080 00:56:54 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 98019' 00:22:42.080 00:56:54 -- common/autotest_common.sh@955 -- # kill 98019 00:22:42.080 Received shutdown signal, test time was about 2.000000 seconds 00:22:42.080 00:22:42.080 Latency(us) 00:22:42.080 [2024-12-03T00:56:54.595Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:42.080 [2024-12-03T00:56:54.595Z] =================================================================================================================== 00:22:42.080 [2024-12-03T00:56:54.595Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:22:42.080 00:56:54 -- common/autotest_common.sh@960 -- # wait 98019 00:22:42.339 00:56:54 -- host/digest.sh@114 -- # run_bperf_err randwrite 131072 16 00:22:42.339 00:56:54 -- host/digest.sh@54 -- # local rw bs qd 00:22:42.339 00:56:54 -- host/digest.sh@56 -- # rw=randwrite 00:22:42.339 00:56:54 -- host/digest.sh@56 -- # bs=131072 00:22:42.339 00:56:54 -- host/digest.sh@56 -- # qd=16 00:22:42.339 00:56:54 -- host/digest.sh@58 -- # bperfpid=98105 00:22:42.339 00:56:54 -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z 00:22:42.339 00:56:54 -- host/digest.sh@60 -- # waitforlisten 98105 /var/tmp/bperf.sock 00:22:42.339 00:56:54 -- common/autotest_common.sh@829 -- # '[' -z 98105 ']' 00:22:42.339 00:56:54 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:22:42.339 00:56:54 -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:42.339 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:22:42.339 00:56:54 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:22:42.339 00:56:54 -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:42.339 00:56:54 -- common/autotest_common.sh@10 -- # set +x 00:22:42.339 I/O size of 131072 is greater than zero copy threshold (65536). 00:22:42.339 Zero copy mechanism will not be used. 00:22:42.339 [2024-12-03 00:56:54.674552] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:22:42.339 [2024-12-03 00:56:54.674648] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid98105 ] 00:22:42.339 [2024-12-03 00:56:54.807212] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:42.597 [2024-12-03 00:56:54.868270] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:22:43.162 00:56:55 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:43.162 00:56:55 -- common/autotest_common.sh@862 -- # return 0 00:22:43.162 00:56:55 -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:22:43.162 00:56:55 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:22:43.420 00:56:55 -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:22:43.420 00:56:55 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:43.420 00:56:55 -- common/autotest_common.sh@10 -- # set +x 00:22:43.420 00:56:55 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:43.420 00:56:55 -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:22:43.420 00:56:55 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:22:43.679 nvme0n1 00:22:43.679 00:56:56 -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:22:43.679 00:56:56 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:43.679 00:56:56 -- common/autotest_common.sh@10 -- # set +x 00:22:43.679 00:56:56 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:43.679 00:56:56 -- host/digest.sh@69 -- # bperf_py perform_tests 00:22:43.679 00:56:56 -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:22:43.938 I/O size of 131072 is greater than zero copy threshold (65536). 00:22:43.938 Zero copy mechanism will not be used. 00:22:43.939 Running I/O for 2 seconds... 
00:22:43.939 [2024-12-03 00:56:56.264738] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd6280) with pdu=0x2000190fef90 00:22:43.939 [2024-12-03 00:56:56.265080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.939 [2024-12-03 00:56:56.265129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:43.939 [2024-12-03 00:56:56.269171] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd6280) with pdu=0x2000190fef90 00:22:43.939 [2024-12-03 00:56:56.269466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.939 [2024-12-03 00:56:56.269512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:43.939 [2024-12-03 00:56:56.273581] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd6280) with pdu=0x2000190fef90 00:22:43.939 [2024-12-03 00:56:56.273700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.939 [2024-12-03 00:56:56.273722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:43.939 [2024-12-03 00:56:56.277869] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd6280) with pdu=0x2000190fef90 00:22:43.939 [2024-12-03 00:56:56.278006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.939 [2024-12-03 00:56:56.278027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:43.939 [2024-12-03 00:56:56.282206] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd6280) with pdu=0x2000190fef90 00:22:43.939 [2024-12-03 00:56:56.282343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.939 [2024-12-03 00:56:56.282363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:43.939 [2024-12-03 00:56:56.286645] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd6280) with pdu=0x2000190fef90 00:22:43.939 [2024-12-03 00:56:56.286739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.939 [2024-12-03 00:56:56.286760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:43.939 [2024-12-03 00:56:56.291279] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd6280) with pdu=0x2000190fef90 00:22:43.939 [2024-12-03 00:56:56.291452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.939 [2024-12-03 00:56:56.291474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:43.939 [2024-12-03 00:56:56.295770] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd6280) with pdu=0x2000190fef90 00:22:43.939 [2024-12-03 00:56:56.295982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.939 [2024-12-03 00:56:56.296014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:43.939 [2024-12-03 00:56:56.300105] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd6280) with pdu=0x2000190fef90 00:22:43.939 [2024-12-03 00:56:56.300274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.939 [2024-12-03 00:56:56.300294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:43.939 [2024-12-03 00:56:56.304533] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd6280) with pdu=0x2000190fef90 00:22:43.939 [2024-12-03 00:56:56.304656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.939 [2024-12-03 00:56:56.304676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:43.939 [2024-12-03 00:56:56.308863] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd6280) with pdu=0x2000190fef90 00:22:43.939 [2024-12-03 00:56:56.308976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.939 [2024-12-03 00:56:56.308997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:43.939 [2024-12-03 00:56:56.313194] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd6280) with pdu=0x2000190fef90 00:22:43.939 [2024-12-03 00:56:56.313290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.939 [2024-12-03 00:56:56.313310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:43.939 [2024-12-03 00:56:56.317499] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd6280) with pdu=0x2000190fef90 00:22:43.939 [2024-12-03 00:56:56.317594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.939 [2024-12-03 00:56:56.317616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:43.939 [2024-12-03 00:56:56.321849] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd6280) with pdu=0x2000190fef90 00:22:43.939 [2024-12-03 00:56:56.321982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.939 [2024-12-03 00:56:56.322003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:43.939 [2024-12-03 00:56:56.326144] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd6280) with pdu=0x2000190fef90 00:22:43.939 [2024-12-03 00:56:56.326305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.939 [2024-12-03 00:56:56.326325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:43.939 [2024-12-03 00:56:56.330861] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd6280) with pdu=0x2000190fef90 00:22:43.939 [2024-12-03 00:56:56.331071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.939 [2024-12-03 00:56:56.331091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:43.939 [2024-12-03 00:56:56.335250] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd6280) with pdu=0x2000190fef90 00:22:43.939 [2024-12-03 00:56:56.335485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.939 [2024-12-03 00:56:56.335505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:43.939 [2024-12-03 00:56:56.339836] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd6280) with pdu=0x2000190fef90 00:22:43.939 [2024-12-03 00:56:56.340016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.939 [2024-12-03 00:56:56.340036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:43.939 [2024-12-03 00:56:56.344223] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd6280) with pdu=0x2000190fef90 00:22:43.939 [2024-12-03 00:56:56.344373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.939 [2024-12-03 00:56:56.344393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:43.939 [2024-12-03 00:56:56.348601] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd6280) with pdu=0x2000190fef90 00:22:43.939 [2024-12-03 00:56:56.348699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.939 [2024-12-03 00:56:56.348719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:43.939 [2024-12-03 00:56:56.352934] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd6280) with pdu=0x2000190fef90 00:22:43.939 [2024-12-03 00:56:56.353030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.939 [2024-12-03 00:56:56.353050] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:43.939 [2024-12-03 00:56:56.357322] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd6280) with pdu=0x2000190fef90 00:22:43.939 [2024-12-03 00:56:56.357468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.939 [2024-12-03 00:56:56.357488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:43.939 [2024-12-03 00:56:56.361661] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd6280) with pdu=0x2000190fef90 00:22:43.939 [2024-12-03 00:56:56.361877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.939 [2024-12-03 00:56:56.361908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:43.939 [2024-12-03 00:56:56.366315] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd6280) with pdu=0x2000190fef90 00:22:43.939 [2024-12-03 00:56:56.366556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.939 [2024-12-03 00:56:56.366592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:43.939 [2024-12-03 00:56:56.370889] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd6280) with pdu=0x2000190fef90 00:22:43.939 [2024-12-03 00:56:56.371153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.939 [2024-12-03 00:56:56.371182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:43.939 [2024-12-03 00:56:56.375310] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd6280) with pdu=0x2000190fef90 00:22:43.939 [2024-12-03 00:56:56.375532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.939 [2024-12-03 00:56:56.375552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:43.940 [2024-12-03 00:56:56.379909] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd6280) with pdu=0x2000190fef90 00:22:43.940 [2024-12-03 00:56:56.380029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.940 [2024-12-03 00:56:56.380051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:43.940 [2024-12-03 00:56:56.384603] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd6280) with pdu=0x2000190fef90 00:22:43.940 [2024-12-03 00:56:56.384708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.940 
[2024-12-03 00:56:56.384727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:43.940 [2024-12-03 00:56:56.389590] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd6280) with pdu=0x2000190fef90 00:22:43.940 [2024-12-03 00:56:56.389692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.940 [2024-12-03 00:56:56.389712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:43.940 [2024-12-03 00:56:56.394374] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd6280) with pdu=0x2000190fef90 00:22:43.940 [2024-12-03 00:56:56.394544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.940 [2024-12-03 00:56:56.394565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:43.940 [2024-12-03 00:56:56.399439] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd6280) with pdu=0x2000190fef90 00:22:43.940 [2024-12-03 00:56:56.399666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.940 [2024-12-03 00:56:56.399687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:43.940 [2024-12-03 00:56:56.404334] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd6280) with pdu=0x2000190fef90 00:22:43.940 [2024-12-03 00:56:56.404549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.940 [2024-12-03 00:56:56.404570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:43.940 [2024-12-03 00:56:56.409102] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd6280) with pdu=0x2000190fef90 00:22:43.940 [2024-12-03 00:56:56.409275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.940 [2024-12-03 00:56:56.409295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:43.940 [2024-12-03 00:56:56.413925] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd6280) with pdu=0x2000190fef90 00:22:43.940 [2024-12-03 00:56:56.414068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.940 [2024-12-03 00:56:56.414088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:43.940 [2024-12-03 00:56:56.418624] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd6280) with pdu=0x2000190fef90 00:22:43.940 [2024-12-03 00:56:56.418735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6848 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.940 [2024-12-03 00:56:56.418776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:43.940 [2024-12-03 00:56:56.423324] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd6280) with pdu=0x2000190fef90 00:22:43.940 [2024-12-03 00:56:56.423434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.940 [2024-12-03 00:56:56.423455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:43.940 [2024-12-03 00:56:56.427792] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd6280) with pdu=0x2000190fef90 00:22:43.940 [2024-12-03 00:56:56.427887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.940 [2024-12-03 00:56:56.427908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:43.940 [2024-12-03 00:56:56.432185] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd6280) with pdu=0x2000190fef90 00:22:43.940 [2024-12-03 00:56:56.432357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.940 [2024-12-03 00:56:56.432377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:43.940 [2024-12-03 00:56:56.436570] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd6280) with pdu=0x2000190fef90 00:22:43.940 [2024-12-03 00:56:56.436762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.940 [2024-12-03 00:56:56.436793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:43.940 [2024-12-03 00:56:56.441126] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd6280) with pdu=0x2000190fef90 00:22:43.940 [2024-12-03 00:56:56.441320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.940 [2024-12-03 00:56:56.441340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:43.940 [2024-12-03 00:56:56.445411] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd6280) with pdu=0x2000190fef90 00:22:43.940 [2024-12-03 00:56:56.445671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.940 [2024-12-03 00:56:56.445732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:43.940 [2024-12-03 00:56:56.450033] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd6280) with pdu=0x2000190fef90 00:22:43.940 [2024-12-03 00:56:56.450311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:15 nsid:1 lba:13472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.940 [2024-12-03 00:56:56.450344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:44.200 [2024-12-03 00:56:56.454809] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd6280) with pdu=0x2000190fef90 00:22:44.200 [2024-12-03 00:56:56.454958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:44.200 [2024-12-03 00:56:56.454978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:44.200 [2024-12-03 00:56:56.459390] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd6280) with pdu=0x2000190fef90 00:22:44.200 [2024-12-03 00:56:56.459498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:44.200 [2024-12-03 00:56:56.459518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:44.200 [2024-12-03 00:56:56.463620] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd6280) with pdu=0x2000190fef90 00:22:44.200 [2024-12-03 00:56:56.463725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:44.200 [2024-12-03 00:56:56.463745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:44.200 [2024-12-03 00:56:56.468018] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd6280) with pdu=0x2000190fef90 00:22:44.200 [2024-12-03 00:56:56.468164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:44.200 [2024-12-03 00:56:56.468185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:44.200 [2024-12-03 00:56:56.472357] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd6280) with pdu=0x2000190fef90 00:22:44.200 [2024-12-03 00:56:56.472605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:44.200 [2024-12-03 00:56:56.472626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:44.200 [2024-12-03 00:56:56.476947] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd6280) with pdu=0x2000190fef90 00:22:44.200 [2024-12-03 00:56:56.477157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:44.200 [2024-12-03 00:56:56.477177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:44.200 [2024-12-03 00:56:56.481300] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd6280) with pdu=0x2000190fef90 00:22:44.200 [2024-12-03 00:56:56.481499] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:44.200 [2024-12-03 00:56:56.481519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:44.200 [2024-12-03 00:56:56.485678] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd6280) with pdu=0x2000190fef90 00:22:44.200 [2024-12-03 00:56:56.485811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:44.200 [2024-12-03 00:56:56.485832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:44.200 [2024-12-03 00:56:56.490173] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd6280) with pdu=0x2000190fef90 00:22:44.200 [2024-12-03 00:56:56.490305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:44.200 [2024-12-03 00:56:56.490325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:44.200 [2024-12-03 00:56:56.494718] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd6280) with pdu=0x2000190fef90 00:22:44.200 [2024-12-03 00:56:56.494831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:44.200 [2024-12-03 00:56:56.494851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:44.200 [2024-12-03 00:56:56.499058] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd6280) with pdu=0x2000190fef90 00:22:44.200 [2024-12-03 00:56:56.499169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:44.200 [2024-12-03 00:56:56.499190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:44.200 [2024-12-03 00:56:56.503472] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd6280) with pdu=0x2000190fef90 00:22:44.200 [2024-12-03 00:56:56.503655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:44.200 [2024-12-03 00:56:56.503675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:44.200 [2024-12-03 00:56:56.507832] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd6280) with pdu=0x2000190fef90 00:22:44.200 [2024-12-03 00:56:56.508029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:44.201 [2024-12-03 00:56:56.508049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:44.201 [2024-12-03 00:56:56.512206] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd6280) with pdu=0x2000190fef90 00:22:44.201 
[2024-12-03 00:56:56.512386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:44.201 [2024-12-03 00:56:56.512406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:44.201 [2024-12-03 00:56:56.516507] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd6280) with pdu=0x2000190fef90 00:22:44.201 [2024-12-03 00:56:56.516724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:44.201 [2024-12-03 00:56:56.516756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:44.201 [2024-12-03 00:56:56.520854] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd6280) with pdu=0x2000190fef90 00:22:44.201 [2024-12-03 00:56:56.521005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:44.201 [2024-12-03 00:56:56.521025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:44.201 [2024-12-03 00:56:56.525272] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd6280) with pdu=0x2000190fef90 00:22:44.201 [2024-12-03 00:56:56.525438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:44.201 [2024-12-03 00:56:56.525459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:44.201 [2024-12-03 00:56:56.529662] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd6280) with pdu=0x2000190fef90 00:22:44.201 [2024-12-03 00:56:56.529796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:44.201 [2024-12-03 00:56:56.529816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:44.201 [2024-12-03 00:56:56.534177] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd6280) with pdu=0x2000190fef90 00:22:44.201 [2024-12-03 00:56:56.534346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:44.201 [2024-12-03 00:56:56.534366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:44.201 [2024-12-03 00:56:56.538553] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd6280) with pdu=0x2000190fef90 00:22:44.201 [2024-12-03 00:56:56.538719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:44.201 [2024-12-03 00:56:56.538740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:44.201 [2024-12-03 00:56:56.542949] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x1bd6280) with pdu=0x2000190fef90 00:22:44.201 [2024-12-03 00:56:56.543155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:44.201 [2024-12-03 00:56:56.543175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:44.201 [2024-12-03 00:56:56.547367] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd6280) with pdu=0x2000190fef90 00:22:44.201 [2024-12-03 00:56:56.547573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:44.201 [2024-12-03 00:56:56.547594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:44.201 [2024-12-03 00:56:56.551668] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd6280) with pdu=0x2000190fef90 00:22:44.201 [2024-12-03 00:56:56.551841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:44.201 [2024-12-03 00:56:56.551861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:44.201 [2024-12-03 00:56:56.555970] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd6280) with pdu=0x2000190fef90 00:22:44.201 [2024-12-03 00:56:56.556103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:44.201 [2024-12-03 00:56:56.556123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:44.201 [2024-12-03 00:56:56.560401] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd6280) with pdu=0x2000190fef90 00:22:44.201 [2024-12-03 00:56:56.560544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:44.201 [2024-12-03 00:56:56.560564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:44.201 [2024-12-03 00:56:56.564734] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd6280) with pdu=0x2000190fef90 00:22:44.201 [2024-12-03 00:56:56.564836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:44.201 [2024-12-03 00:56:56.564856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:44.201 [2024-12-03 00:56:56.569105] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd6280) with pdu=0x2000190fef90 00:22:44.201 [2024-12-03 00:56:56.569241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:44.201 [2024-12-03 00:56:56.569260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:44.201 [2024-12-03 00:56:56.573440] 
tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd6280) with pdu=0x2000190fef90 00:22:44.201 [2024-12-03 00:56:56.573586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:44.201 [2024-12-03 00:56:56.573606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:44.201 [2024-12-03 00:56:56.577862] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd6280) with pdu=0x2000190fef90 00:22:44.201 [2024-12-03 00:56:56.578003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:44.201 [2024-12-03 00:56:56.578024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:44.201 [2024-12-03 00:56:56.582412] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd6280) with pdu=0x2000190fef90 00:22:44.201 [2024-12-03 00:56:56.582613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:44.201 [2024-12-03 00:56:56.582645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:44.201 [2024-12-03 00:56:56.586812] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd6280) with pdu=0x2000190fef90 00:22:44.201 [2024-12-03 00:56:56.587083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:44.201 [2024-12-03 00:56:56.587127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:44.201 [2024-12-03 00:56:56.591117] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd6280) with pdu=0x2000190fef90 00:22:44.201 [2024-12-03 00:56:56.591248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:44.201 [2024-12-03 00:56:56.591268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:44.201 [2024-12-03 00:56:56.595597] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd6280) with pdu=0x2000190fef90 00:22:44.201 [2024-12-03 00:56:56.595748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:44.201 [2024-12-03 00:56:56.595768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:44.201 [2024-12-03 00:56:56.599923] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd6280) with pdu=0x2000190fef90 00:22:44.201 [2024-12-03 00:56:56.600055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:44.201 [2024-12-03 00:56:56.600076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 
00:22:44.201 [2024-12-03 00:56:56.604307] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd6280) with pdu=0x2000190fef90 00:22:44.201 [2024-12-03 00:56:56.604462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:44.201 [2024-12-03 00:56:56.604483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:44.201 [2024-12-03 00:56:56.608787] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd6280) with pdu=0x2000190fef90 00:22:44.201 [2024-12-03 00:56:56.608950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:44.201 [2024-12-03 00:56:56.608971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:44.201 [2024-12-03 00:56:56.613184] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd6280) with pdu=0x2000190fef90 00:22:44.201 [2024-12-03 00:56:56.613359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:44.201 [2024-12-03 00:56:56.613379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:44.201 [2024-12-03 00:56:56.617699] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd6280) with pdu=0x2000190fef90 00:22:44.201 [2024-12-03 00:56:56.617900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:44.201 [2024-12-03 00:56:56.617943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:44.201 [2024-12-03 00:56:56.622039] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd6280) with pdu=0x2000190fef90 00:22:44.201 [2024-12-03 00:56:56.622270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:44.201 [2024-12-03 00:56:56.622289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:44.201 [2024-12-03 00:56:56.626578] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd6280) with pdu=0x2000190fef90 00:22:44.201 [2024-12-03 00:56:56.626708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:44.202 [2024-12-03 00:56:56.626728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:44.202 [2024-12-03 00:56:56.631006] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd6280) with pdu=0x2000190fef90 00:22:44.202 [2024-12-03 00:56:56.631102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:44.202 [2024-12-03 00:56:56.631122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:44.202 [2024-12-03 00:56:56.635351] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd6280) with pdu=0x2000190fef90 00:22:44.202 [2024-12-03 00:56:56.635448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:44.202 [2024-12-03 00:56:56.635469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:44.202 [2024-12-03 00:56:56.639629] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd6280) with pdu=0x2000190fef90 00:22:44.202 [2024-12-03 00:56:56.639762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:44.202 [2024-12-03 00:56:56.639782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:44.202 [2024-12-03 00:56:56.643978] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd6280) with pdu=0x2000190fef90 00:22:44.202 [2024-12-03 00:56:56.644142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:44.202 [2024-12-03 00:56:56.644162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:44.202 [2024-12-03 00:56:56.648434] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd6280) with pdu=0x2000190fef90 00:22:44.202 [2024-12-03 00:56:56.648672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:44.202 [2024-12-03 00:56:56.648692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:44.202 [2024-12-03 00:56:56.652876] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd6280) with pdu=0x2000190fef90 00:22:44.202 [2024-12-03 00:56:56.653070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:44.202 [2024-12-03 00:56:56.653091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:44.202 [2024-12-03 00:56:56.657255] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd6280) with pdu=0x2000190fef90 00:22:44.202 [2024-12-03 00:56:56.657433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:44.202 [2024-12-03 00:56:56.657453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:44.202 [2024-12-03 00:56:56.661572] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd6280) with pdu=0x2000190fef90 00:22:44.202 [2024-12-03 00:56:56.661672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:44.202 [2024-12-03 00:56:56.661692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:44.202 [2024-12-03 00:56:56.666132] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd6280) with pdu=0x2000190fef90 00:22:44.202 [2024-12-03 00:56:56.666344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:44.202 [2024-12-03 00:56:56.666364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:44.202 [2024-12-03 00:56:56.670464] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd6280) with pdu=0x2000190fef90 00:22:44.202 [2024-12-03 00:56:56.670609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:44.202 [2024-12-03 00:56:56.670628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:44.202 [2024-12-03 00:56:56.674765] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd6280) with pdu=0x2000190fef90 00:22:44.202 [2024-12-03 00:56:56.674895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:44.202 [2024-12-03 00:56:56.674915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:44.202 [2024-12-03 00:56:56.679113] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd6280) with pdu=0x2000190fef90 00:22:44.202 [2024-12-03 00:56:56.679259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:44.202 [2024-12-03 00:56:56.679279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:44.202 [2024-12-03 00:56:56.683480] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd6280) with pdu=0x2000190fef90 00:22:44.202 [2024-12-03 00:56:56.683631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:44.202 [2024-12-03 00:56:56.683651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:44.202 [2024-12-03 00:56:56.687940] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd6280) with pdu=0x2000190fef90 00:22:44.202 [2024-12-03 00:56:56.688135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:44.202 [2024-12-03 00:56:56.688155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:44.202 [2024-12-03 00:56:56.692328] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd6280) with pdu=0x2000190fef90 00:22:44.202 [2024-12-03 00:56:56.692581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:44.202 [2024-12-03 00:56:56.692601] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:44.202 [2024-12-03 00:56:56.696823] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd6280) with pdu=0x2000190fef90 00:22:44.202 [2024-12-03 00:56:56.696955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:44.202 [2024-12-03 00:56:56.696975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:44.202 [2024-12-03 00:56:56.701155] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd6280) with pdu=0x2000190fef90 00:22:44.202 [2024-12-03 00:56:56.701252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:44.202 [2024-12-03 00:56:56.701273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:44.202 [2024-12-03 00:56:56.705477] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd6280) with pdu=0x2000190fef90 00:22:44.202 [2024-12-03 00:56:56.705574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:44.202 [2024-12-03 00:56:56.705595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:44.202 [2024-12-03 00:56:56.709959] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd6280) with pdu=0x2000190fef90 00:22:44.202 [2024-12-03 00:56:56.710088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:44.202 [2024-12-03 00:56:56.710109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:44.462 [2024-12-03 00:56:56.714877] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd6280) with pdu=0x2000190fef90 00:22:44.462 [2024-12-03 00:56:56.715030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:44.462 [2024-12-03 00:56:56.715050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:44.462 [2024-12-03 00:56:56.719454] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd6280) with pdu=0x2000190fef90 00:22:44.462 [2024-12-03 00:56:56.719726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:44.463 [2024-12-03 00:56:56.719757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:44.463 [2024-12-03 00:56:56.724014] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd6280) with pdu=0x2000190fef90 00:22:44.463 [2024-12-03 00:56:56.724210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:44.463 
[2024-12-03 00:56:56.724230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:44.463 [2024-12-03 00:56:56.728394] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd6280) with pdu=0x2000190fef90 00:22:44.463 [2024-12-03 00:56:56.728731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:44.463 [2024-12-03 00:56:56.728769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:44.463 [2024-12-03 00:56:56.732832] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd6280) with pdu=0x2000190fef90 00:22:44.463 [2024-12-03 00:56:56.733001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:44.463 [2024-12-03 00:56:56.733021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:44.463 [2024-12-03 00:56:56.737182] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd6280) with pdu=0x2000190fef90 00:22:44.463 [2024-12-03 00:56:56.737280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:44.463 [2024-12-03 00:56:56.737300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:44.463 [2024-12-03 00:56:56.741619] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd6280) with pdu=0x2000190fef90 00:22:44.463 [2024-12-03 00:56:56.741734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:44.463 [2024-12-03 00:56:56.741754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:44.463 [2024-12-03 00:56:56.745935] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd6280) with pdu=0x2000190fef90 00:22:44.463 [2024-12-03 00:56:56.746047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:44.463 [2024-12-03 00:56:56.746067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:44.463 [2024-12-03 00:56:56.750296] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd6280) with pdu=0x2000190fef90 00:22:44.463 [2024-12-03 00:56:56.750475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:44.463 [2024-12-03 00:56:56.750496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:44.463 [2024-12-03 00:56:56.754705] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd6280) with pdu=0x2000190fef90 00:22:44.463 [2024-12-03 00:56:56.754872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6368 len:32 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:22:44.463 [2024-12-03 00:56:56.754891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:44.463 [2024-12-03 00:56:56.759017] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd6280) with pdu=0x2000190fef90 00:22:44.463 [2024-12-03 00:56:56.759212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:44.463 [2024-12-03 00:56:56.759232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:44.463 [2024-12-03 00:56:56.763422] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd6280) with pdu=0x2000190fef90 00:22:44.463 [2024-12-03 00:56:56.763675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:44.463 [2024-12-03 00:56:56.763705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:44.463 [2024-12-03 00:56:56.767787] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd6280) with pdu=0x2000190fef90 00:22:44.463 [2024-12-03 00:56:56.767900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:44.463 [2024-12-03 00:56:56.767920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:44.463 [2024-12-03 00:56:56.772150] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd6280) with pdu=0x2000190fef90 00:22:44.463 [2024-12-03 00:56:56.772287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:44.463 [2024-12-03 00:56:56.772307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:44.463 [2024-12-03 00:56:56.776666] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd6280) with pdu=0x2000190fef90 00:22:44.463 [2024-12-03 00:56:56.776786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:44.463 [2024-12-03 00:56:56.776806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:44.463 [2024-12-03 00:56:56.781007] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd6280) with pdu=0x2000190fef90 00:22:44.463 [2024-12-03 00:56:56.781118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:44.463 [2024-12-03 00:56:56.781137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:44.463 [2024-12-03 00:56:56.785536] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd6280) with pdu=0x2000190fef90 00:22:44.463 [2024-12-03 00:56:56.785707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2912 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:44.463 [2024-12-03 00:56:56.785727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:44.463 [2024-12-03 00:56:56.789934] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd6280) with pdu=0x2000190fef90 00:22:44.463 [2024-12-03 00:56:56.790100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:44.463 [2024-12-03 00:56:56.790120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:44.463 [2024-12-03 00:56:56.794437] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd6280) with pdu=0x2000190fef90 00:22:44.463 [2024-12-03 00:56:56.794694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:44.463 [2024-12-03 00:56:56.794714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:44.463 [2024-12-03 00:56:56.798726] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd6280) with pdu=0x2000190fef90 00:22:44.463 [2024-12-03 00:56:56.798960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:44.463 [2024-12-03 00:56:56.798980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:44.463 [2024-12-03 00:56:56.803108] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd6280) with pdu=0x2000190fef90 00:22:44.463 [2024-12-03 00:56:56.803222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:44.463 [2024-12-03 00:56:56.803242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:44.463 [2024-12-03 00:56:56.807644] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd6280) with pdu=0x2000190fef90 00:22:44.463 [2024-12-03 00:56:56.807749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:44.463 [2024-12-03 00:56:56.807770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:44.463 [2024-12-03 00:56:56.811941] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd6280) with pdu=0x2000190fef90 00:22:44.464 [2024-12-03 00:56:56.812039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:44.464 [2024-12-03 00:56:56.812059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:44.464 [2024-12-03 00:56:56.816240] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd6280) with pdu=0x2000190fef90 00:22:44.464 [2024-12-03 00:56:56.816347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:0 nsid:1 lba:19872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:44.464 [2024-12-03 00:56:56.816367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:44.464 [2024-12-03 00:56:56.820611] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd6280) with pdu=0x2000190fef90 00:22:44.464 [2024-12-03 00:56:56.820757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:44.464 [2024-12-03 00:56:56.820777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:44.464 [2024-12-03 00:56:56.825065] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd6280) with pdu=0x2000190fef90 00:22:44.464 [2024-12-03 00:56:56.825234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:44.464 [2024-12-03 00:56:56.825254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:44.464 [2024-12-03 00:56:56.829512] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd6280) with pdu=0x2000190fef90 00:22:44.464 [2024-12-03 00:56:56.829724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:44.464 [2024-12-03 00:56:56.829744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:44.464 [2024-12-03 00:56:56.833924] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd6280) with pdu=0x2000190fef90 00:22:44.464 [2024-12-03 00:56:56.834143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:44.464 [2024-12-03 00:56:56.834162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:44.464 [2024-12-03 00:56:56.838365] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd6280) with pdu=0x2000190fef90 00:22:44.464 [2024-12-03 00:56:56.838513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:44.464 [2024-12-03 00:56:56.838533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:44.464 [2024-12-03 00:56:56.842871] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd6280) with pdu=0x2000190fef90 00:22:44.464 [2024-12-03 00:56:56.842968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:44.464 [2024-12-03 00:56:56.842989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:44.464 [2024-12-03 00:56:56.847230] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd6280) with pdu=0x2000190fef90 00:22:44.464 [2024-12-03 00:56:56.847333] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:44.464 [2024-12-03 00:56:56.847353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:44.464 [2024-12-03 00:56:56.851548] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd6280) with pdu=0x2000190fef90 00:22:44.464 [2024-12-03 00:56:56.851658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:44.464 [2024-12-03 00:56:56.851678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:44.464 [2024-12-03 00:56:56.855944] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd6280) with pdu=0x2000190fef90 00:22:44.464 [2024-12-03 00:56:56.856087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:44.464 [2024-12-03 00:56:56.856107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:44.464 [2024-12-03 00:56:56.860249] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd6280) with pdu=0x2000190fef90 00:22:44.464 [2024-12-03 00:56:56.860463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:44.464 [2024-12-03 00:56:56.860483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:44.464 [2024-12-03 00:56:56.864647] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd6280) with pdu=0x2000190fef90 00:22:44.464 [2024-12-03 00:56:56.864842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:44.464 [2024-12-03 00:56:56.864873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:44.464 [2024-12-03 00:56:56.868879] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd6280) with pdu=0x2000190fef90 00:22:44.464 [2024-12-03 00:56:56.869004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:44.464 [2024-12-03 00:56:56.869024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:44.464 [2024-12-03 00:56:56.873266] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd6280) with pdu=0x2000190fef90 00:22:44.464 [2024-12-03 00:56:56.873397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:44.464 [2024-12-03 00:56:56.873435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:44.464 [2024-12-03 00:56:56.877678] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd6280) with pdu=0x2000190fef90 00:22:44.464 [2024-12-03 
00:56:56.877858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:44.464 [2024-12-03 00:56:56.877878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:44.464 [2024-12-03 00:56:56.882144] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd6280) with pdu=0x2000190fef90 00:22:44.464 [2024-12-03 00:56:56.882302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:44.464 [2024-12-03 00:56:56.882322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:44.464 [2024-12-03 00:56:56.886713] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd6280) with pdu=0x2000190fef90 00:22:44.464 [2024-12-03 00:56:56.886830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:44.464 [2024-12-03 00:56:56.886850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:44.464 [2024-12-03 00:56:56.891114] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd6280) with pdu=0x2000190fef90 00:22:44.464 [2024-12-03 00:56:56.891279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:44.464 [2024-12-03 00:56:56.891299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:44.464 [2024-12-03 00:56:56.895581] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd6280) with pdu=0x2000190fef90 00:22:44.464 [2024-12-03 00:56:56.895796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:44.464 [2024-12-03 00:56:56.895827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:44.464 [2024-12-03 00:56:56.900043] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd6280) with pdu=0x2000190fef90 00:22:44.465 [2024-12-03 00:56:56.900268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:44.465 [2024-12-03 00:56:56.900287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:44.465 [2024-12-03 00:56:56.904468] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd6280) with pdu=0x2000190fef90 00:22:44.465 [2024-12-03 00:56:56.904625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:44.465 [2024-12-03 00:56:56.904644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:44.465 [2024-12-03 00:56:56.908771] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd6280) with 
pdu=0x2000190fef90 00:22:44.465 [2024-12-03 00:56:56.908867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:44.465 [2024-12-03 00:56:56.908887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:44.465 [2024-12-03 00:56:56.913134] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd6280) with pdu=0x2000190fef90 00:22:44.465 [2024-12-03 00:56:56.913313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:44.465 [2024-12-03 00:56:56.913333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:44.465 [2024-12-03 00:56:56.917602] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd6280) with pdu=0x2000190fef90 00:22:44.465 [2024-12-03 00:56:56.917709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:44.465 [2024-12-03 00:56:56.917730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:44.465 [2024-12-03 00:56:56.921968] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd6280) with pdu=0x2000190fef90 00:22:44.465 [2024-12-03 00:56:56.922116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:44.465 [2024-12-03 00:56:56.922136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:44.465 [2024-12-03 00:56:56.926562] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd6280) with pdu=0x2000190fef90 00:22:44.465 [2024-12-03 00:56:56.926766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:44.465 [2024-12-03 00:56:56.926786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:44.465 [2024-12-03 00:56:56.930900] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd6280) with pdu=0x2000190fef90 00:22:44.465 [2024-12-03 00:56:56.931095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:44.465 [2024-12-03 00:56:56.931115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:44.465 [2024-12-03 00:56:56.935324] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd6280) with pdu=0x2000190fef90 00:22:44.465 [2024-12-03 00:56:56.935540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:44.465 [2024-12-03 00:56:56.935560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:44.465 [2024-12-03 00:56:56.939699] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x1bd6280) with pdu=0x2000190fef90 00:22:44.465 [2024-12-03 00:56:56.939829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:44.465 [2024-12-03 00:56:56.939849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:44.465 [2024-12-03 00:56:56.944074] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd6280) with pdu=0x2000190fef90 00:22:44.465 [2024-12-03 00:56:56.944177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:44.465 [2024-12-03 00:56:56.944197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:44.465 [2024-12-03 00:56:56.948496] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd6280) with pdu=0x2000190fef90 00:22:44.465 [2024-12-03 00:56:56.948645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:44.465 [2024-12-03 00:56:56.948665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:44.465 [2024-12-03 00:56:56.952798] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd6280) with pdu=0x2000190fef90 00:22:44.465 [2024-12-03 00:56:56.952919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:44.465 [2024-12-03 00:56:56.952938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:44.465 [2024-12-03 00:56:56.957225] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd6280) with pdu=0x2000190fef90 00:22:44.465 [2024-12-03 00:56:56.957336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:44.465 [2024-12-03 00:56:56.957355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:44.465 [2024-12-03 00:56:56.961657] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd6280) with pdu=0x2000190fef90 00:22:44.465 [2024-12-03 00:56:56.961845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:44.465 [2024-12-03 00:56:56.961865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:44.465 [2024-12-03 00:56:56.966080] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd6280) with pdu=0x2000190fef90 00:22:44.465 [2024-12-03 00:56:56.966343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:44.465 [2024-12-03 00:56:56.966362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:44.465 [2024-12-03 00:56:56.970687] tcp.c:2036:data_crc32_calc_done: 
*ERROR*: Data digest error on tqpair=(0x1bd6280) with pdu=0x2000190fef90 00:22:44.465 [2024-12-03 00:56:56.970875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:44.465 [2024-12-03 00:56:56.970894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:44.726 [2024-12-03 00:56:56.975668] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd6280) with pdu=0x2000190fef90 00:22:44.726 [2024-12-03 00:56:56.975797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:44.726 [2024-12-03 00:56:56.975816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:44.726 [2024-12-03 00:56:56.980048] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd6280) with pdu=0x2000190fef90 00:22:44.726 [2024-12-03 00:56:56.980151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:44.726 [2024-12-03 00:56:56.980171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:44.726 [2024-12-03 00:56:56.984744] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd6280) with pdu=0x2000190fef90 00:22:44.726 [2024-12-03 00:56:56.984926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:44.726 [2024-12-03 00:56:56.984946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:44.726 [2024-12-03 00:56:56.989164] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd6280) with pdu=0x2000190fef90 00:22:44.726 [2024-12-03 00:56:56.989267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:44.726 [2024-12-03 00:56:56.989288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:44.726 [2024-12-03 00:56:56.993532] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd6280) with pdu=0x2000190fef90 00:22:44.726 [2024-12-03 00:56:56.993658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:44.726 [2024-12-03 00:56:56.993678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:44.726 [2024-12-03 00:56:56.997958] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd6280) with pdu=0x2000190fef90 00:22:44.726 [2024-12-03 00:56:56.998124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:44.726 [2024-12-03 00:56:56.998144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:44.726 [2024-12-03 00:56:57.002291] 
tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd6280) with pdu=0x2000190fef90 00:22:44.726 [2024-12-03 00:56:57.002546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:44.726 [2024-12-03 00:56:57.002578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:44.726 [2024-12-03 00:56:57.006685] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd6280) with pdu=0x2000190fef90 00:22:44.726 [2024-12-03 00:56:57.006901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:44.726 [2024-12-03 00:56:57.006932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:44.726 [2024-12-03 00:56:57.010985] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd6280) with pdu=0x2000190fef90 00:22:44.726 [2024-12-03 00:56:57.011180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:44.726 [2024-12-03 00:56:57.011200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:44.726 [2024-12-03 00:56:57.015420] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd6280) with pdu=0x2000190fef90 00:22:44.726 [2024-12-03 00:56:57.015562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:44.726 [2024-12-03 00:56:57.015582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:44.726 [2024-12-03 00:56:57.020013] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd6280) with pdu=0x2000190fef90 00:22:44.726 [2024-12-03 00:56:57.020177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:44.726 [2024-12-03 00:56:57.020197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:44.726 [2024-12-03 00:56:57.024385] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd6280) with pdu=0x2000190fef90 00:22:44.726 [2024-12-03 00:56:57.024497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:44.726 [2024-12-03 00:56:57.024518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:44.726 [2024-12-03 00:56:57.028734] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd6280) with pdu=0x2000190fef90 00:22:44.726 [2024-12-03 00:56:57.028838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:44.726 [2024-12-03 00:56:57.028857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:44.726 
[2024-12-03 00:56:57.033167] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd6280) with pdu=0x2000190fef90 00:22:44.726 [2024-12-03 00:56:57.033337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:44.726 [2024-12-03 00:56:57.033357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:44.726 [2024-12-03 00:56:57.037637] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd6280) with pdu=0x2000190fef90 00:22:44.726 [2024-12-03 00:56:57.037862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:44.726 [2024-12-03 00:56:57.037893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:44.726 [2024-12-03 00:56:57.042049] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd6280) with pdu=0x2000190fef90 00:22:44.726 [2024-12-03 00:56:57.042223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:44.726 [2024-12-03 00:56:57.042243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:44.726 [2024-12-03 00:56:57.046400] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd6280) with pdu=0x2000190fef90 00:22:44.726 [2024-12-03 00:56:57.046543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:44.726 [2024-12-03 00:56:57.046562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:44.726 [2024-12-03 00:56:57.050729] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd6280) with pdu=0x2000190fef90 00:22:44.726 [2024-12-03 00:56:57.050827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:44.726 [2024-12-03 00:56:57.050847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:44.726 [2024-12-03 00:56:57.055043] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd6280) with pdu=0x2000190fef90 00:22:44.726 [2024-12-03 00:56:57.055225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:44.726 [2024-12-03 00:56:57.055245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:44.726 [2024-12-03 00:56:57.059444] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd6280) with pdu=0x2000190fef90 00:22:44.726 [2024-12-03 00:56:57.059550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:44.727 [2024-12-03 00:56:57.059570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 
p:0 m:0 dnr:0 00:22:44.727 [2024-12-03 00:56:57.063944] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd6280) with pdu=0x2000190fef90 00:22:44.727 [2024-12-03 00:56:57.064078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:44.727 [2024-12-03 00:56:57.064097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:44.727 [2024-12-03 00:56:57.068452] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd6280) with pdu=0x2000190fef90 00:22:44.727 [2024-12-03 00:56:57.068623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:44.727 [2024-12-03 00:56:57.068643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:44.727 [2024-12-03 00:56:57.072724] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd6280) with pdu=0x2000190fef90 00:22:44.727 [2024-12-03 00:56:57.072938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:44.727 [2024-12-03 00:56:57.072969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:44.727 [2024-12-03 00:56:57.077260] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd6280) with pdu=0x2000190fef90 00:22:44.727 [2024-12-03 00:56:57.077459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:44.727 [2024-12-03 00:56:57.077479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:44.727 [2024-12-03 00:56:57.081642] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd6280) with pdu=0x2000190fef90 00:22:44.727 [2024-12-03 00:56:57.081778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:44.727 [2024-12-03 00:56:57.081798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:44.727 [2024-12-03 00:56:57.086080] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd6280) with pdu=0x2000190fef90 00:22:44.727 [2024-12-03 00:56:57.086207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:44.727 [2024-12-03 00:56:57.086238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:44.727 [2024-12-03 00:56:57.090495] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd6280) with pdu=0x2000190fef90 00:22:44.727 [2024-12-03 00:56:57.090764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:44.727 [2024-12-03 00:56:57.090784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:44.727 [2024-12-03 00:56:57.094889] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd6280) with pdu=0x2000190fef90 00:22:44.727 [2024-12-03 00:56:57.095018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:44.727 [2024-12-03 00:56:57.095038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:44.727 [2024-12-03 00:56:57.099249] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd6280) with pdu=0x2000190fef90 00:22:44.727 [2024-12-03 00:56:57.099361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:44.727 [2024-12-03 00:56:57.099382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:44.727 [2024-12-03 00:56:57.103560] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd6280) with pdu=0x2000190fef90 00:22:44.727 [2024-12-03 00:56:57.103728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:44.727 [2024-12-03 00:56:57.103747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:44.727 [2024-12-03 00:56:57.107931] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd6280) with pdu=0x2000190fef90 00:22:44.727 [2024-12-03 00:56:57.108150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:44.727 [2024-12-03 00:56:57.108170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:44.727 [2024-12-03 00:56:57.112381] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd6280) with pdu=0x2000190fef90 00:22:44.727 [2024-12-03 00:56:57.112600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:44.727 [2024-12-03 00:56:57.112621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:44.727 [2024-12-03 00:56:57.116784] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd6280) with pdu=0x2000190fef90 00:22:44.727 [2024-12-03 00:56:57.116905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:44.727 [2024-12-03 00:56:57.116925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:44.727 [2024-12-03 00:56:57.121132] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd6280) with pdu=0x2000190fef90 00:22:44.727 [2024-12-03 00:56:57.121229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:44.727 [2024-12-03 00:56:57.121249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:44.727 [2024-12-03 00:56:57.125595] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd6280) with pdu=0x2000190fef90 00:22:44.727 [2024-12-03 00:56:57.125764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:44.727 [2024-12-03 00:56:57.125785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:44.727 [2024-12-03 00:56:57.129963] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd6280) with pdu=0x2000190fef90 00:22:44.727 [2024-12-03 00:56:57.130092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:44.727 [2024-12-03 00:56:57.130111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:44.727 [2024-12-03 00:56:57.134561] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd6280) with pdu=0x2000190fef90 00:22:44.727 [2024-12-03 00:56:57.134705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:44.727 [2024-12-03 00:56:57.134724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:44.727 [2024-12-03 00:56:57.139056] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd6280) with pdu=0x2000190fef90 00:22:44.727 [2024-12-03 00:56:57.139226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:44.727 [2024-12-03 00:56:57.139246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:44.727 [2024-12-03 00:56:57.143444] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd6280) with pdu=0x2000190fef90 00:22:44.727 [2024-12-03 00:56:57.143683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:44.727 [2024-12-03 00:56:57.143720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:44.727 [2024-12-03 00:56:57.147864] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd6280) with pdu=0x2000190fef90 00:22:44.727 [2024-12-03 00:56:57.148046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:44.727 [2024-12-03 00:56:57.148066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:44.727 [2024-12-03 00:56:57.152166] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd6280) with pdu=0x2000190fef90 00:22:44.727 [2024-12-03 00:56:57.152299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:44.727 [2024-12-03 00:56:57.152319] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:44.727 [2024-12-03 00:56:57.156526] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd6280) with pdu=0x2000190fef90 00:22:44.727 [2024-12-03 00:56:57.156641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:44.728 [2024-12-03 00:56:57.156661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:44.728 [2024-12-03 00:56:57.160904] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd6280) with pdu=0x2000190fef90 00:22:44.728 [2024-12-03 00:56:57.161049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:44.728 [2024-12-03 00:56:57.161069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:44.728 [2024-12-03 00:56:57.165359] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd6280) with pdu=0x2000190fef90 00:22:44.728 [2024-12-03 00:56:57.165501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:44.728 [2024-12-03 00:56:57.165521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:44.728 [2024-12-03 00:56:57.169698] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd6280) with pdu=0x2000190fef90 00:22:44.728 [2024-12-03 00:56:57.169802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:44.728 [2024-12-03 00:56:57.169833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:44.728 [2024-12-03 00:56:57.174474] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd6280) with pdu=0x2000190fef90 00:22:44.728 [2024-12-03 00:56:57.174669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:44.728 [2024-12-03 00:56:57.174690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:44.728 [2024-12-03 00:56:57.179236] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd6280) with pdu=0x2000190fef90 00:22:44.728 [2024-12-03 00:56:57.179443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:44.728 [2024-12-03 00:56:57.179475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:44.728 [2024-12-03 00:56:57.184126] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd6280) with pdu=0x2000190fef90 00:22:44.728 [2024-12-03 00:56:57.184301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:44.728 [2024-12-03 
00:56:57.184322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:44.728 [2024-12-03 00:56:57.188970] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd6280) with pdu=0x2000190fef90 00:22:44.728 [2024-12-03 00:56:57.189108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:44.728 [2024-12-03 00:56:57.189128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:44.728 [2024-12-03 00:56:57.193618] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd6280) with pdu=0x2000190fef90 00:22:44.728 [2024-12-03 00:56:57.193739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:44.728 [2024-12-03 00:56:57.193772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:44.728 [2024-12-03 00:56:57.198400] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd6280) with pdu=0x2000190fef90 00:22:44.728 [2024-12-03 00:56:57.198613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:44.728 [2024-12-03 00:56:57.198633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:44.728 [2024-12-03 00:56:57.203180] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd6280) with pdu=0x2000190fef90 00:22:44.728 [2024-12-03 00:56:57.203295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:44.728 [2024-12-03 00:56:57.203315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:44.728 [2024-12-03 00:56:57.207769] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd6280) with pdu=0x2000190fef90 00:22:44.728 [2024-12-03 00:56:57.207944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:44.728 [2024-12-03 00:56:57.207965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:44.728 [2024-12-03 00:56:57.212372] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd6280) with pdu=0x2000190fef90 00:22:44.728 [2024-12-03 00:56:57.212602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:44.728 [2024-12-03 00:56:57.212623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:44.728 [2024-12-03 00:56:57.216926] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd6280) with pdu=0x2000190fef90 00:22:44.728 [2024-12-03 00:56:57.217156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:22:44.728 [2024-12-03 00:56:57.217175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:44.728 [2024-12-03 00:56:57.221518] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd6280) with pdu=0x2000190fef90 00:22:44.728 [2024-12-03 00:56:57.221750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:44.728 [2024-12-03 00:56:57.221770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:44.728 [2024-12-03 00:56:57.225799] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd6280) with pdu=0x2000190fef90 00:22:44.728 [2024-12-03 00:56:57.225935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:44.728 [2024-12-03 00:56:57.225955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:44.728 [2024-12-03 00:56:57.230234] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd6280) with pdu=0x2000190fef90 00:22:44.728 [2024-12-03 00:56:57.230340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:44.728 [2024-12-03 00:56:57.230360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:44.728 [2024-12-03 00:56:57.234789] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd6280) with pdu=0x2000190fef90 00:22:44.728 [2024-12-03 00:56:57.234982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:44.728 [2024-12-03 00:56:57.235002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:44.989 [2024-12-03 00:56:57.239769] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd6280) with pdu=0x2000190fef90 00:22:44.989 [2024-12-03 00:56:57.239884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:44.989 [2024-12-03 00:56:57.239904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:44.989 [2024-12-03 00:56:57.244323] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd6280) with pdu=0x2000190fef90 00:22:44.989 [2024-12-03 00:56:57.244485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:44.989 [2024-12-03 00:56:57.244505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:44.989 [2024-12-03 00:56:57.249070] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd6280) with pdu=0x2000190fef90 00:22:44.989 [2024-12-03 00:56:57.249253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3520 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:44.989 [2024-12-03 00:56:57.249273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:44.989 [2024-12-03 00:56:57.253505] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd6280) with pdu=0x2000190fef90 00:22:44.989 [2024-12-03 00:56:57.253711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:44.989 [2024-12-03 00:56:57.253730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:44.989 [2024-12-03 00:56:57.258033] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd6280) with pdu=0x2000190fef90 00:22:44.989 [2024-12-03 00:56:57.258229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:44.989 [2024-12-03 00:56:57.258249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:44.989 [2024-12-03 00:56:57.262490] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd6280) with pdu=0x2000190fef90 00:22:44.989 [2024-12-03 00:56:57.262677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:44.989 [2024-12-03 00:56:57.262697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:44.989 [2024-12-03 00:56:57.266978] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd6280) with pdu=0x2000190fef90 00:22:44.989 [2024-12-03 00:56:57.267129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:44.990 [2024-12-03 00:56:57.267149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:44.990 [2024-12-03 00:56:57.271528] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd6280) with pdu=0x2000190fef90 00:22:44.990 [2024-12-03 00:56:57.271696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:44.990 [2024-12-03 00:56:57.271716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:44.990 [2024-12-03 00:56:57.276053] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd6280) with pdu=0x2000190fef90 00:22:44.990 [2024-12-03 00:56:57.276175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:44.990 [2024-12-03 00:56:57.276194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:44.990 [2024-12-03 00:56:57.280489] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd6280) with pdu=0x2000190fef90 00:22:44.990 [2024-12-03 00:56:57.280593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 
nsid:1 lba:3040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:44.990 [2024-12-03 00:56:57.280612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:44.990 [2024-12-03 00:56:57.285097] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd6280) with pdu=0x2000190fef90 00:22:44.990 [2024-12-03 00:56:57.285281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:44.990 [2024-12-03 00:56:57.285301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:44.990 [2024-12-03 00:56:57.289555] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd6280) with pdu=0x2000190fef90 00:22:44.990 [2024-12-03 00:56:57.289769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:44.990 [2024-12-03 00:56:57.289788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:44.990 [2024-12-03 00:56:57.293968] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd6280) with pdu=0x2000190fef90 00:22:44.990 [2024-12-03 00:56:57.294199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:44.990 [2024-12-03 00:56:57.294230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:44.990 [2024-12-03 00:56:57.298416] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd6280) with pdu=0x2000190fef90 00:22:44.990 [2024-12-03 00:56:57.298608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:44.990 [2024-12-03 00:56:57.298638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:44.990 [2024-12-03 00:56:57.302849] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd6280) with pdu=0x2000190fef90 00:22:44.990 [2024-12-03 00:56:57.302998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:44.990 [2024-12-03 00:56:57.303017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:44.990 [2024-12-03 00:56:57.307400] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd6280) with pdu=0x2000190fef90 00:22:44.990 [2024-12-03 00:56:57.307651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:44.990 [2024-12-03 00:56:57.307675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:44.990 [2024-12-03 00:56:57.311753] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd6280) with pdu=0x2000190fef90 00:22:44.990 [2024-12-03 00:56:57.311914] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:44.990 [2024-12-03 00:56:57.311934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:44.990 [2024-12-03 00:56:57.316151] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd6280) with pdu=0x2000190fef90 00:22:44.990 [2024-12-03 00:56:57.316276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:44.990 [2024-12-03 00:56:57.316296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:44.990 [2024-12-03 00:56:57.320720] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd6280) with pdu=0x2000190fef90 00:22:44.990 [2024-12-03 00:56:57.320904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:44.990 [2024-12-03 00:56:57.320924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:44.990 [2024-12-03 00:56:57.325074] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd6280) with pdu=0x2000190fef90 00:22:44.990 [2024-12-03 00:56:57.325298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:44.990 [2024-12-03 00:56:57.325317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:44.990 [2024-12-03 00:56:57.329665] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd6280) with pdu=0x2000190fef90 00:22:44.990 [2024-12-03 00:56:57.329886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:44.990 [2024-12-03 00:56:57.329906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:44.990 [2024-12-03 00:56:57.334039] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd6280) with pdu=0x2000190fef90 00:22:44.990 [2024-12-03 00:56:57.334183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:44.990 [2024-12-03 00:56:57.334213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:44.990 [2024-12-03 00:56:57.338537] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd6280) with pdu=0x2000190fef90 00:22:44.990 [2024-12-03 00:56:57.338700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:44.990 [2024-12-03 00:56:57.338720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:44.990 [2024-12-03 00:56:57.342879] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd6280) with pdu=0x2000190fef90 00:22:44.990 [2024-12-03 00:56:57.343040] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:44.990 [2024-12-03 00:56:57.343060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:44.990 [2024-12-03 00:56:57.347415] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd6280) with pdu=0x2000190fef90 00:22:44.990 [2024-12-03 00:56:57.347557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:44.990 [2024-12-03 00:56:57.347577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:44.990 [2024-12-03 00:56:57.351840] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd6280) with pdu=0x2000190fef90 00:22:44.990 [2024-12-03 00:56:57.351988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:44.990 [2024-12-03 00:56:57.352008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:44.990 [2024-12-03 00:56:57.356268] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd6280) with pdu=0x2000190fef90 00:22:44.990 [2024-12-03 00:56:57.356467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:44.990 [2024-12-03 00:56:57.356487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:44.990 [2024-12-03 00:56:57.360708] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd6280) with pdu=0x2000190fef90 00:22:44.990 [2024-12-03 00:56:57.360924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:44.991 [2024-12-03 00:56:57.360955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:44.991 [2024-12-03 00:56:57.365121] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd6280) with pdu=0x2000190fef90 00:22:44.991 [2024-12-03 00:56:57.365326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:44.991 [2024-12-03 00:56:57.365346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:44.991 [2024-12-03 00:56:57.369753] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd6280) with pdu=0x2000190fef90 00:22:44.991 [2024-12-03 00:56:57.369896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:44.991 [2024-12-03 00:56:57.369915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:44.991 [2024-12-03 00:56:57.374089] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd6280) with pdu=0x2000190fef90 00:22:44.991 [2024-12-03 
00:56:57.374277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:44.991 [2024-12-03 00:56:57.374297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:44.991 [2024-12-03 00:56:57.378724] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd6280) with pdu=0x2000190fef90 00:22:44.991 [2024-12-03 00:56:57.378922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:44.991 [2024-12-03 00:56:57.378942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:44.991 [2024-12-03 00:56:57.383094] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd6280) with pdu=0x2000190fef90 00:22:44.991 [2024-12-03 00:56:57.383225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:44.991 [2024-12-03 00:56:57.383245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:44.991 [2024-12-03 00:56:57.387551] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd6280) with pdu=0x2000190fef90 00:22:44.991 [2024-12-03 00:56:57.387699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:44.991 [2024-12-03 00:56:57.387720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:44.991 [2024-12-03 00:56:57.392147] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd6280) with pdu=0x2000190fef90 00:22:44.991 [2024-12-03 00:56:57.392330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:44.991 [2024-12-03 00:56:57.392350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:44.991 [2024-12-03 00:56:57.396660] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd6280) with pdu=0x2000190fef90 00:22:44.991 [2024-12-03 00:56:57.396913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:44.991 [2024-12-03 00:56:57.396972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:44.991 [2024-12-03 00:56:57.401182] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd6280) with pdu=0x2000190fef90 00:22:44.991 [2024-12-03 00:56:57.401403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:44.991 [2024-12-03 00:56:57.401459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:44.991 [2024-12-03 00:56:57.405795] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd6280) with 
pdu=0x2000190fef90 00:22:44.991 [2024-12-03 00:56:57.405956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:44.991 [2024-12-03 00:56:57.405975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:44.991 [2024-12-03 00:56:57.410694] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd6280) with pdu=0x2000190fef90 00:22:44.991 [2024-12-03 00:56:57.410829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:44.991 [2024-12-03 00:56:57.410849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:44.991 [2024-12-03 00:56:57.415387] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd6280) with pdu=0x2000190fef90 00:22:44.991 [2024-12-03 00:56:57.415570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:44.991 [2024-12-03 00:56:57.415591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:44.991 [2024-12-03 00:56:57.420210] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd6280) with pdu=0x2000190fef90 00:22:44.991 [2024-12-03 00:56:57.420360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:44.991 [2024-12-03 00:56:57.420380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:44.991 [2024-12-03 00:56:57.424947] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd6280) with pdu=0x2000190fef90 00:22:44.991 [2024-12-03 00:56:57.425091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:44.991 [2024-12-03 00:56:57.425111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:44.991 [2024-12-03 00:56:57.429760] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd6280) with pdu=0x2000190fef90 00:22:44.991 [2024-12-03 00:56:57.429949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:44.991 [2024-12-03 00:56:57.429969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:44.991 [2024-12-03 00:56:57.434347] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd6280) with pdu=0x2000190fef90 00:22:44.991 [2024-12-03 00:56:57.434578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:44.991 [2024-12-03 00:56:57.434598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:44.991 [2024-12-03 00:56:57.439093] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error 
on tqpair=(0x1bd6280) with pdu=0x2000190fef90 00:22:44.991 [2024-12-03 00:56:57.439283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:44.991 [2024-12-03 00:56:57.439302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:44.991 [2024-12-03 00:56:57.443830] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd6280) with pdu=0x2000190fef90 00:22:44.991 [2024-12-03 00:56:57.443964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:44.991 [2024-12-03 00:56:57.443984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:44.991 [2024-12-03 00:56:57.448123] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd6280) with pdu=0x2000190fef90 00:22:44.991 [2024-12-03 00:56:57.448250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:44.991 [2024-12-03 00:56:57.448270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:44.991 [2024-12-03 00:56:57.452622] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd6280) with pdu=0x2000190fef90 00:22:44.991 [2024-12-03 00:56:57.452770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:44.991 [2024-12-03 00:56:57.452790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:44.991 [2024-12-03 00:56:57.457040] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd6280) with pdu=0x2000190fef90 00:22:44.991 [2024-12-03 00:56:57.457175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:44.991 [2024-12-03 00:56:57.457195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:44.991 [2024-12-03 00:56:57.461456] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd6280) with pdu=0x2000190fef90 00:22:44.992 [2024-12-03 00:56:57.461586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:44.992 [2024-12-03 00:56:57.461606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:44.992 [2024-12-03 00:56:57.465868] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd6280) with pdu=0x2000190fef90 00:22:44.992 [2024-12-03 00:56:57.466039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:44.992 [2024-12-03 00:56:57.466059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:44.992 [2024-12-03 00:56:57.470207] tcp.c:2036:data_crc32_calc_done: 
*ERROR*: Data digest error on tqpair=(0x1bd6280) with pdu=0x2000190fef90 00:22:44.992 [2024-12-03 00:56:57.470435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:44.992 [2024-12-03 00:56:57.470467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:44.992 [2024-12-03 00:56:57.474670] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd6280) with pdu=0x2000190fef90 00:22:44.992 [2024-12-03 00:56:57.474875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:44.992 [2024-12-03 00:56:57.474906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:44.992 [2024-12-03 00:56:57.479036] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd6280) with pdu=0x2000190fef90 00:22:44.992 [2024-12-03 00:56:57.479170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:44.992 [2024-12-03 00:56:57.479190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:44.992 [2024-12-03 00:56:57.483292] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd6280) with pdu=0x2000190fef90 00:22:44.992 [2024-12-03 00:56:57.483392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:44.992 [2024-12-03 00:56:57.483426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:44.992 [2024-12-03 00:56:57.487455] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd6280) with pdu=0x2000190fef90 00:22:44.992 [2024-12-03 00:56:57.487602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:44.992 [2024-12-03 00:56:57.487622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:44.992 [2024-12-03 00:56:57.491770] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd6280) with pdu=0x2000190fef90 00:22:44.992 [2024-12-03 00:56:57.491952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:44.992 [2024-12-03 00:56:57.491972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:44.992 [2024-12-03 00:56:57.496083] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd6280) with pdu=0x2000190fef90 00:22:44.992 [2024-12-03 00:56:57.496211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:44.992 [2024-12-03 00:56:57.496232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:44.992 [2024-12-03 00:56:57.500910] 
tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd6280) with pdu=0x2000190fef90 00:22:44.992 [2024-12-03 00:56:57.501081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:44.992 [2024-12-03 00:56:57.501101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:45.253 [2024-12-03 00:56:57.505335] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd6280) with pdu=0x2000190fef90 00:22:45.253 [2024-12-03 00:56:57.505555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.253 [2024-12-03 00:56:57.505575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:45.253 [2024-12-03 00:56:57.509983] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd6280) with pdu=0x2000190fef90 00:22:45.253 [2024-12-03 00:56:57.510209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.253 [2024-12-03 00:56:57.510229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:45.253 [2024-12-03 00:56:57.514374] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd6280) with pdu=0x2000190fef90 00:22:45.253 [2024-12-03 00:56:57.514494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.253 [2024-12-03 00:56:57.514524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:45.253 [2024-12-03 00:56:57.518802] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd6280) with pdu=0x2000190fef90 00:22:45.253 [2024-12-03 00:56:57.518933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.253 [2024-12-03 00:56:57.518954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:45.253 [2024-12-03 00:56:57.523252] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd6280) with pdu=0x2000190fef90 00:22:45.253 [2024-12-03 00:56:57.523403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.253 [2024-12-03 00:56:57.523437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:45.253 [2024-12-03 00:56:57.527660] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd6280) with pdu=0x2000190fef90 00:22:45.253 [2024-12-03 00:56:57.527763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.253 [2024-12-03 00:56:57.527783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:45.253 
[2024-12-03 00:56:57.532078] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd6280) with pdu=0x2000190fef90 00:22:45.253 [2024-12-03 00:56:57.532191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.253 [2024-12-03 00:56:57.532211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:45.253 [2024-12-03 00:56:57.536567] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd6280) with pdu=0x2000190fef90 00:22:45.253 [2024-12-03 00:56:57.536755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.253 [2024-12-03 00:56:57.536775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:45.253 [2024-12-03 00:56:57.540942] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd6280) with pdu=0x2000190fef90 00:22:45.253 [2024-12-03 00:56:57.541162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.254 [2024-12-03 00:56:57.541182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:45.254 [2024-12-03 00:56:57.545314] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd6280) with pdu=0x2000190fef90 00:22:45.254 [2024-12-03 00:56:57.545441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.254 [2024-12-03 00:56:57.545461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:45.254 [2024-12-03 00:56:57.549890] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd6280) with pdu=0x2000190fef90 00:22:45.254 [2024-12-03 00:56:57.550016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.254 [2024-12-03 00:56:57.550036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:45.254 [2024-12-03 00:56:57.554144] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd6280) with pdu=0x2000190fef90 00:22:45.254 [2024-12-03 00:56:57.554273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.254 [2024-12-03 00:56:57.554293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:45.254 [2024-12-03 00:56:57.558587] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd6280) with pdu=0x2000190fef90 00:22:45.254 [2024-12-03 00:56:57.558757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.254 [2024-12-03 00:56:57.558777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 
sqhd:0061 p:0 m:0 dnr:0 00:22:45.254 [2024-12-03 00:56:57.562932] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd6280) with pdu=0x2000190fef90 00:22:45.254 [2024-12-03 00:56:57.563032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.254 [2024-12-03 00:56:57.563052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:45.254 [2024-12-03 00:56:57.567223] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd6280) with pdu=0x2000190fef90 00:22:45.254 [2024-12-03 00:56:57.567330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.254 [2024-12-03 00:56:57.567349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:45.254 [2024-12-03 00:56:57.571606] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd6280) with pdu=0x2000190fef90 00:22:45.254 [2024-12-03 00:56:57.571795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.254 [2024-12-03 00:56:57.571815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:45.254 [2024-12-03 00:56:57.575887] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd6280) with pdu=0x2000190fef90 00:22:45.254 [2024-12-03 00:56:57.576101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.254 [2024-12-03 00:56:57.576120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:45.254 [2024-12-03 00:56:57.580259] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd6280) with pdu=0x2000190fef90 00:22:45.254 [2024-12-03 00:56:57.580457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.254 [2024-12-03 00:56:57.580477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:45.254 [2024-12-03 00:56:57.584648] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd6280) with pdu=0x2000190fef90 00:22:45.254 [2024-12-03 00:56:57.584784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.254 [2024-12-03 00:56:57.584804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:45.254 [2024-12-03 00:56:57.588974] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd6280) with pdu=0x2000190fef90 00:22:45.254 [2024-12-03 00:56:57.589096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.254 [2024-12-03 00:56:57.589116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:45.254 [2024-12-03 00:56:57.593383] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd6280) with pdu=0x2000190fef90 00:22:45.254 [2024-12-03 00:56:57.593586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.254 [2024-12-03 00:56:57.593607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:45.254 [2024-12-03 00:56:57.597700] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd6280) with pdu=0x2000190fef90 00:22:45.254 [2024-12-03 00:56:57.597829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.254 [2024-12-03 00:56:57.597848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:45.254 [2024-12-03 00:56:57.602071] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd6280) with pdu=0x2000190fef90 00:22:45.254 [2024-12-03 00:56:57.602198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.254 [2024-12-03 00:56:57.602230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:45.254 [2024-12-03 00:56:57.606499] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd6280) with pdu=0x2000190fef90 00:22:45.254 [2024-12-03 00:56:57.606734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.254 [2024-12-03 00:56:57.606754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:45.254 [2024-12-03 00:56:57.610823] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd6280) with pdu=0x2000190fef90 00:22:45.254 [2024-12-03 00:56:57.611000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.254 [2024-12-03 00:56:57.611020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:45.254 [2024-12-03 00:56:57.615275] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd6280) with pdu=0x2000190fef90 00:22:45.254 [2024-12-03 00:56:57.615521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.254 [2024-12-03 00:56:57.615544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:45.254 [2024-12-03 00:56:57.619577] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd6280) with pdu=0x2000190fef90 00:22:45.254 [2024-12-03 00:56:57.619764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.254 [2024-12-03 00:56:57.619794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:45.254 [2024-12-03 00:56:57.623871] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd6280) with pdu=0x2000190fef90 00:22:45.254 [2024-12-03 00:56:57.623966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.254 [2024-12-03 00:56:57.623986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:45.254 [2024-12-03 00:56:57.628193] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd6280) with pdu=0x2000190fef90 00:22:45.254 [2024-12-03 00:56:57.628343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.254 [2024-12-03 00:56:57.628362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:45.254 [2024-12-03 00:56:57.632503] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd6280) with pdu=0x2000190fef90 00:22:45.254 [2024-12-03 00:56:57.632651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.254 [2024-12-03 00:56:57.632671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:45.254 [2024-12-03 00:56:57.636859] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd6280) with pdu=0x2000190fef90 00:22:45.254 [2024-12-03 00:56:57.636990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.254 [2024-12-03 00:56:57.637009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:45.254 [2024-12-03 00:56:57.641247] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd6280) with pdu=0x2000190fef90 00:22:45.254 [2024-12-03 00:56:57.641429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.254 [2024-12-03 00:56:57.641449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:45.254 [2024-12-03 00:56:57.645763] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd6280) with pdu=0x2000190fef90 00:22:45.254 [2024-12-03 00:56:57.646041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.254 [2024-12-03 00:56:57.646070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:45.254 [2024-12-03 00:56:57.650289] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd6280) with pdu=0x2000190fef90 00:22:45.254 [2024-12-03 00:56:57.650540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.254 [2024-12-03 00:56:57.650561] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:45.254 [2024-12-03 00:56:57.654823] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd6280) with pdu=0x2000190fef90 00:22:45.254 [2024-12-03 00:56:57.654975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.254 [2024-12-03 00:56:57.654995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:45.254 [2024-12-03 00:56:57.659184] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd6280) with pdu=0x2000190fef90 00:22:45.254 [2024-12-03 00:56:57.659332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.255 [2024-12-03 00:56:57.659352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:45.255 [2024-12-03 00:56:57.663661] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd6280) with pdu=0x2000190fef90 00:22:45.255 [2024-12-03 00:56:57.663808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.255 [2024-12-03 00:56:57.663827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:45.255 [2024-12-03 00:56:57.667940] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd6280) with pdu=0x2000190fef90 00:22:45.255 [2024-12-03 00:56:57.668070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.255 [2024-12-03 00:56:57.668090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:45.255 [2024-12-03 00:56:57.672361] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd6280) with pdu=0x2000190fef90 00:22:45.255 [2024-12-03 00:56:57.672470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.255 [2024-12-03 00:56:57.672492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:45.255 [2024-12-03 00:56:57.676776] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd6280) with pdu=0x2000190fef90 00:22:45.255 [2024-12-03 00:56:57.676943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.255 [2024-12-03 00:56:57.676963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:45.255 [2024-12-03 00:56:57.681046] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd6280) with pdu=0x2000190fef90 00:22:45.255 [2024-12-03 00:56:57.681279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.255 [2024-12-03 
00:56:57.681310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:45.255 [2024-12-03 00:56:57.685606] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd6280) with pdu=0x2000190fef90 00:22:45.255 [2024-12-03 00:56:57.685791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.255 [2024-12-03 00:56:57.685810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:45.255 [2024-12-03 00:56:57.689972] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd6280) with pdu=0x2000190fef90 00:22:45.255 [2024-12-03 00:56:57.690109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.255 [2024-12-03 00:56:57.690129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:45.255 [2024-12-03 00:56:57.694395] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd6280) with pdu=0x2000190fef90 00:22:45.255 [2024-12-03 00:56:57.694528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.255 [2024-12-03 00:56:57.694547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:45.255 [2024-12-03 00:56:57.698817] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd6280) with pdu=0x2000190fef90 00:22:45.255 [2024-12-03 00:56:57.698977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.255 [2024-12-03 00:56:57.698997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:45.255 [2024-12-03 00:56:57.703205] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd6280) with pdu=0x2000190fef90 00:22:45.255 [2024-12-03 00:56:57.703324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.255 [2024-12-03 00:56:57.703344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:45.255 [2024-12-03 00:56:57.707648] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd6280) with pdu=0x2000190fef90 00:22:45.255 [2024-12-03 00:56:57.707780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.255 [2024-12-03 00:56:57.707800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:45.255 [2024-12-03 00:56:57.712118] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd6280) with pdu=0x2000190fef90 00:22:45.255 [2024-12-03 00:56:57.712301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:22:45.255 [2024-12-03 00:56:57.712321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:45.255 [2024-12-03 00:56:57.716528] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd6280) with pdu=0x2000190fef90 00:22:45.255 [2024-12-03 00:56:57.716723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.255 [2024-12-03 00:56:57.716753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:45.255 [2024-12-03 00:56:57.720869] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd6280) with pdu=0x2000190fef90 00:22:45.255 [2024-12-03 00:56:57.721061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.255 [2024-12-03 00:56:57.721081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:45.255 [2024-12-03 00:56:57.725214] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd6280) with pdu=0x2000190fef90 00:22:45.255 [2024-12-03 00:56:57.725337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.255 [2024-12-03 00:56:57.725358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:45.255 [2024-12-03 00:56:57.729593] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd6280) with pdu=0x2000190fef90 00:22:45.255 [2024-12-03 00:56:57.729712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.255 [2024-12-03 00:56:57.729731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:45.255 [2024-12-03 00:56:57.734018] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd6280) with pdu=0x2000190fef90 00:22:45.255 [2024-12-03 00:56:57.734208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:32 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.255 [2024-12-03 00:56:57.734238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:45.255 [2024-12-03 00:56:57.738564] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd6280) with pdu=0x2000190fef90 00:22:45.255 [2024-12-03 00:56:57.738680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.255 [2024-12-03 00:56:57.738700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:45.255 [2024-12-03 00:56:57.742999] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd6280) with pdu=0x2000190fef90 00:22:45.255 [2024-12-03 00:56:57.743113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6080 len:32 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:22:45.255 [2024-12-03 00:56:57.743133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:45.255 [2024-12-03 00:56:57.747586] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd6280) with pdu=0x2000190fef90 00:22:45.255 [2024-12-03 00:56:57.747772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.255 [2024-12-03 00:56:57.747792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:45.255 [2024-12-03 00:56:57.752048] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd6280) with pdu=0x2000190fef90 00:22:45.255 [2024-12-03 00:56:57.752279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.255 [2024-12-03 00:56:57.752298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:45.255 [2024-12-03 00:56:57.756633] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd6280) with pdu=0x2000190fef90 00:22:45.255 [2024-12-03 00:56:57.756787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.255 [2024-12-03 00:56:57.756806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:45.255 [2024-12-03 00:56:57.760983] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd6280) with pdu=0x2000190fef90 00:22:45.255 [2024-12-03 00:56:57.761133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.255 [2024-12-03 00:56:57.761153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:45.516 [2024-12-03 00:56:57.765937] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd6280) with pdu=0x2000190fef90 00:22:45.516 [2024-12-03 00:56:57.766106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.516 [2024-12-03 00:56:57.766126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:45.516 [2024-12-03 00:56:57.770473] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd6280) with pdu=0x2000190fef90 00:22:45.516 [2024-12-03 00:56:57.770686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.516 [2024-12-03 00:56:57.770706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:45.516 [2024-12-03 00:56:57.775235] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd6280) with pdu=0x2000190fef90 00:22:45.516 [2024-12-03 00:56:57.775333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 
lba:7840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.516 [2024-12-03 00:56:57.775354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:45.516 [2024-12-03 00:56:57.779521] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd6280) with pdu=0x2000190fef90 00:22:45.516 [2024-12-03 00:56:57.779618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.516 [2024-12-03 00:56:57.779638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:45.516 [2024-12-03 00:56:57.783928] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd6280) with pdu=0x2000190fef90 00:22:45.516 [2024-12-03 00:56:57.784097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.516 [2024-12-03 00:56:57.784117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:45.516 [2024-12-03 00:56:57.788239] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd6280) with pdu=0x2000190fef90 00:22:45.516 [2024-12-03 00:56:57.788445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.516 [2024-12-03 00:56:57.788465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:45.516 [2024-12-03 00:56:57.792744] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd6280) with pdu=0x2000190fef90 00:22:45.516 [2024-12-03 00:56:57.792938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.516 [2024-12-03 00:56:57.792969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:45.516 [2024-12-03 00:56:57.797080] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd6280) with pdu=0x2000190fef90 00:22:45.516 [2024-12-03 00:56:57.797264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.516 [2024-12-03 00:56:57.797283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:45.516 [2024-12-03 00:56:57.801353] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd6280) with pdu=0x2000190fef90 00:22:45.516 [2024-12-03 00:56:57.801516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.516 [2024-12-03 00:56:57.801535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:45.516 [2024-12-03 00:56:57.805786] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd6280) with pdu=0x2000190fef90 00:22:45.516 [2024-12-03 00:56:57.805953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:0 nsid:1 lba:3968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.516 [2024-12-03 00:56:57.805973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:45.516 [2024-12-03 00:56:57.810154] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd6280) with pdu=0x2000190fef90 00:22:45.516 [2024-12-03 00:56:57.810289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.516 [2024-12-03 00:56:57.810311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:45.516 [2024-12-03 00:56:57.814686] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd6280) with pdu=0x2000190fef90 00:22:45.516 [2024-12-03 00:56:57.814832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.516 [2024-12-03 00:56:57.814852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:45.516 [2024-12-03 00:56:57.819187] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd6280) with pdu=0x2000190fef90 00:22:45.516 [2024-12-03 00:56:57.819383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.516 [2024-12-03 00:56:57.819404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:45.516 [2024-12-03 00:56:57.823687] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd6280) with pdu=0x2000190fef90 00:22:45.517 [2024-12-03 00:56:57.823951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.517 [2024-12-03 00:56:57.823971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:45.517 [2024-12-03 00:56:57.828136] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd6280) with pdu=0x2000190fef90 00:22:45.517 [2024-12-03 00:56:57.828353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.517 [2024-12-03 00:56:57.828373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:45.517 [2024-12-03 00:56:57.832576] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd6280) with pdu=0x2000190fef90 00:22:45.517 [2024-12-03 00:56:57.832713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.517 [2024-12-03 00:56:57.832734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:45.517 [2024-12-03 00:56:57.836916] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd6280) with pdu=0x2000190fef90 00:22:45.517 [2024-12-03 00:56:57.837101] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.517 [2024-12-03 00:56:57.837121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:45.517 [2024-12-03 00:56:57.841366] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd6280) with pdu=0x2000190fef90 00:22:45.517 [2024-12-03 00:56:57.841554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.517 [2024-12-03 00:56:57.841575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:45.517 [2024-12-03 00:56:57.845912] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd6280) with pdu=0x2000190fef90 00:22:45.517 [2024-12-03 00:56:57.846026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.517 [2024-12-03 00:56:57.846047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:45.517 [2024-12-03 00:56:57.850128] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd6280) with pdu=0x2000190fef90 00:22:45.517 [2024-12-03 00:56:57.850306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.517 [2024-12-03 00:56:57.850326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:45.517 [2024-12-03 00:56:57.854722] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd6280) with pdu=0x2000190fef90 00:22:45.517 [2024-12-03 00:56:57.854890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.517 [2024-12-03 00:56:57.854910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:45.517 [2024-12-03 00:56:57.858942] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd6280) with pdu=0x2000190fef90 00:22:45.517 [2024-12-03 00:56:57.859156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.517 [2024-12-03 00:56:57.859175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:45.517 [2024-12-03 00:56:57.863324] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd6280) with pdu=0x2000190fef90 00:22:45.517 [2024-12-03 00:56:57.863551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.517 [2024-12-03 00:56:57.863571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:45.517 [2024-12-03 00:56:57.867740] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd6280) with pdu=0x2000190fef90 00:22:45.517 [2024-12-03 
00:56:57.867877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.517 [2024-12-03 00:56:57.867896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:45.517 [2024-12-03 00:56:57.872105] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd6280) with pdu=0x2000190fef90 00:22:45.517 [2024-12-03 00:56:57.872223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.517 [2024-12-03 00:56:57.872242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:45.517 [2024-12-03 00:56:57.876554] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd6280) with pdu=0x2000190fef90 00:22:45.517 [2024-12-03 00:56:57.876733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.517 [2024-12-03 00:56:57.876752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:45.517 [2024-12-03 00:56:57.880838] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd6280) with pdu=0x2000190fef90 00:22:45.517 [2024-12-03 00:56:57.880935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.517 [2024-12-03 00:56:57.880956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:45.517 [2024-12-03 00:56:57.885209] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd6280) with pdu=0x2000190fef90 00:22:45.517 [2024-12-03 00:56:57.885323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.517 [2024-12-03 00:56:57.885345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:45.517 [2024-12-03 00:56:57.889670] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd6280) with pdu=0x2000190fef90 00:22:45.517 [2024-12-03 00:56:57.889839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.517 [2024-12-03 00:56:57.889859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:45.517 [2024-12-03 00:56:57.894302] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd6280) with pdu=0x2000190fef90 00:22:45.517 [2024-12-03 00:56:57.894481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.517 [2024-12-03 00:56:57.894502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:45.517 [2024-12-03 00:56:57.898882] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd6280) with 
pdu=0x2000190fef90 00:22:45.517 [2024-12-03 00:56:57.899074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.517 [2024-12-03 00:56:57.899095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:45.517 [2024-12-03 00:56:57.903233] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd6280) with pdu=0x2000190fef90 00:22:45.517 [2024-12-03 00:56:57.903364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.517 [2024-12-03 00:56:57.903384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:45.517 [2024-12-03 00:56:57.907760] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd6280) with pdu=0x2000190fef90 00:22:45.517 [2024-12-03 00:56:57.907884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.517 [2024-12-03 00:56:57.907905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:45.517 [2024-12-03 00:56:57.912126] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd6280) with pdu=0x2000190fef90 00:22:45.517 [2024-12-03 00:56:57.912270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.517 [2024-12-03 00:56:57.912290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:45.517 [2024-12-03 00:56:57.916600] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd6280) with pdu=0x2000190fef90 00:22:45.517 [2024-12-03 00:56:57.916712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.517 [2024-12-03 00:56:57.916732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:45.517 [2024-12-03 00:56:57.920995] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd6280) with pdu=0x2000190fef90 00:22:45.517 [2024-12-03 00:56:57.921145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.517 [2024-12-03 00:56:57.921165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:45.517 [2024-12-03 00:56:57.925571] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd6280) with pdu=0x2000190fef90 00:22:45.517 [2024-12-03 00:56:57.925773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.517 [2024-12-03 00:56:57.925793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:45.517 [2024-12-03 00:56:57.929870] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error 
on tqpair=(0x1bd6280) with pdu=0x2000190fef90 00:22:45.517 [2024-12-03 00:56:57.930062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.517 [2024-12-03 00:56:57.930081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:45.517 [2024-12-03 00:56:57.934580] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd6280) with pdu=0x2000190fef90 00:22:45.517 [2024-12-03 00:56:57.934784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.517 [2024-12-03 00:56:57.934805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:45.517 [2024-12-03 00:56:57.939009] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd6280) with pdu=0x2000190fef90 00:22:45.517 [2024-12-03 00:56:57.939152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.517 [2024-12-03 00:56:57.939171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:45.517 [2024-12-03 00:56:57.943417] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd6280) with pdu=0x2000190fef90 00:22:45.518 [2024-12-03 00:56:57.943582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.518 [2024-12-03 00:56:57.943602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:45.518 [2024-12-03 00:56:57.947908] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd6280) with pdu=0x2000190fef90 00:22:45.518 [2024-12-03 00:56:57.948069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.518 [2024-12-03 00:56:57.948089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:45.518 [2024-12-03 00:56:57.952362] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd6280) with pdu=0x2000190fef90 00:22:45.518 [2024-12-03 00:56:57.952513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.518 [2024-12-03 00:56:57.952534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:45.518 [2024-12-03 00:56:57.956749] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd6280) with pdu=0x2000190fef90 00:22:45.518 [2024-12-03 00:56:57.956882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.518 [2024-12-03 00:56:57.956902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:45.518 [2024-12-03 00:56:57.961044] tcp.c:2036:data_crc32_calc_done: 
*ERROR*: Data digest error on tqpair=(0x1bd6280) with pdu=0x2000190fef90 00:22:45.518 [2024-12-03 00:56:57.961213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.518 [2024-12-03 00:56:57.961232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:45.518 [2024-12-03 00:56:57.965393] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd6280) with pdu=0x2000190fef90 00:22:45.518 [2024-12-03 00:56:57.965624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.518 [2024-12-03 00:56:57.965645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:45.518 [2024-12-03 00:56:57.969779] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd6280) with pdu=0x2000190fef90 00:22:45.518 [2024-12-03 00:56:57.969983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.518 [2024-12-03 00:56:57.970003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:45.518 [2024-12-03 00:56:57.974096] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd6280) with pdu=0x2000190fef90 00:22:45.518 [2024-12-03 00:56:57.974285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.518 [2024-12-03 00:56:57.974305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:45.518 [2024-12-03 00:56:57.978530] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd6280) with pdu=0x2000190fef90 00:22:45.518 [2024-12-03 00:56:57.978654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.518 [2024-12-03 00:56:57.978674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:45.518 [2024-12-03 00:56:57.982992] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd6280) with pdu=0x2000190fef90 00:22:45.518 [2024-12-03 00:56:57.983154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.518 [2024-12-03 00:56:57.983174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:45.518 [2024-12-03 00:56:57.987262] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd6280) with pdu=0x2000190fef90 00:22:45.518 [2024-12-03 00:56:57.987422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.518 [2024-12-03 00:56:57.987443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:45.518 [2024-12-03 00:56:57.991573] 
tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd6280) with pdu=0x2000190fef90 00:22:45.518 [2024-12-03 00:56:57.991704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.518 [2024-12-03 00:56:57.991723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:45.518 [2024-12-03 00:56:57.995917] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd6280) with pdu=0x2000190fef90 00:22:45.518 [2024-12-03 00:56:57.996089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.518 [2024-12-03 00:56:57.996109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:45.518 [2024-12-03 00:56:58.000179] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd6280) with pdu=0x2000190fef90 00:22:45.518 [2024-12-03 00:56:58.000436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.518 [2024-12-03 00:56:58.000468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:45.518 [2024-12-03 00:56:58.004499] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd6280) with pdu=0x2000190fef90 00:22:45.518 [2024-12-03 00:56:58.004711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.518 [2024-12-03 00:56:58.004743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:45.518 [2024-12-03 00:56:58.008766] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd6280) with pdu=0x2000190fef90 00:22:45.518 [2024-12-03 00:56:58.008863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:64 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.518 [2024-12-03 00:56:58.008883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:45.518 [2024-12-03 00:56:58.012968] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd6280) with pdu=0x2000190fef90 00:22:45.518 [2024-12-03 00:56:58.013066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.518 [2024-12-03 00:56:58.013086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:45.518 [2024-12-03 00:56:58.017252] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd6280) with pdu=0x2000190fef90 00:22:45.518 [2024-12-03 00:56:58.017407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.518 [2024-12-03 00:56:58.017437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:45.518 
[2024-12-03 00:56:58.021616] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd6280) with pdu=0x2000190fef90 00:22:45.518 [2024-12-03 00:56:58.021732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.518 [2024-12-03 00:56:58.021752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:45.518 [2024-12-03 00:56:58.026220] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd6280) with pdu=0x2000190fef90 00:22:45.518 [2024-12-03 00:56:58.026312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.518 [2024-12-03 00:56:58.026332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:45.778 [2024-12-03 00:56:58.031087] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd6280) with pdu=0x2000190fef90 00:22:45.778 [2024-12-03 00:56:58.031255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.778 [2024-12-03 00:56:58.031275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:45.778 [2024-12-03 00:56:58.035885] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd6280) with pdu=0x2000190fef90 00:22:45.778 [2024-12-03 00:56:58.036164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.778 [2024-12-03 00:56:58.036186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:45.778 [2024-12-03 00:56:58.040251] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd6280) with pdu=0x2000190fef90 00:22:45.778 [2024-12-03 00:56:58.040485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.778 [2024-12-03 00:56:58.040506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:45.778 [2024-12-03 00:56:58.044762] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd6280) with pdu=0x2000190fef90 00:22:45.778 [2024-12-03 00:56:58.044922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.778 [2024-12-03 00:56:58.044953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:45.778 [2024-12-03 00:56:58.049059] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd6280) with pdu=0x2000190fef90 00:22:45.778 [2024-12-03 00:56:58.049192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.778 [2024-12-03 00:56:58.049212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 
sqhd:0041 p:0 m:0 dnr:0 00:22:45.778 [2024-12-03 00:56:58.053392] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd6280) with pdu=0x2000190fef90 00:22:45.778 [2024-12-03 00:56:58.053568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.778 [2024-12-03 00:56:58.053588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:45.778 [2024-12-03 00:56:58.057820] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd6280) with pdu=0x2000190fef90 00:22:45.778 [2024-12-03 00:56:58.057951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.778 [2024-12-03 00:56:58.057971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:45.778 [2024-12-03 00:56:58.062299] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd6280) with pdu=0x2000190fef90 00:22:45.778 [2024-12-03 00:56:58.062431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.778 [2024-12-03 00:56:58.062452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:45.778 [2024-12-03 00:56:58.066711] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd6280) with pdu=0x2000190fef90 00:22:45.778 [2024-12-03 00:56:58.066904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.778 [2024-12-03 00:56:58.066935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:45.778 [2024-12-03 00:56:58.070994] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd6280) with pdu=0x2000190fef90 00:22:45.778 [2024-12-03 00:56:58.071226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.778 [2024-12-03 00:56:58.071260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:45.778 [2024-12-03 00:56:58.075361] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd6280) with pdu=0x2000190fef90 00:22:45.778 [2024-12-03 00:56:58.075527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.778 [2024-12-03 00:56:58.075548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:45.778 [2024-12-03 00:56:58.079585] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd6280) with pdu=0x2000190fef90 00:22:45.778 [2024-12-03 00:56:58.079723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.778 [2024-12-03 00:56:58.079744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:45.778 [2024-12-03 00:56:58.083803] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd6280) with pdu=0x2000190fef90 00:22:45.778 [2024-12-03 00:56:58.083951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.778 [2024-12-03 00:56:58.083971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:45.778 [2024-12-03 00:56:58.088110] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd6280) with pdu=0x2000190fef90 00:22:45.778 [2024-12-03 00:56:58.088281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.778 [2024-12-03 00:56:58.088301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:45.779 [2024-12-03 00:56:58.092513] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd6280) with pdu=0x2000190fef90 00:22:45.779 [2024-12-03 00:56:58.092646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.779 [2024-12-03 00:56:58.092666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:45.779 [2024-12-03 00:56:58.096750] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd6280) with pdu=0x2000190fef90 00:22:45.779 [2024-12-03 00:56:58.096881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.779 [2024-12-03 00:56:58.096901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:45.779 [2024-12-03 00:56:58.101037] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd6280) with pdu=0x2000190fef90 00:22:45.779 [2024-12-03 00:56:58.101207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.779 [2024-12-03 00:56:58.101226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:45.779 [2024-12-03 00:56:58.105398] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd6280) with pdu=0x2000190fef90 00:22:45.779 [2024-12-03 00:56:58.105651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.779 [2024-12-03 00:56:58.105688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:45.779 [2024-12-03 00:56:58.109790] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd6280) with pdu=0x2000190fef90 00:22:45.779 [2024-12-03 00:56:58.110002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.779 [2024-12-03 00:56:58.110022] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:45.779 [2024-12-03 00:56:58.114289] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd6280) with pdu=0x2000190fef90 00:22:45.779 [2024-12-03 00:56:58.114440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.779 [2024-12-03 00:56:58.114462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:45.779 [2024-12-03 00:56:58.118776] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd6280) with pdu=0x2000190fef90 00:22:45.779 [2024-12-03 00:56:58.118922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.779 [2024-12-03 00:56:58.118942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:45.779 [2024-12-03 00:56:58.123050] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd6280) with pdu=0x2000190fef90 00:22:45.779 [2024-12-03 00:56:58.123195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.779 [2024-12-03 00:56:58.123215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:45.779 [2024-12-03 00:56:58.127527] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd6280) with pdu=0x2000190fef90 00:22:45.779 [2024-12-03 00:56:58.127647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.779 [2024-12-03 00:56:58.127667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:45.779 [2024-12-03 00:56:58.131851] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd6280) with pdu=0x2000190fef90 00:22:45.779 [2024-12-03 00:56:58.131983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.779 [2024-12-03 00:56:58.132003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:45.779 [2024-12-03 00:56:58.136278] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd6280) with pdu=0x2000190fef90 00:22:45.779 [2024-12-03 00:56:58.136457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.779 [2024-12-03 00:56:58.136477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:45.779 [2024-12-03 00:56:58.140604] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd6280) with pdu=0x2000190fef90 00:22:45.779 [2024-12-03 00:56:58.140817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.779 
[2024-12-03 00:56:58.140836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:45.779 [2024-12-03 00:56:58.144874] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd6280) with pdu=0x2000190fef90 00:22:45.779 [2024-12-03 00:56:58.145069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.779 [2024-12-03 00:56:58.145088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:45.779 [2024-12-03 00:56:58.149215] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd6280) with pdu=0x2000190fef90 00:22:45.779 [2024-12-03 00:56:58.149355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.779 [2024-12-03 00:56:58.149375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:45.779 [2024-12-03 00:56:58.153581] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd6280) with pdu=0x2000190fef90 00:22:45.779 [2024-12-03 00:56:58.153694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.779 [2024-12-03 00:56:58.153714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:45.779 [2024-12-03 00:56:58.157924] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd6280) with pdu=0x2000190fef90 00:22:45.779 [2024-12-03 00:56:58.158074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.779 [2024-12-03 00:56:58.158094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:45.779 [2024-12-03 00:56:58.162242] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd6280) with pdu=0x2000190fef90 00:22:45.779 [2024-12-03 00:56:58.162376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.779 [2024-12-03 00:56:58.162396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:45.779 [2024-12-03 00:56:58.166625] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd6280) with pdu=0x2000190fef90 00:22:45.779 [2024-12-03 00:56:58.166779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.779 [2024-12-03 00:56:58.166799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:45.779 [2024-12-03 00:56:58.171076] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd6280) with pdu=0x2000190fef90 00:22:45.779 [2024-12-03 00:56:58.171242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4224 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.779 [2024-12-03 00:56:58.171262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:45.779 [2024-12-03 00:56:58.175494] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd6280) with pdu=0x2000190fef90 00:22:45.779 [2024-12-03 00:56:58.175725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.779 [2024-12-03 00:56:58.175744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:45.779 [2024-12-03 00:56:58.179762] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd6280) with pdu=0x2000190fef90 00:22:45.779 [2024-12-03 00:56:58.179980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.779 [2024-12-03 00:56:58.180000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:45.779 [2024-12-03 00:56:58.184216] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd6280) with pdu=0x2000190fef90 00:22:45.779 [2024-12-03 00:56:58.184357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.779 [2024-12-03 00:56:58.184377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:45.779 [2024-12-03 00:56:58.188585] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd6280) with pdu=0x2000190fef90 00:22:45.779 [2024-12-03 00:56:58.188715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.779 [2024-12-03 00:56:58.188735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:45.779 [2024-12-03 00:56:58.192900] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd6280) with pdu=0x2000190fef90 00:22:45.779 [2024-12-03 00:56:58.193073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.779 [2024-12-03 00:56:58.193093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:45.779 [2024-12-03 00:56:58.197304] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd6280) with pdu=0x2000190fef90 00:22:45.779 [2024-12-03 00:56:58.197488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.779 [2024-12-03 00:56:58.197508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:45.779 [2024-12-03 00:56:58.201602] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd6280) with pdu=0x2000190fef90 00:22:45.779 [2024-12-03 00:56:58.201699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:15 nsid:1 lba:20608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.779 [2024-12-03 00:56:58.201720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:45.779 [2024-12-03 00:56:58.205990] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd6280) with pdu=0x2000190fef90 00:22:45.779 [2024-12-03 00:56:58.206157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.779 [2024-12-03 00:56:58.206185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:45.780 [2024-12-03 00:56:58.210326] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd6280) with pdu=0x2000190fef90 00:22:45.780 [2024-12-03 00:56:58.210518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.780 [2024-12-03 00:56:58.210549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:45.780 [2024-12-03 00:56:58.214670] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd6280) with pdu=0x2000190fef90 00:22:45.780 [2024-12-03 00:56:58.214810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.780 [2024-12-03 00:56:58.214830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:45.780 [2024-12-03 00:56:58.219010] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd6280) with pdu=0x2000190fef90 00:22:45.780 [2024-12-03 00:56:58.219155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.780 [2024-12-03 00:56:58.219176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:45.780 [2024-12-03 00:56:58.223252] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd6280) with pdu=0x2000190fef90 00:22:45.780 [2024-12-03 00:56:58.223356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.780 [2024-12-03 00:56:58.223376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:45.780 [2024-12-03 00:56:58.227689] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd6280) with pdu=0x2000190fef90 00:22:45.780 [2024-12-03 00:56:58.227836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.780 [2024-12-03 00:56:58.227856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:45.780 [2024-12-03 00:56:58.231958] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd6280) with pdu=0x2000190fef90 00:22:45.780 [2024-12-03 00:56:58.232108] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.780 [2024-12-03 00:56:58.232127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:45.780 [2024-12-03 00:56:58.236431] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd6280) with pdu=0x2000190fef90 00:22:45.780 [2024-12-03 00:56:58.236528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.780 [2024-12-03 00:56:58.236549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:45.780 [2024-12-03 00:56:58.240751] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd6280) with pdu=0x2000190fef90 00:22:45.780 [2024-12-03 00:56:58.240919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.780 [2024-12-03 00:56:58.240939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:45.780 [2024-12-03 00:56:58.244974] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd6280) with pdu=0x2000190fef90 00:22:45.780 [2024-12-03 00:56:58.245198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.780 [2024-12-03 00:56:58.245224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:45.780 [2024-12-03 00:56:58.249353] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd6280) with pdu=0x2000190fef90 00:22:45.780 [2024-12-03 00:56:58.249583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.780 [2024-12-03 00:56:58.249604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:45.780 00:22:45.780 Latency(us) 00:22:45.780 [2024-12-03T00:56:58.295Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:45.780 [2024-12-03T00:56:58.295Z] Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:22:45.780 nvme0n1 : 2.00 6949.76 868.72 0.00 0.00 2297.87 1668.19 12213.53 00:22:45.780 [2024-12-03T00:56:58.295Z] =================================================================================================================== 00:22:45.780 [2024-12-03T00:56:58.295Z] Total : 6949.76 868.72 0.00 0.00 2297.87 1668.19 12213.53 00:22:45.780 0 00:22:45.780 00:56:58 -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:22:45.780 00:56:58 -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:22:45.780 00:56:58 -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:22:45.780 | .driver_specific 00:22:45.780 | .nvme_error 00:22:45.780 | .status_code 00:22:45.780 | .command_transient_transport_error' 00:22:45.780 00:56:58 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:22:46.038 00:56:58 -- host/digest.sh@71 -- # (( 448 > 0 )) 
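The trace above is the pass/fail check for the digest-error run: host/digest.sh pulls the bdev I/O statistics over the bperf RPC socket and asserts that the transient transport error counter is non-zero (448 in this run), confirming the injected data-digest failures reached the error-accounting path. A minimal standalone sketch of that check, reassembled from the traced commands (the rpc.py path, socket name and jq filter are taken verbatim from the trace; this is an illustration, not the helper's exact source):

  # Count transient transport errors recorded against a bdev and assert
  # that the injected data-digest failures were actually observed.
  get_transient_errcount() {
      local bdev=$1
      /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock \
          bdev_get_iostat -b "$bdev" |
          jq -r '.bdevs[0]
                 | .driver_specific
                 | .nvme_error
                 | .status_code
                 | .command_transient_transport_error'
  }

  errcount=$(get_transient_errcount nvme0n1)
  (( errcount > 0 ))   # this run reported 448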
00:22:46.039 00:56:58 -- host/digest.sh@73 -- # killprocess 98105 00:22:46.039 00:56:58 -- common/autotest_common.sh@936 -- # '[' -z 98105 ']' 00:22:46.039 00:56:58 -- common/autotest_common.sh@940 -- # kill -0 98105 00:22:46.039 00:56:58 -- common/autotest_common.sh@941 -- # uname 00:22:46.039 00:56:58 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:22:46.039 00:56:58 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 98105 00:22:46.297 00:56:58 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:22:46.297 killing process with pid 98105 00:22:46.297 00:56:58 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:22:46.297 00:56:58 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 98105' 00:22:46.297 Received shutdown signal, test time was about 2.000000 seconds 00:22:46.297 00:22:46.297 Latency(us) 00:22:46.297 [2024-12-03T00:56:58.812Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:46.297 [2024-12-03T00:56:58.812Z] =================================================================================================================== 00:22:46.297 [2024-12-03T00:56:58.812Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:22:46.297 00:56:58 -- common/autotest_common.sh@955 -- # kill 98105 00:22:46.297 00:56:58 -- common/autotest_common.sh@960 -- # wait 98105 00:22:46.556 00:56:58 -- host/digest.sh@115 -- # killprocess 97794 00:22:46.556 00:56:58 -- common/autotest_common.sh@936 -- # '[' -z 97794 ']' 00:22:46.556 00:56:58 -- common/autotest_common.sh@940 -- # kill -0 97794 00:22:46.556 00:56:58 -- common/autotest_common.sh@941 -- # uname 00:22:46.556 00:56:58 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:22:46.556 00:56:58 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 97794 00:22:46.556 00:56:58 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:22:46.556 00:56:58 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:22:46.556 killing process with pid 97794 00:22:46.556 00:56:58 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 97794' 00:22:46.556 00:56:58 -- common/autotest_common.sh@955 -- # kill 97794 00:22:46.556 00:56:58 -- common/autotest_common.sh@960 -- # wait 97794 00:22:46.556 00:22:46.556 real 0m18.172s 00:22:46.556 user 0m33.267s 00:22:46.556 sys 0m5.428s 00:22:46.556 00:56:59 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:22:46.556 00:56:59 -- common/autotest_common.sh@10 -- # set +x 00:22:46.556 ************************************ 00:22:46.556 END TEST nvmf_digest_error 00:22:46.556 ************************************ 00:22:46.815 00:56:59 -- host/digest.sh@138 -- # trap - SIGINT SIGTERM EXIT 00:22:46.815 00:56:59 -- host/digest.sh@139 -- # nvmftestfini 00:22:46.815 00:56:59 -- nvmf/common.sh@476 -- # nvmfcleanup 00:22:46.815 00:56:59 -- nvmf/common.sh@116 -- # sync 00:22:46.815 00:56:59 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:22:46.815 00:56:59 -- nvmf/common.sh@119 -- # set +e 00:22:46.815 00:56:59 -- nvmf/common.sh@120 -- # for i in {1..20} 00:22:46.815 00:56:59 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:22:46.815 rmmod nvme_tcp 00:22:46.815 rmmod nvme_fabrics 00:22:46.815 rmmod nvme_keyring 00:22:46.815 00:56:59 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:22:46.815 00:56:59 -- nvmf/common.sh@123 -- # set -e 00:22:46.815 00:56:59 -- nvmf/common.sh@124 -- # return 0 00:22:46.815 00:56:59 -- nvmf/common.sh@477 -- # '[' -n 97794 ']' 00:22:46.815 00:56:59 -- 
nvmf/common.sh@478 -- # killprocess 97794 00:22:46.815 00:56:59 -- common/autotest_common.sh@936 -- # '[' -z 97794 ']' 00:22:46.815 00:56:59 -- common/autotest_common.sh@940 -- # kill -0 97794 00:22:46.815 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 940: kill: (97794) - No such process 00:22:46.815 Process with pid 97794 is not found 00:22:46.815 00:56:59 -- common/autotest_common.sh@963 -- # echo 'Process with pid 97794 is not found' 00:22:46.815 00:56:59 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:22:46.815 00:56:59 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:22:46.815 00:56:59 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:22:46.815 00:56:59 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:22:46.815 00:56:59 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:22:46.815 00:56:59 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:46.815 00:56:59 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:46.815 00:56:59 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:46.815 00:56:59 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:22:46.815 00:22:46.815 real 0m35.552s 00:22:46.815 user 1m3.172s 00:22:46.815 sys 0m11.156s 00:22:46.815 00:56:59 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:22:46.815 00:56:59 -- common/autotest_common.sh@10 -- # set +x 00:22:46.815 ************************************ 00:22:46.815 END TEST nvmf_digest 00:22:46.815 ************************************ 00:22:46.815 00:56:59 -- nvmf/nvmf.sh@110 -- # [[ 1 -eq 1 ]] 00:22:46.815 00:56:59 -- nvmf/nvmf.sh@110 -- # [[ tcp == \t\c\p ]] 00:22:46.815 00:56:59 -- nvmf/nvmf.sh@112 -- # run_test nvmf_mdns_discovery /home/vagrant/spdk_repo/spdk/test/nvmf/host/mdns_discovery.sh --transport=tcp 00:22:46.815 00:56:59 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:22:46.815 00:56:59 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:22:46.815 00:56:59 -- common/autotest_common.sh@10 -- # set +x 00:22:46.815 ************************************ 00:22:46.815 START TEST nvmf_mdns_discovery 00:22:46.815 ************************************ 00:22:46.815 00:56:59 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/mdns_discovery.sh --transport=tcp 00:22:47.074 * Looking for test storage... 
00:22:47.074 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:22:47.074 00:56:59 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:22:47.074 00:56:59 -- common/autotest_common.sh@1690 -- # lcov --version 00:22:47.074 00:56:59 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:22:47.074 00:56:59 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:22:47.074 00:56:59 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:22:47.074 00:56:59 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:22:47.074 00:56:59 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:22:47.074 00:56:59 -- scripts/common.sh@335 -- # IFS=.-: 00:22:47.074 00:56:59 -- scripts/common.sh@335 -- # read -ra ver1 00:22:47.074 00:56:59 -- scripts/common.sh@336 -- # IFS=.-: 00:22:47.074 00:56:59 -- scripts/common.sh@336 -- # read -ra ver2 00:22:47.074 00:56:59 -- scripts/common.sh@337 -- # local 'op=<' 00:22:47.074 00:56:59 -- scripts/common.sh@339 -- # ver1_l=2 00:22:47.074 00:56:59 -- scripts/common.sh@340 -- # ver2_l=1 00:22:47.074 00:56:59 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:22:47.074 00:56:59 -- scripts/common.sh@343 -- # case "$op" in 00:22:47.074 00:56:59 -- scripts/common.sh@344 -- # : 1 00:22:47.074 00:56:59 -- scripts/common.sh@363 -- # (( v = 0 )) 00:22:47.074 00:56:59 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:22:47.074 00:56:59 -- scripts/common.sh@364 -- # decimal 1 00:22:47.074 00:56:59 -- scripts/common.sh@352 -- # local d=1 00:22:47.074 00:56:59 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:22:47.074 00:56:59 -- scripts/common.sh@354 -- # echo 1 00:22:47.074 00:56:59 -- scripts/common.sh@364 -- # ver1[v]=1 00:22:47.074 00:56:59 -- scripts/common.sh@365 -- # decimal 2 00:22:47.074 00:56:59 -- scripts/common.sh@352 -- # local d=2 00:22:47.074 00:56:59 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:22:47.074 00:56:59 -- scripts/common.sh@354 -- # echo 2 00:22:47.074 00:56:59 -- scripts/common.sh@365 -- # ver2[v]=2 00:22:47.074 00:56:59 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:22:47.074 00:56:59 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:22:47.074 00:56:59 -- scripts/common.sh@367 -- # return 0 00:22:47.074 00:56:59 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:22:47.074 00:56:59 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:22:47.074 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:47.074 --rc genhtml_branch_coverage=1 00:22:47.074 --rc genhtml_function_coverage=1 00:22:47.074 --rc genhtml_legend=1 00:22:47.074 --rc geninfo_all_blocks=1 00:22:47.074 --rc geninfo_unexecuted_blocks=1 00:22:47.074 00:22:47.074 ' 00:22:47.074 00:56:59 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:22:47.074 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:47.074 --rc genhtml_branch_coverage=1 00:22:47.074 --rc genhtml_function_coverage=1 00:22:47.074 --rc genhtml_legend=1 00:22:47.074 --rc geninfo_all_blocks=1 00:22:47.074 --rc geninfo_unexecuted_blocks=1 00:22:47.074 00:22:47.074 ' 00:22:47.074 00:56:59 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:22:47.074 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:47.074 --rc genhtml_branch_coverage=1 00:22:47.074 --rc genhtml_function_coverage=1 00:22:47.074 --rc genhtml_legend=1 00:22:47.075 --rc geninfo_all_blocks=1 00:22:47.075 --rc geninfo_unexecuted_blocks=1 00:22:47.075 00:22:47.075 ' 00:22:47.075 
00:56:59 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:22:47.075 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:47.075 --rc genhtml_branch_coverage=1 00:22:47.075 --rc genhtml_function_coverage=1 00:22:47.075 --rc genhtml_legend=1 00:22:47.075 --rc geninfo_all_blocks=1 00:22:47.075 --rc geninfo_unexecuted_blocks=1 00:22:47.075 00:22:47.075 ' 00:22:47.075 00:56:59 -- host/mdns_discovery.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:22:47.075 00:56:59 -- nvmf/common.sh@7 -- # uname -s 00:22:47.075 00:56:59 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:47.075 00:56:59 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:47.075 00:56:59 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:47.075 00:56:59 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:47.075 00:56:59 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:47.075 00:56:59 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:47.075 00:56:59 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:47.075 00:56:59 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:47.075 00:56:59 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:47.075 00:56:59 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:47.075 00:56:59 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:15939434-fa82-47b6-ae7c-b62ba203cee8 00:22:47.075 00:56:59 -- nvmf/common.sh@18 -- # NVME_HOSTID=15939434-fa82-47b6-ae7c-b62ba203cee8 00:22:47.075 00:56:59 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:47.075 00:56:59 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:47.075 00:56:59 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:22:47.075 00:56:59 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:22:47.075 00:56:59 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:47.075 00:56:59 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:47.075 00:56:59 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:47.075 00:56:59 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:47.075 00:56:59 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:47.075 00:56:59 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:47.075 00:56:59 -- paths/export.sh@5 -- # export PATH 00:22:47.075 00:56:59 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:47.075 00:56:59 -- nvmf/common.sh@46 -- # : 0 00:22:47.075 00:56:59 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:22:47.075 00:56:59 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:22:47.075 00:56:59 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:22:47.075 00:56:59 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:47.075 00:56:59 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:47.075 00:56:59 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:22:47.075 00:56:59 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:22:47.075 00:56:59 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:22:47.075 00:56:59 -- host/mdns_discovery.sh@12 -- # DISCOVERY_FILTER=address 00:22:47.075 00:56:59 -- host/mdns_discovery.sh@13 -- # DISCOVERY_PORT=8009 00:22:47.075 00:56:59 -- host/mdns_discovery.sh@14 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:22:47.075 00:56:59 -- host/mdns_discovery.sh@17 -- # NQN=nqn.2016-06.io.spdk:cnode 00:22:47.075 00:56:59 -- host/mdns_discovery.sh@18 -- # NQN2=nqn.2016-06.io.spdk:cnode2 00:22:47.075 00:56:59 -- host/mdns_discovery.sh@20 -- # HOST_NQN=nqn.2021-12.io.spdk:test 00:22:47.075 00:56:59 -- host/mdns_discovery.sh@21 -- # HOST_SOCK=/tmp/host.sock 00:22:47.075 00:56:59 -- host/mdns_discovery.sh@23 -- # nvmftestinit 00:22:47.075 00:56:59 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:22:47.075 00:56:59 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:47.075 00:56:59 -- nvmf/common.sh@436 -- # prepare_net_devs 00:22:47.075 00:56:59 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:22:47.075 00:56:59 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:22:47.075 00:56:59 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:47.075 00:56:59 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:47.075 00:56:59 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:47.075 00:56:59 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:22:47.075 00:56:59 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:22:47.075 00:56:59 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:22:47.075 00:56:59 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:22:47.075 00:56:59 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:22:47.075 00:56:59 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:22:47.075 00:56:59 -- nvmf/common.sh@140 -- # 
NVMF_INITIATOR_IP=10.0.0.1 00:22:47.075 00:56:59 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:47.075 00:56:59 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:22:47.075 00:56:59 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:22:47.075 00:56:59 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:22:47.075 00:56:59 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:22:47.075 00:56:59 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:22:47.075 00:56:59 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:47.075 00:56:59 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:22:47.075 00:56:59 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:22:47.075 00:56:59 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:22:47.075 00:56:59 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:22:47.075 00:56:59 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:22:47.075 00:56:59 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:22:47.075 Cannot find device "nvmf_tgt_br" 00:22:47.075 00:56:59 -- nvmf/common.sh@154 -- # true 00:22:47.075 00:56:59 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:22:47.075 Cannot find device "nvmf_tgt_br2" 00:22:47.075 00:56:59 -- nvmf/common.sh@155 -- # true 00:22:47.075 00:56:59 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:22:47.075 00:56:59 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:22:47.075 Cannot find device "nvmf_tgt_br" 00:22:47.075 00:56:59 -- nvmf/common.sh@157 -- # true 00:22:47.075 00:56:59 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:22:47.075 Cannot find device "nvmf_tgt_br2" 00:22:47.075 00:56:59 -- nvmf/common.sh@158 -- # true 00:22:47.075 00:56:59 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:22:47.335 00:56:59 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:22:47.335 00:56:59 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:22:47.335 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:22:47.335 00:56:59 -- nvmf/common.sh@161 -- # true 00:22:47.335 00:56:59 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:22:47.335 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:22:47.335 00:56:59 -- nvmf/common.sh@162 -- # true 00:22:47.335 00:56:59 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:22:47.335 00:56:59 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:22:47.335 00:56:59 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:22:47.335 00:56:59 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:22:47.335 00:56:59 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:22:47.335 00:56:59 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:22:47.335 00:56:59 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:22:47.335 00:56:59 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:22:47.335 00:56:59 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:22:47.335 00:56:59 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:22:47.335 00:56:59 -- nvmf/common.sh@183 -- # ip 
link set nvmf_init_br up 00:22:47.335 00:56:59 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:22:47.335 00:56:59 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:22:47.335 00:56:59 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:22:47.335 00:56:59 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:22:47.335 00:56:59 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:22:47.335 00:56:59 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:22:47.335 00:56:59 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:22:47.335 00:56:59 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:22:47.335 00:56:59 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:22:47.335 00:56:59 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:22:47.335 00:56:59 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:22:47.335 00:56:59 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:22:47.335 00:56:59 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:22:47.335 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:47.335 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.120 ms 00:22:47.335 00:22:47.335 --- 10.0.0.2 ping statistics --- 00:22:47.335 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:47.335 rtt min/avg/max/mdev = 0.120/0.120/0.120/0.000 ms 00:22:47.335 00:56:59 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:22:47.335 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:22:47.335 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.045 ms 00:22:47.335 00:22:47.335 --- 10.0.0.3 ping statistics --- 00:22:47.335 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:47.335 rtt min/avg/max/mdev = 0.045/0.045/0.045/0.000 ms 00:22:47.335 00:56:59 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:22:47.335 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:22:47.335 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.025 ms 00:22:47.335 00:22:47.335 --- 10.0.0.1 ping statistics --- 00:22:47.335 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:47.335 rtt min/avg/max/mdev = 0.025/0.025/0.025/0.000 ms 00:22:47.335 00:56:59 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:47.335 00:56:59 -- nvmf/common.sh@421 -- # return 0 00:22:47.335 00:56:59 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:22:47.335 00:56:59 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:47.335 00:56:59 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:22:47.335 00:56:59 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:22:47.335 00:56:59 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:47.335 00:56:59 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:22:47.335 00:56:59 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:22:47.335 00:56:59 -- host/mdns_discovery.sh@28 -- # nvmfappstart -m 0x2 --wait-for-rpc 00:22:47.335 00:56:59 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:22:47.335 00:56:59 -- common/autotest_common.sh@722 -- # xtrace_disable 00:22:47.335 00:56:59 -- common/autotest_common.sh@10 -- # set +x 00:22:47.335 00:56:59 -- nvmf/common.sh@469 -- # nvmfpid=98407 00:22:47.335 00:56:59 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc 00:22:47.335 00:56:59 -- nvmf/common.sh@470 -- # waitforlisten 98407 00:22:47.335 00:56:59 -- common/autotest_common.sh@829 -- # '[' -z 98407 ']' 00:22:47.335 00:56:59 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:47.335 00:56:59 -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:47.335 00:56:59 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:47.335 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:47.335 00:56:59 -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:47.335 00:56:59 -- common/autotest_common.sh@10 -- # set +x 00:22:47.594 [2024-12-03 00:56:59.895066] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:22:47.594 [2024-12-03 00:56:59.895156] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:47.594 [2024-12-03 00:57:00.034801] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:47.853 [2024-12-03 00:57:00.112300] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:22:47.853 [2024-12-03 00:57:00.112491] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:47.853 [2024-12-03 00:57:00.112505] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:47.853 [2024-12-03 00:57:00.112514] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
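Before the target application is started (visible at the end of the block above), nvmf/common.sh tears down any leftover interfaces and rebuilds the TCP test fixture: a network namespace (nvmf_tgt_ns_spdk) holding two veth endpoints (nvmf_tgt_if at 10.0.0.2, nvmf_tgt_if2 at 10.0.0.3), the initiator-side veth (nvmf_init_if at 10.0.0.1), a bridge (nvmf_br) joining the peer ends, an iptables ACCEPT rule for port 4420, and ping checks in both directions. A minimal standalone sketch of the same topology (names and addresses taken from the xtrace above, run as root; this is not the verbatim nvmf/common.sh code):

```bash
#!/usr/bin/env bash
# Sketch of the veth/namespace fixture shown in the xtrace above.
set -e
NS=nvmf_tgt_ns_spdk

ip netns add "$NS"

# veth pairs: one end stays in the root namespace, the other moves into the target namespace
ip link add nvmf_init_if type veth peer name nvmf_init_br
ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
ip link set nvmf_tgt_if  netns "$NS"
ip link set nvmf_tgt_if2 netns "$NS"

# addresses from the log: initiator 10.0.0.1, targets 10.0.0.2 and 10.0.0.3
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec "$NS" ip addr add 10.0.0.2/24 dev nvmf_tgt_if
ip netns exec "$NS" ip addr add 10.0.0.3/24 dev nvmf_tgt_if2

# bring everything up and bridge the root-namespace peers together
ip link set nvmf_init_if up
ip link set nvmf_init_br up
ip link set nvmf_tgt_br  up
ip link set nvmf_tgt_br2 up
ip netns exec "$NS" ip link set nvmf_tgt_if  up
ip netns exec "$NS" ip link set nvmf_tgt_if2 up
ip netns exec "$NS" ip link set lo up
ip link add nvmf_br type bridge
ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br  master nvmf_br
ip link set nvmf_tgt_br2 master nvmf_br

# allow NVMe/TCP traffic to 4420, allow forwarding across the bridge, then sanity-check with ping
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
ping -c 1 10.0.0.2
ping -c 1 10.0.0.3
ip netns exec "$NS" ping -c 1 10.0.0.1
```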
00:22:47.853 [2024-12-03 00:57:00.112548] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:22:47.853 00:57:00 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:47.853 00:57:00 -- common/autotest_common.sh@862 -- # return 0 00:22:47.853 00:57:00 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:22:47.853 00:57:00 -- common/autotest_common.sh@728 -- # xtrace_disable 00:22:47.853 00:57:00 -- common/autotest_common.sh@10 -- # set +x 00:22:47.853 00:57:00 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:47.853 00:57:00 -- host/mdns_discovery.sh@30 -- # rpc_cmd nvmf_set_config --discovery-filter=address 00:22:47.853 00:57:00 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:47.853 00:57:00 -- common/autotest_common.sh@10 -- # set +x 00:22:47.853 00:57:00 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:47.853 00:57:00 -- host/mdns_discovery.sh@31 -- # rpc_cmd framework_start_init 00:22:47.853 00:57:00 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:47.853 00:57:00 -- common/autotest_common.sh@10 -- # set +x 00:22:47.853 00:57:00 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:47.853 00:57:00 -- host/mdns_discovery.sh@32 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:22:47.853 00:57:00 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:47.853 00:57:00 -- common/autotest_common.sh@10 -- # set +x 00:22:47.853 [2024-12-03 00:57:00.323486] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:47.853 00:57:00 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:47.853 00:57:00 -- host/mdns_discovery.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009 00:22:47.853 00:57:00 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:47.853 00:57:00 -- common/autotest_common.sh@10 -- # set +x 00:22:47.853 [2024-12-03 00:57:00.331641] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:22:47.853 00:57:00 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:47.853 00:57:00 -- host/mdns_discovery.sh@35 -- # rpc_cmd bdev_null_create null0 1000 512 00:22:47.853 00:57:00 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:47.853 00:57:00 -- common/autotest_common.sh@10 -- # set +x 00:22:47.853 null0 00:22:47.853 00:57:00 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:47.853 00:57:00 -- host/mdns_discovery.sh@36 -- # rpc_cmd bdev_null_create null1 1000 512 00:22:47.853 00:57:00 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:47.853 00:57:00 -- common/autotest_common.sh@10 -- # set +x 00:22:47.853 null1 00:22:47.853 00:57:00 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:47.853 00:57:00 -- host/mdns_discovery.sh@37 -- # rpc_cmd bdev_null_create null2 1000 512 00:22:47.853 00:57:00 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:47.853 00:57:00 -- common/autotest_common.sh@10 -- # set +x 00:22:47.853 null2 00:22:47.853 00:57:00 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:47.853 00:57:00 -- host/mdns_discovery.sh@38 -- # rpc_cmd bdev_null_create null3 1000 512 00:22:47.853 00:57:00 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:47.853 00:57:00 -- common/autotest_common.sh@10 -- # set +x 00:22:48.112 null3 00:22:48.112 00:57:00 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:48.112 00:57:00 -- host/mdns_discovery.sh@39 -- # rpc_cmd bdev_wait_for_examine 
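At this point the in-namespace nvmf_tgt has been started with core mask 0x2 and --wait-for-rpc (PID 98407 in this run) and is configured over the default /var/tmp/spdk.sock. The rpc_cmd lines above map to the following sequence; a hedged equivalent using SPDK's scripts/rpc.py (the script path is assumed from the repo layout printed in the log) would be:

```bash
RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py   # path assumed from the log's repo layout

# restrict discovery log pages to address-based filtering, then finish framework init
$RPC nvmf_set_config --discovery-filter=address
$RPC framework_start_init

# TCP transport with the options shown in the log (-t tcp -o -u 8192)
$RPC nvmf_create_transport -t tcp -o -u 8192

# discovery service listener on the first target address
$RPC nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009

# four null bdevs (size 1000, block size 512, as in the log) used as namespaces later on
for b in null0 null1 null2 null3; do
    $RPC bdev_null_create "$b" 1000 512
done
$RPC bdev_wait_for_examine
```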
00:22:48.112 00:57:00 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:48.112 00:57:00 -- common/autotest_common.sh@10 -- # set +x 00:22:48.112 00:57:00 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:48.112 00:57:00 -- host/mdns_discovery.sh@47 -- # hostpid=98439 00:22:48.112 00:57:00 -- host/mdns_discovery.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock 00:22:48.112 00:57:00 -- host/mdns_discovery.sh@48 -- # waitforlisten 98439 /tmp/host.sock 00:22:48.112 00:57:00 -- common/autotest_common.sh@829 -- # '[' -z 98439 ']' 00:22:48.112 00:57:00 -- common/autotest_common.sh@833 -- # local rpc_addr=/tmp/host.sock 00:22:48.112 00:57:00 -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:48.112 00:57:00 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:22:48.112 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:22:48.112 00:57:00 -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:48.112 00:57:00 -- common/autotest_common.sh@10 -- # set +x 00:22:48.112 [2024-12-03 00:57:00.433127] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:22:48.112 [2024-12-03 00:57:00.433218] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid98439 ] 00:22:48.112 [2024-12-03 00:57:00.574821] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:48.371 [2024-12-03 00:57:00.643248] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:22:48.371 [2024-12-03 00:57:00.643470] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:22:49.306 00:57:01 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:49.306 00:57:01 -- common/autotest_common.sh@862 -- # return 0 00:22:49.306 00:57:01 -- host/mdns_discovery.sh@50 -- # trap 'process_shm --id $NVMF_APP_SHM_ID;exit 1' SIGINT SIGTERM 00:22:49.306 00:57:01 -- host/mdns_discovery.sh@51 -- # trap 'process_shm --id $NVMF_APP_SHM_ID;nvmftestfini;kill $hostpid;kill $avahi_clientpid;kill $avahipid;' EXIT 00:22:49.306 00:57:01 -- host/mdns_discovery.sh@55 -- # avahi-daemon --kill 00:22:49.306 00:57:01 -- host/mdns_discovery.sh@57 -- # avahipid=98475 00:22:49.306 00:57:01 -- host/mdns_discovery.sh@58 -- # sleep 1 00:22:49.306 00:57:01 -- host/mdns_discovery.sh@56 -- # echo -e '[server]\nallow-interfaces=nvmf_tgt_if,nvmf_tgt_if2\nuse-ipv4=yes\nuse-ipv6=no' 00:22:49.306 00:57:01 -- host/mdns_discovery.sh@56 -- # ip netns exec nvmf_tgt_ns_spdk avahi-daemon -f /dev/fd/63 00:22:49.306 Process 1062 died: No such process; trying to remove PID file. (/run/avahi-daemon//pid) 00:22:49.306 Found user 'avahi' (UID 70) and group 'avahi' (GID 70). 00:22:49.306 Successfully dropped root privileges. 00:22:49.306 avahi-daemon 0.8 starting up. 00:22:49.306 WARNING: No NSS support for mDNS detected, consider installing nss-mdns! 00:22:49.306 Successfully called chroot(). 00:22:49.306 Successfully dropped remaining capabilities. 00:22:49.306 No service file found in /etc/avahi/services. 00:22:50.267 Joining mDNS multicast group on interface nvmf_tgt_if2.IPv4 with address 10.0.0.3. 00:22:50.267 New relevant interface nvmf_tgt_if2.IPv4 for mDNS. 
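The test then launches a second nvmf_tgt as the "host" side (core mask 0x1, RPC socket /tmp/host.sock, PID 98439 here) and starts avahi-daemon inside the target namespace with a generated configuration restricted to the two target interfaces and IPv4, which is what produces the avahi startup messages above. A sketch of that piece, assuming the binary path printed in the log and using a process substitution in place of the log's /dev/fd/63 redirection:

```bash
NS=nvmf_tgt_ns_spdk
NVMF_TGT=/home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt   # path taken from the log

# host-side SPDK app that will run the mDNS discovery client, on its own RPC socket
$NVMF_TGT -m 0x1 -r /tmp/host.sock &
hostpid=$!

# avahi-daemon confined to the target interfaces, IPv4 only (same [server] settings
# that the log's echo -e pipes into avahi-daemon -f /dev/fd/63)
ip netns exec "$NS" avahi-daemon \
    -f <(printf '[server]\nallow-interfaces=nvmf_tgt_if,nvmf_tgt_if2\nuse-ipv4=yes\nuse-ipv6=no\n') &
avahipid=$!

sleep 1   # give avahi time to join the mDNS multicast groups before anything is published
```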
00:22:50.267 Joining mDNS multicast group on interface nvmf_tgt_if.IPv4 with address 10.0.0.2. 00:22:50.267 New relevant interface nvmf_tgt_if.IPv4 for mDNS. 00:22:50.267 Network interface enumeration completed. 00:22:50.267 Registering new address record for fe80::6084:d4ff:fe9b:2260 on nvmf_tgt_if2.*. 00:22:50.267 Registering new address record for 10.0.0.3 on nvmf_tgt_if2.IPv4. 00:22:50.267 Registering new address record for fe80::3c47:c4ff:feac:c7a5 on nvmf_tgt_if.*. 00:22:50.267 Registering new address record for 10.0.0.2 on nvmf_tgt_if.IPv4. 00:22:50.267 Server startup complete. Host name is fedora39-cloud-1721788873-2326.local. Local service cookie is 3034942413. 00:22:50.267 00:57:02 -- host/mdns_discovery.sh@60 -- # rpc_cmd -s /tmp/host.sock log_set_flag bdev_nvme 00:22:50.267 00:57:02 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:50.267 00:57:02 -- common/autotest_common.sh@10 -- # set +x 00:22:50.267 00:57:02 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:50.267 00:57:02 -- host/mdns_discovery.sh@61 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_mdns_discovery -b mdns -s _nvme-disc._tcp -q nqn.2021-12.io.spdk:test 00:22:50.267 00:57:02 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:50.267 00:57:02 -- common/autotest_common.sh@10 -- # set +x 00:22:50.267 00:57:02 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:50.267 00:57:02 -- host/mdns_discovery.sh@85 -- # notify_id=0 00:22:50.267 00:57:02 -- host/mdns_discovery.sh@91 -- # get_subsystem_names 00:22:50.267 00:57:02 -- host/mdns_discovery.sh@68 -- # jq -r '.[].name' 00:22:50.267 00:57:02 -- host/mdns_discovery.sh@68 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:22:50.267 00:57:02 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:50.267 00:57:02 -- host/mdns_discovery.sh@68 -- # sort 00:22:50.267 00:57:02 -- common/autotest_common.sh@10 -- # set +x 00:22:50.267 00:57:02 -- host/mdns_discovery.sh@68 -- # xargs 00:22:50.267 00:57:02 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:50.267 00:57:02 -- host/mdns_discovery.sh@91 -- # [[ '' == '' ]] 00:22:50.267 00:57:02 -- host/mdns_discovery.sh@92 -- # get_bdev_list 00:22:50.267 00:57:02 -- host/mdns_discovery.sh@64 -- # jq -r '.[].name' 00:22:50.267 00:57:02 -- host/mdns_discovery.sh@64 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:50.267 00:57:02 -- host/mdns_discovery.sh@64 -- # sort 00:22:50.267 00:57:02 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:50.267 00:57:02 -- common/autotest_common.sh@10 -- # set +x 00:22:50.267 00:57:02 -- host/mdns_discovery.sh@64 -- # xargs 00:22:50.267 00:57:02 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:50.267 00:57:02 -- host/mdns_discovery.sh@92 -- # [[ '' == '' ]] 00:22:50.267 00:57:02 -- host/mdns_discovery.sh@94 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 00:22:50.267 00:57:02 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:50.267 00:57:02 -- common/autotest_common.sh@10 -- # set +x 00:22:50.267 00:57:02 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:50.267 00:57:02 -- host/mdns_discovery.sh@95 -- # get_subsystem_names 00:22:50.267 00:57:02 -- host/mdns_discovery.sh@68 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:22:50.267 00:57:02 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:50.267 00:57:02 -- host/mdns_discovery.sh@68 -- # sort 00:22:50.267 00:57:02 -- common/autotest_common.sh@10 -- # set +x 00:22:50.267 00:57:02 -- host/mdns_discovery.sh@68 -- # xargs 00:22:50.268 
00:57:02 -- host/mdns_discovery.sh@68 -- # jq -r '.[].name' 00:22:50.268 00:57:02 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:50.268 00:57:02 -- host/mdns_discovery.sh@95 -- # [[ '' == '' ]] 00:22:50.268 00:57:02 -- host/mdns_discovery.sh@96 -- # get_bdev_list 00:22:50.268 00:57:02 -- host/mdns_discovery.sh@64 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:50.268 00:57:02 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:50.268 00:57:02 -- common/autotest_common.sh@10 -- # set +x 00:22:50.268 00:57:02 -- host/mdns_discovery.sh@64 -- # jq -r '.[].name' 00:22:50.268 00:57:02 -- host/mdns_discovery.sh@64 -- # sort 00:22:50.268 00:57:02 -- host/mdns_discovery.sh@64 -- # xargs 00:22:50.268 00:57:02 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:50.526 00:57:02 -- host/mdns_discovery.sh@96 -- # [[ '' == '' ]] 00:22:50.526 00:57:02 -- host/mdns_discovery.sh@98 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 00:22:50.526 00:57:02 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:50.526 00:57:02 -- common/autotest_common.sh@10 -- # set +x 00:22:50.526 00:57:02 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:50.526 00:57:02 -- host/mdns_discovery.sh@99 -- # get_subsystem_names 00:22:50.526 00:57:02 -- host/mdns_discovery.sh@68 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:22:50.526 00:57:02 -- host/mdns_discovery.sh@68 -- # jq -r '.[].name' 00:22:50.526 00:57:02 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:50.526 00:57:02 -- common/autotest_common.sh@10 -- # set +x 00:22:50.526 00:57:02 -- host/mdns_discovery.sh@68 -- # xargs 00:22:50.526 00:57:02 -- host/mdns_discovery.sh@68 -- # sort 00:22:50.526 00:57:02 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:50.526 00:57:02 -- host/mdns_discovery.sh@99 -- # [[ '' == '' ]] 00:22:50.526 00:57:02 -- host/mdns_discovery.sh@100 -- # get_bdev_list 00:22:50.526 00:57:02 -- host/mdns_discovery.sh@64 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:50.526 00:57:02 -- host/mdns_discovery.sh@64 -- # sort 00:22:50.526 00:57:02 -- host/mdns_discovery.sh@64 -- # jq -r '.[].name' 00:22:50.527 00:57:02 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:50.527 00:57:02 -- common/autotest_common.sh@10 -- # set +x 00:22:50.527 00:57:02 -- host/mdns_discovery.sh@64 -- # xargs 00:22:50.527 [2024-12-03 00:57:02.883036] bdev_mdns_client.c: 395:mdns_browse_handler: *INFO*: (Browser) CACHE_EXHAUSTED 00:22:50.527 00:57:02 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:50.527 00:57:02 -- host/mdns_discovery.sh@100 -- # [[ '' == '' ]] 00:22:50.527 00:57:02 -- host/mdns_discovery.sh@104 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:22:50.527 00:57:02 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:50.527 00:57:02 -- common/autotest_common.sh@10 -- # set +x 00:22:50.527 [2024-12-03 00:57:02.936086] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:50.527 00:57:02 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:50.527 00:57:02 -- host/mdns_discovery.sh@108 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test 00:22:50.527 00:57:02 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:50.527 00:57:02 -- common/autotest_common.sh@10 -- # set +x 00:22:50.527 00:57:02 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:50.527 00:57:02 -- host/mdns_discovery.sh@111 -- # rpc_cmd 
nvmf_create_subsystem nqn.2016-06.io.spdk:cnode20 00:22:50.527 00:57:02 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:50.527 00:57:02 -- common/autotest_common.sh@10 -- # set +x 00:22:50.527 00:57:02 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:50.527 00:57:02 -- host/mdns_discovery.sh@112 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode20 null2 00:22:50.527 00:57:02 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:50.527 00:57:02 -- common/autotest_common.sh@10 -- # set +x 00:22:50.527 00:57:02 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:50.527 00:57:02 -- host/mdns_discovery.sh@116 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode20 nqn.2021-12.io.spdk:test 00:22:50.527 00:57:02 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:50.527 00:57:02 -- common/autotest_common.sh@10 -- # set +x 00:22:50.527 00:57:02 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:50.527 00:57:02 -- host/mdns_discovery.sh@118 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.3 -s 8009 00:22:50.527 00:57:02 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:50.527 00:57:02 -- common/autotest_common.sh@10 -- # set +x 00:22:50.527 [2024-12-03 00:57:02.976048] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 8009 *** 00:22:50.527 00:57:02 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:50.527 00:57:02 -- host/mdns_discovery.sh@120 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode20 -t tcp -a 10.0.0.3 -s 4420 00:22:50.527 00:57:02 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:50.527 00:57:02 -- common/autotest_common.sh@10 -- # set +x 00:22:50.527 [2024-12-03 00:57:02.984045] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:22:50.527 00:57:02 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:50.527 00:57:02 -- host/mdns_discovery.sh@124 -- # avahi_clientpid=98526 00:22:50.527 00:57:02 -- host/mdns_discovery.sh@123 -- # ip netns exec nvmf_tgt_ns_spdk /usr/bin/avahi-publish --domain=local --service CDC _nvme-disc._tcp 8009 NQN=nqn.2014-08.org.nvmexpress.discovery p=tcp 00:22:50.527 00:57:02 -- host/mdns_discovery.sh@125 -- # sleep 5 00:22:51.462 [2024-12-03 00:57:03.783037] bdev_mdns_client.c: 395:mdns_browse_handler: *INFO*: (Browser) ALL_FOR_NOW 00:22:51.462 Established under name 'CDC' 00:22:51.721 [2024-12-03 00:57:04.183047] bdev_mdns_client.c: 254:mdns_resolve_handler: *INFO*: Service 'CDC' of type '_nvme-disc._tcp' in domain 'local' 00:22:51.721 [2024-12-03 00:57:04.183072] bdev_mdns_client.c: 259:mdns_resolve_handler: *INFO*: fedora39-cloud-1721788873-2326.local:8009 (10.0.0.3) 00:22:51.721 TXT="p=tcp" "NQN=nqn.2014-08.org.nvmexpress.discovery" 00:22:51.721 cookie is 0 00:22:51.721 is_local: 1 00:22:51.721 our_own: 0 00:22:51.721 wide_area: 0 00:22:51.721 multicast: 1 00:22:51.721 cached: 1 00:22:51.979 [2024-12-03 00:57:04.283042] bdev_mdns_client.c: 254:mdns_resolve_handler: *INFO*: Service 'CDC' of type '_nvme-disc._tcp' in domain 'local' 00:22:51.979 [2024-12-03 00:57:04.283064] bdev_mdns_client.c: 259:mdns_resolve_handler: *INFO*: fedora39-cloud-1721788873-2326.local:8009 (10.0.0.2) 00:22:51.979 TXT="p=tcp" "NQN=nqn.2014-08.org.nvmexpress.discovery" 00:22:51.979 cookie is 0 00:22:51.979 is_local: 1 00:22:51.979 our_own: 0 00:22:51.979 wide_area: 0 00:22:51.979 multicast: 1 00:22:51.979 cached: 1 00:22:52.914 [2024-12-03 00:57:05.187783] 
bdev_nvme.c:6759:discovery_attach_cb: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr attached 00:22:52.914 [2024-12-03 00:57:05.187811] bdev_nvme.c:6839:discovery_poller: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr connected 00:22:52.914 [2024-12-03 00:57:05.187828] bdev_nvme.c:6722:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:22:52.914 [2024-12-03 00:57:05.273885] bdev_nvme.c:6688:discovery_log_page_cb: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.3:4420 new subsystem mdns0_nvme0 00:22:52.914 [2024-12-03 00:57:05.287477] bdev_nvme.c:6759:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:22:52.914 [2024-12-03 00:57:05.287497] bdev_nvme.c:6839:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:22:52.914 [2024-12-03 00:57:05.287511] bdev_nvme.c:6722:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:22:52.914 [2024-12-03 00:57:05.331938] bdev_nvme.c:6578:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.3:8009] attach mdns0_nvme0 done 00:22:52.914 [2024-12-03 00:57:05.331965] bdev_nvme.c:6537:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.3:4420 found again 00:22:52.914 [2024-12-03 00:57:05.375161] bdev_nvme.c:6688:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem mdns1_nvme0 00:22:53.172 [2024-12-03 00:57:05.437653] bdev_nvme.c:6578:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach mdns1_nvme0 done 00:22:53.172 [2024-12-03 00:57:05.437680] bdev_nvme.c:6537:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:22:55.705 00:57:07 -- host/mdns_discovery.sh@127 -- # get_mdns_discovery_svcs 00:22:55.705 00:57:07 -- host/mdns_discovery.sh@80 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_mdns_discovery_info 00:22:55.705 00:57:07 -- host/mdns_discovery.sh@80 -- # jq -r '.[].name' 00:22:55.705 00:57:07 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:55.705 00:57:07 -- common/autotest_common.sh@10 -- # set +x 00:22:55.705 00:57:07 -- host/mdns_discovery.sh@80 -- # sort 00:22:55.705 00:57:07 -- host/mdns_discovery.sh@80 -- # xargs 00:22:55.706 00:57:08 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:55.706 00:57:08 -- host/mdns_discovery.sh@127 -- # [[ mdns == \m\d\n\s ]] 00:22:55.706 00:57:08 -- host/mdns_discovery.sh@128 -- # get_discovery_ctrlrs 00:22:55.706 00:57:08 -- host/mdns_discovery.sh@76 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:22:55.706 00:57:08 -- host/mdns_discovery.sh@76 -- # jq -r '.[].name' 00:22:55.706 00:57:08 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:55.706 00:57:08 -- common/autotest_common.sh@10 -- # set +x 00:22:55.706 00:57:08 -- host/mdns_discovery.sh@76 -- # sort 00:22:55.706 00:57:08 -- host/mdns_discovery.sh@76 -- # xargs 00:22:55.706 00:57:08 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:55.706 00:57:08 -- host/mdns_discovery.sh@128 -- # [[ mdns0_nvme mdns1_nvme == \m\d\n\s\0\_\n\v\m\e\ \m\d\n\s\1\_\n\v\m\e ]] 00:22:55.706 00:57:08 -- host/mdns_discovery.sh@129 -- # get_subsystem_names 00:22:55.706 00:57:08 -- host/mdns_discovery.sh@68 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:22:55.706 00:57:08 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:55.706 00:57:08 -- 
common/autotest_common.sh@10 -- # set +x 00:22:55.706 00:57:08 -- host/mdns_discovery.sh@68 -- # jq -r '.[].name' 00:22:55.706 00:57:08 -- host/mdns_discovery.sh@68 -- # xargs 00:22:55.706 00:57:08 -- host/mdns_discovery.sh@68 -- # sort 00:22:55.706 00:57:08 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:55.706 00:57:08 -- host/mdns_discovery.sh@129 -- # [[ mdns0_nvme0 mdns1_nvme0 == \m\d\n\s\0\_\n\v\m\e\0\ \m\d\n\s\1\_\n\v\m\e\0 ]] 00:22:55.706 00:57:08 -- host/mdns_discovery.sh@130 -- # get_bdev_list 00:22:55.706 00:57:08 -- host/mdns_discovery.sh@64 -- # jq -r '.[].name' 00:22:55.706 00:57:08 -- host/mdns_discovery.sh@64 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:55.706 00:57:08 -- host/mdns_discovery.sh@64 -- # sort 00:22:55.706 00:57:08 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:55.706 00:57:08 -- common/autotest_common.sh@10 -- # set +x 00:22:55.706 00:57:08 -- host/mdns_discovery.sh@64 -- # xargs 00:22:55.706 00:57:08 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:55.706 00:57:08 -- host/mdns_discovery.sh@130 -- # [[ mdns0_nvme0n1 mdns1_nvme0n1 == \m\d\n\s\0\_\n\v\m\e\0\n\1\ \m\d\n\s\1\_\n\v\m\e\0\n\1 ]] 00:22:55.706 00:57:08 -- host/mdns_discovery.sh@131 -- # get_subsystem_paths mdns0_nvme0 00:22:55.706 00:57:08 -- host/mdns_discovery.sh@72 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:22:55.706 00:57:08 -- host/mdns_discovery.sh@72 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n mdns0_nvme0 00:22:55.706 00:57:08 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:55.706 00:57:08 -- host/mdns_discovery.sh@72 -- # sort -n 00:22:55.706 00:57:08 -- common/autotest_common.sh@10 -- # set +x 00:22:55.706 00:57:08 -- host/mdns_discovery.sh@72 -- # xargs 00:22:55.964 00:57:08 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:55.964 00:57:08 -- host/mdns_discovery.sh@131 -- # [[ 4420 == \4\4\2\0 ]] 00:22:55.965 00:57:08 -- host/mdns_discovery.sh@132 -- # get_subsystem_paths mdns1_nvme0 00:22:55.965 00:57:08 -- host/mdns_discovery.sh@72 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:22:55.965 00:57:08 -- host/mdns_discovery.sh@72 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n mdns1_nvme0 00:22:55.965 00:57:08 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:55.965 00:57:08 -- common/autotest_common.sh@10 -- # set +x 00:22:55.965 00:57:08 -- host/mdns_discovery.sh@72 -- # sort -n 00:22:55.965 00:57:08 -- host/mdns_discovery.sh@72 -- # xargs 00:22:55.965 00:57:08 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:55.965 00:57:08 -- host/mdns_discovery.sh@132 -- # [[ 4420 == \4\4\2\0 ]] 00:22:55.965 00:57:08 -- host/mdns_discovery.sh@133 -- # get_notification_count 00:22:55.965 00:57:08 -- host/mdns_discovery.sh@87 -- # jq '. 
| length' 00:22:55.965 00:57:08 -- host/mdns_discovery.sh@87 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:22:55.965 00:57:08 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:55.965 00:57:08 -- common/autotest_common.sh@10 -- # set +x 00:22:55.965 00:57:08 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:55.965 00:57:08 -- host/mdns_discovery.sh@87 -- # notification_count=2 00:22:55.965 00:57:08 -- host/mdns_discovery.sh@88 -- # notify_id=2 00:22:55.965 00:57:08 -- host/mdns_discovery.sh@134 -- # [[ 2 == 2 ]] 00:22:55.965 00:57:08 -- host/mdns_discovery.sh@137 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null1 00:22:55.965 00:57:08 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:55.965 00:57:08 -- common/autotest_common.sh@10 -- # set +x 00:22:55.965 00:57:08 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:55.965 00:57:08 -- host/mdns_discovery.sh@138 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode20 null3 00:22:55.965 00:57:08 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:55.965 00:57:08 -- common/autotest_common.sh@10 -- # set +x 00:22:55.965 00:57:08 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:55.965 00:57:08 -- host/mdns_discovery.sh@139 -- # sleep 1 00:22:56.900 00:57:09 -- host/mdns_discovery.sh@141 -- # get_bdev_list 00:22:56.900 00:57:09 -- host/mdns_discovery.sh@64 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:56.900 00:57:09 -- host/mdns_discovery.sh@64 -- # jq -r '.[].name' 00:22:56.900 00:57:09 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:56.900 00:57:09 -- common/autotest_common.sh@10 -- # set +x 00:22:56.900 00:57:09 -- host/mdns_discovery.sh@64 -- # sort 00:22:56.900 00:57:09 -- host/mdns_discovery.sh@64 -- # xargs 00:22:57.159 00:57:09 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:57.159 00:57:09 -- host/mdns_discovery.sh@141 -- # [[ mdns0_nvme0n1 mdns0_nvme0n2 mdns1_nvme0n1 mdns1_nvme0n2 == \m\d\n\s\0\_\n\v\m\e\0\n\1\ \m\d\n\s\0\_\n\v\m\e\0\n\2\ \m\d\n\s\1\_\n\v\m\e\0\n\1\ \m\d\n\s\1\_\n\v\m\e\0\n\2 ]] 00:22:57.159 00:57:09 -- host/mdns_discovery.sh@142 -- # get_notification_count 00:22:57.159 00:57:09 -- host/mdns_discovery.sh@87 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:22:57.159 00:57:09 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:57.159 00:57:09 -- common/autotest_common.sh@10 -- # set +x 00:22:57.159 00:57:09 -- host/mdns_discovery.sh@87 -- # jq '. 
| length' 00:22:57.159 00:57:09 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:57.159 00:57:09 -- host/mdns_discovery.sh@87 -- # notification_count=2 00:22:57.159 00:57:09 -- host/mdns_discovery.sh@88 -- # notify_id=4 00:22:57.159 00:57:09 -- host/mdns_discovery.sh@143 -- # [[ 2 == 2 ]] 00:22:57.159 00:57:09 -- host/mdns_discovery.sh@147 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 00:22:57.159 00:57:09 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:57.159 00:57:09 -- common/autotest_common.sh@10 -- # set +x 00:22:57.159 [2024-12-03 00:57:09.487450] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:22:57.159 [2024-12-03 00:57:09.488297] bdev_nvme.c:6741:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:22:57.159 [2024-12-03 00:57:09.488345] bdev_nvme.c:6722:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:22:57.159 [2024-12-03 00:57:09.488375] bdev_nvme.c:6741:discovery_aer_cb: *INFO*: Discovery[10.0.0.3:8009] got aer 00:22:57.159 [2024-12-03 00:57:09.488387] bdev_nvme.c:6722:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:22:57.159 00:57:09 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:57.159 00:57:09 -- host/mdns_discovery.sh@148 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode20 -t tcp -a 10.0.0.3 -s 4421 00:22:57.159 00:57:09 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:57.159 00:57:09 -- common/autotest_common.sh@10 -- # set +x 00:22:57.159 [2024-12-03 00:57:09.495429] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4421 *** 00:22:57.159 [2024-12-03 00:57:09.496308] bdev_nvme.c:6741:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:22:57.159 [2024-12-03 00:57:09.496362] bdev_nvme.c:6741:discovery_aer_cb: *INFO*: Discovery[10.0.0.3:8009] got aer 00:22:57.159 00:57:09 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:57.160 00:57:09 -- host/mdns_discovery.sh@149 -- # sleep 1 00:22:57.160 [2024-12-03 00:57:09.628397] bdev_nvme.c:6683:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new path for mdns1_nvme0 00:22:57.160 [2024-12-03 00:57:09.628548] bdev_nvme.c:6683:discovery_log_page_cb: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.3:4421 new path for mdns0_nvme0 00:22:57.419 [2024-12-03 00:57:09.690637] bdev_nvme.c:6578:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach mdns1_nvme0 done 00:22:57.419 [2024-12-03 00:57:09.690662] bdev_nvme.c:6537:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:22:57.419 [2024-12-03 00:57:09.690684] bdev_nvme.c:6537:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:22:57.419 [2024-12-03 00:57:09.690699] bdev_nvme.c:6722:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:22:57.419 [2024-12-03 00:57:09.690762] bdev_nvme.c:6578:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.3:8009] attach mdns0_nvme0 done 00:22:57.419 [2024-12-03 00:57:09.690770] bdev_nvme.c:6537:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.3:4420 found again 00:22:57.419 [2024-12-03 00:57:09.690775] 
bdev_nvme.c:6537:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.3:4421 found again 00:22:57.419 [2024-12-03 00:57:09.690786] bdev_nvme.c:6722:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:22:57.419 [2024-12-03 00:57:09.736503] bdev_nvme.c:6537:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:22:57.419 [2024-12-03 00:57:09.736523] bdev_nvme.c:6537:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:22:57.419 [2024-12-03 00:57:09.736558] bdev_nvme.c:6537:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.3:4420 found again 00:22:57.419 [2024-12-03 00:57:09.736566] bdev_nvme.c:6537:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.3:4421 found again 00:22:58.358 00:57:10 -- host/mdns_discovery.sh@151 -- # get_subsystem_names 00:22:58.358 00:57:10 -- host/mdns_discovery.sh@68 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:22:58.358 00:57:10 -- host/mdns_discovery.sh@68 -- # jq -r '.[].name' 00:22:58.358 00:57:10 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:58.358 00:57:10 -- host/mdns_discovery.sh@68 -- # sort 00:22:58.358 00:57:10 -- common/autotest_common.sh@10 -- # set +x 00:22:58.358 00:57:10 -- host/mdns_discovery.sh@68 -- # xargs 00:22:58.358 00:57:10 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:58.358 00:57:10 -- host/mdns_discovery.sh@151 -- # [[ mdns0_nvme0 mdns1_nvme0 == \m\d\n\s\0\_\n\v\m\e\0\ \m\d\n\s\1\_\n\v\m\e\0 ]] 00:22:58.358 00:57:10 -- host/mdns_discovery.sh@152 -- # get_bdev_list 00:22:58.358 00:57:10 -- host/mdns_discovery.sh@64 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:58.358 00:57:10 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:58.358 00:57:10 -- host/mdns_discovery.sh@64 -- # jq -r '.[].name' 00:22:58.358 00:57:10 -- common/autotest_common.sh@10 -- # set +x 00:22:58.358 00:57:10 -- host/mdns_discovery.sh@64 -- # sort 00:22:58.358 00:57:10 -- host/mdns_discovery.sh@64 -- # xargs 00:22:58.358 00:57:10 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:58.358 00:57:10 -- host/mdns_discovery.sh@152 -- # [[ mdns0_nvme0n1 mdns0_nvme0n2 mdns1_nvme0n1 mdns1_nvme0n2 == \m\d\n\s\0\_\n\v\m\e\0\n\1\ \m\d\n\s\0\_\n\v\m\e\0\n\2\ \m\d\n\s\1\_\n\v\m\e\0\n\1\ \m\d\n\s\1\_\n\v\m\e\0\n\2 ]] 00:22:58.358 00:57:10 -- host/mdns_discovery.sh@153 -- # get_subsystem_paths mdns0_nvme0 00:22:58.358 00:57:10 -- host/mdns_discovery.sh@72 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n mdns0_nvme0 00:22:58.358 00:57:10 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:58.358 00:57:10 -- host/mdns_discovery.sh@72 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:22:58.358 00:57:10 -- common/autotest_common.sh@10 -- # set +x 00:22:58.358 00:57:10 -- host/mdns_discovery.sh@72 -- # xargs 00:22:58.358 00:57:10 -- host/mdns_discovery.sh@72 -- # sort -n 00:22:58.358 00:57:10 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:58.358 00:57:10 -- host/mdns_discovery.sh@153 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]] 00:22:58.358 00:57:10 -- host/mdns_discovery.sh@154 -- # get_subsystem_paths mdns1_nvme0 00:22:58.358 00:57:10 -- host/mdns_discovery.sh@72 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:22:58.358 00:57:10 -- host/mdns_discovery.sh@72 -- # rpc_cmd -s /tmp/host.sock 
bdev_nvme_get_controllers -n mdns1_nvme0 00:22:58.358 00:57:10 -- host/mdns_discovery.sh@72 -- # sort -n 00:22:58.358 00:57:10 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:58.358 00:57:10 -- host/mdns_discovery.sh@72 -- # xargs 00:22:58.358 00:57:10 -- common/autotest_common.sh@10 -- # set +x 00:22:58.358 00:57:10 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:58.358 00:57:10 -- host/mdns_discovery.sh@154 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]] 00:22:58.358 00:57:10 -- host/mdns_discovery.sh@155 -- # get_notification_count 00:22:58.358 00:57:10 -- host/mdns_discovery.sh@87 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 4 00:22:58.358 00:57:10 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:58.358 00:57:10 -- host/mdns_discovery.sh@87 -- # jq '. | length' 00:22:58.358 00:57:10 -- common/autotest_common.sh@10 -- # set +x 00:22:58.358 00:57:10 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:58.358 00:57:10 -- host/mdns_discovery.sh@87 -- # notification_count=0 00:22:58.359 00:57:10 -- host/mdns_discovery.sh@88 -- # notify_id=4 00:22:58.359 00:57:10 -- host/mdns_discovery.sh@156 -- # [[ 0 == 0 ]] 00:22:58.359 00:57:10 -- host/mdns_discovery.sh@160 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:22:58.359 00:57:10 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:58.359 00:57:10 -- common/autotest_common.sh@10 -- # set +x 00:22:58.359 [2024-12-03 00:57:10.796384] bdev_nvme.c:6741:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:22:58.359 [2024-12-03 00:57:10.796458] bdev_nvme.c:6722:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:22:58.359 [2024-12-03 00:57:10.796493] bdev_nvme.c:6741:discovery_aer_cb: *INFO*: Discovery[10.0.0.3:8009] got aer 00:22:58.359 [2024-12-03 00:57:10.796506] bdev_nvme.c:6722:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:22:58.359 [2024-12-03 00:57:10.797246] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:22:58.359 [2024-12-03 00:57:10.797279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:58.359 [2024-12-03 00:57:10.797307] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:58.359 [2024-12-03 00:57:10.797330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:58.359 [2024-12-03 00:57:10.797339] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:58.359 [2024-12-03 00:57:10.797347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:58.359 [2024-12-03 00:57:10.797356] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:58.359 [2024-12-03 00:57:10.797364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:58.359 [2024-12-03 00:57:10.797372] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xea3830 is same with the state(5) to be set 
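The long stretch above is the core of the test: the target gains two subsystems (nqn.2016-06.io.spdk:cnode0 listening on 10.0.0.2, nqn.2016-06.io.spdk:cnode20 on 10.0.0.3, plus a second discovery listener on 10.0.0.3:8009), an avahi-publish instance inside the namespace advertises the discovery controller as a CDC record of type _nvme-disc._tcp, and the host app runs bdev_nvme_start_mdns_discovery, after which the get_* helpers poll the host RPC socket until the expected controllers, bdevs, paths, and notifications appear; the remove_listener calls around this point then start taking the 4420 listeners away again. A condensed, hedged reconstruction of those helpers (the jq/sort/xargs pipelines are copied from the xtrace; the function bodies are inferred from the host/mdns_discovery.sh line labels, not verbatim):

```bash
RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
HOST_SOCK=/tmp/host.sock

get_subsystem_names() {   # controller names seen by the host app, e.g. "mdns0_nvme0 mdns1_nvme0"
    $RPC -s $HOST_SOCK bdev_nvme_get_controllers | jq -r '.[].name' | sort | xargs
}
get_bdev_list() {         # attached namespaces, e.g. "mdns0_nvme0n1 mdns1_nvme0n1"
    $RPC -s $HOST_SOCK bdev_get_bdevs | jq -r '.[].name' | sort | xargs
}
get_subsystem_paths() {   # listening ports for one controller, e.g. "4420 4421"
    $RPC -s $HOST_SOCK bdev_nvme_get_controllers -n "$1" | jq -r '.[].ctrlrs[].trid.trsvcid' | sort -n | xargs
}
get_notification_count() {  # notifications issued since $notify_id
    notification_count=$($RPC -s $HOST_SOCK notify_get_notifications -i "$notify_id" | jq '. | length')
    notify_id=$((notify_id + notification_count))
}

# discovery is kicked off on the host side with (copied from the log):
#   rpc.py -s /tmp/host.sock bdev_nvme_start_mdns_discovery -b mdns -s _nvme-disc._tcp -q nqn.2021-12.io.spdk:test
# and the CDC record is published inside the namespace with:
#   ip netns exec nvmf_tgt_ns_spdk /usr/bin/avahi-publish --domain=local --service CDC \
#       _nvme-disc._tcp 8009 NQN=nqn.2014-08.org.nvmexpress.discovery p=tcp

# example check mirroring the [[ 4420 4421 == 4420 4421 ]] assertions above:
[[ "$(get_subsystem_paths mdns0_nvme0)" == "4420 4421" ]]
```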
00:22:58.359 00:57:10 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:58.359 00:57:10 -- host/mdns_discovery.sh@161 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode20 -t tcp -a 10.0.0.3 -s 4420 00:22:58.359 00:57:10 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:58.359 00:57:10 -- common/autotest_common.sh@10 -- # set +x 00:22:58.359 [2024-12-03 00:57:10.804478] bdev_nvme.c:6741:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:22:58.359 [2024-12-03 00:57:10.804551] bdev_nvme.c:6741:discovery_aer_cb: *INFO*: Discovery[10.0.0.3:8009] got aer 00:22:58.359 [2024-12-03 00:57:10.807196] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xea3830 (9): Bad file descriptor 00:22:58.359 00:57:10 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:58.359 00:57:10 -- host/mdns_discovery.sh@162 -- # sleep 1 00:22:58.359 [2024-12-03 00:57:10.810226] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:22:58.359 [2024-12-03 00:57:10.810274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:58.359 [2024-12-03 00:57:10.810287] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:58.359 [2024-12-03 00:57:10.810297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:58.359 [2024-12-03 00:57:10.810306] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:58.359 [2024-12-03 00:57:10.810315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:58.359 [2024-12-03 00:57:10.810324] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:58.359 [2024-12-03 00:57:10.810333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:58.359 [2024-12-03 00:57:10.810341] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe8e830 is same with the state(5) to be set 00:22:58.359 [2024-12-03 00:57:10.817215] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:22:58.359 [2024-12-03 00:57:10.817351] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:58.359 [2024-12-03 00:57:10.817396] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:58.359 [2024-12-03 00:57:10.817411] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea3830 with addr=10.0.0.2, port=4420 00:22:58.359 [2024-12-03 00:57:10.817438] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xea3830 is same with the state(5) to be set 00:22:58.359 [2024-12-03 00:57:10.817483] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xea3830 (9): Bad file descriptor 00:22:58.359 [2024-12-03 00:57:10.817547] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:22:58.359 [2024-12-03 00:57:10.817558] 
nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:22:58.359 [2024-12-03 00:57:10.817568] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:22:58.359 [2024-12-03 00:57:10.817584] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:58.359 [2024-12-03 00:57:10.820167] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe8e830 (9): Bad file descriptor 00:22:58.359 [2024-12-03 00:57:10.827299] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:22:58.359 [2024-12-03 00:57:10.827389] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:58.359 [2024-12-03 00:57:10.827460] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:58.359 [2024-12-03 00:57:10.827478] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea3830 with addr=10.0.0.2, port=4420 00:22:58.359 [2024-12-03 00:57:10.827487] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xea3830 is same with the state(5) to be set 00:22:58.359 [2024-12-03 00:57:10.827502] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xea3830 (9): Bad file descriptor 00:22:58.359 [2024-12-03 00:57:10.827514] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:22:58.359 [2024-12-03 00:57:10.827521] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:22:58.359 [2024-12-03 00:57:10.827528] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:22:58.359 [2024-12-03 00:57:10.827558] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:58.359 [2024-12-03 00:57:10.830175] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:22:58.359 [2024-12-03 00:57:10.830306] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:58.359 [2024-12-03 00:57:10.830351] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:58.359 [2024-12-03 00:57:10.830366] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe8e830 with addr=10.0.0.3, port=4420 00:22:58.359 [2024-12-03 00:57:10.830375] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe8e830 is same with the state(5) to be set 00:22:58.359 [2024-12-03 00:57:10.830389] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe8e830 (9): Bad file descriptor 00:22:58.359 [2024-12-03 00:57:10.830401] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:22:58.359 [2024-12-03 00:57:10.830409] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:22:58.359 [2024-12-03 00:57:10.830416] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:22:58.359 [2024-12-03 00:57:10.830444] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
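The repeated "connect() failed, errno = 111" / "Resetting controller failed" blocks above and below are the expected fallout of the nvmf_subsystem_remove_listener calls: the host's bdev_nvme layer keeps retrying the path it lost on port 4420 and gets ECONNREFUSED, while the 4421 path remains connected. A quick way to confirm that only the 8009 and 4421 listeners are still open on the target addresses (a diagnostic suggestion, not part of the test script) would be:

```bash
# list TCP listeners inside the target namespace; 10.0.0.2:4420 and 10.0.0.3:4420
# should be gone after the remove_listener RPCs, while 4421 and 8009 remain
ip netns exec nvmf_tgt_ns_spdk ss -ltn
```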
00:22:58.359 [2024-12-03 00:57:10.837343] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:22:58.359 [2024-12-03 00:57:10.837458] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:58.359 [2024-12-03 00:57:10.837504] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:58.359 [2024-12-03 00:57:10.837520] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea3830 with addr=10.0.0.2, port=4420 00:22:58.359 [2024-12-03 00:57:10.837529] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xea3830 is same with the state(5) to be set 00:22:58.359 [2024-12-03 00:57:10.837543] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xea3830 (9): Bad file descriptor 00:22:58.359 [2024-12-03 00:57:10.837570] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:22:58.359 [2024-12-03 00:57:10.837579] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:22:58.359 [2024-12-03 00:57:10.837586] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:22:58.359 [2024-12-03 00:57:10.837599] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:58.359 [2024-12-03 00:57:10.840276] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:22:58.359 [2024-12-03 00:57:10.840363] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:58.359 [2024-12-03 00:57:10.840404] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:58.359 [2024-12-03 00:57:10.840419] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe8e830 with addr=10.0.0.3, port=4420 00:22:58.359 [2024-12-03 00:57:10.840438] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe8e830 is same with the state(5) to be set 00:22:58.359 [2024-12-03 00:57:10.840453] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe8e830 (9): Bad file descriptor 00:22:58.359 [2024-12-03 00:57:10.840465] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:22:58.359 [2024-12-03 00:57:10.840472] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:22:58.359 [2024-12-03 00:57:10.840479] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:22:58.359 [2024-12-03 00:57:10.840492] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:22:58.359 [2024-12-03 00:57:10.847388] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:22:58.359 [2024-12-03 00:57:10.847482] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:58.359 [2024-12-03 00:57:10.847524] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:58.359 [2024-12-03 00:57:10.847539] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea3830 with addr=10.0.0.2, port=4420 00:22:58.359 [2024-12-03 00:57:10.847547] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xea3830 is same with the state(5) to be set 00:22:58.359 [2024-12-03 00:57:10.847560] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xea3830 (9): Bad file descriptor 00:22:58.359 [2024-12-03 00:57:10.847572] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:22:58.360 [2024-12-03 00:57:10.847578] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:22:58.360 [2024-12-03 00:57:10.847586] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:22:58.360 [2024-12-03 00:57:10.847598] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:58.360 [2024-12-03 00:57:10.850324] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:22:58.360 [2024-12-03 00:57:10.850422] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:58.360 [2024-12-03 00:57:10.850483] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:58.360 [2024-12-03 00:57:10.850499] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe8e830 with addr=10.0.0.3, port=4420 00:22:58.360 [2024-12-03 00:57:10.850523] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe8e830 is same with the state(5) to be set 00:22:58.360 [2024-12-03 00:57:10.850537] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe8e830 (9): Bad file descriptor 00:22:58.360 [2024-12-03 00:57:10.850548] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:22:58.360 [2024-12-03 00:57:10.850556] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:22:58.360 [2024-12-03 00:57:10.850563] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:22:58.360 [2024-12-03 00:57:10.850591] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:22:58.360 [2024-12-03 00:57:10.857457] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:22:58.360 [2024-12-03 00:57:10.857570] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:58.360 [2024-12-03 00:57:10.857613] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:58.360 [2024-12-03 00:57:10.857628] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea3830 with addr=10.0.0.2, port=4420 00:22:58.360 [2024-12-03 00:57:10.857637] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xea3830 is same with the state(5) to be set 00:22:58.360 [2024-12-03 00:57:10.857651] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xea3830 (9): Bad file descriptor 00:22:58.360 [2024-12-03 00:57:10.857678] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:22:58.360 [2024-12-03 00:57:10.857687] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:22:58.360 [2024-12-03 00:57:10.857695] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:22:58.360 [2024-12-03 00:57:10.857708] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:58.360 [2024-12-03 00:57:10.860392] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:22:58.360 [2024-12-03 00:57:10.860506] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:58.360 [2024-12-03 00:57:10.860551] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:58.360 [2024-12-03 00:57:10.860565] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe8e830 with addr=10.0.0.3, port=4420 00:22:58.360 [2024-12-03 00:57:10.860575] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe8e830 is same with the state(5) to be set 00:22:58.360 [2024-12-03 00:57:10.860588] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe8e830 (9): Bad file descriptor 00:22:58.360 [2024-12-03 00:57:10.860600] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:22:58.360 [2024-12-03 00:57:10.860607] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:22:58.360 [2024-12-03 00:57:10.860615] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:22:58.360 [2024-12-03 00:57:10.860628] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:22:58.360 [2024-12-03 00:57:10.867538] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:22:58.360 [2024-12-03 00:57:10.867634] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:58.360 [2024-12-03 00:57:10.867676] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:58.360 [2024-12-03 00:57:10.867690] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea3830 with addr=10.0.0.2, port=4420 00:22:58.360 [2024-12-03 00:57:10.867698] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xea3830 is same with the state(5) to be set 00:22:58.360 [2024-12-03 00:57:10.867712] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xea3830 (9): Bad file descriptor 00:22:58.360 [2024-12-03 00:57:10.867738] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:22:58.360 [2024-12-03 00:57:10.867747] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:22:58.360 [2024-12-03 00:57:10.867754] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:22:58.360 [2024-12-03 00:57:10.867766] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:58.360 [2024-12-03 00:57:10.870480] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:22:58.360 [2024-12-03 00:57:10.870572] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:58.360 [2024-12-03 00:57:10.870615] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:58.360 [2024-12-03 00:57:10.870630] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe8e830 with addr=10.0.0.3, port=4420 00:22:58.360 [2024-12-03 00:57:10.870639] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe8e830 is same with the state(5) to be set 00:22:58.360 [2024-12-03 00:57:10.870652] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe8e830 (9): Bad file descriptor 00:22:58.360 [2024-12-03 00:57:10.870664] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:22:58.360 [2024-12-03 00:57:10.870671] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:22:58.360 [2024-12-03 00:57:10.870679] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:22:58.360 [2024-12-03 00:57:10.870692] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:22:58.620 [2024-12-03 00:57:10.877605] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:22:58.620 [2024-12-03 00:57:10.877697] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:58.620 [2024-12-03 00:57:10.877738] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:58.620 [2024-12-03 00:57:10.877753] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea3830 with addr=10.0.0.2, port=4420 00:22:58.620 [2024-12-03 00:57:10.877762] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xea3830 is same with the state(5) to be set 00:22:58.620 [2024-12-03 00:57:10.877775] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xea3830 (9): Bad file descriptor 00:22:58.620 [2024-12-03 00:57:10.877825] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:22:58.620 [2024-12-03 00:57:10.877834] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:22:58.620 [2024-12-03 00:57:10.877842] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:22:58.621 [2024-12-03 00:57:10.877855] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:58.621 [2024-12-03 00:57:10.880541] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:22:58.621 [2024-12-03 00:57:10.880643] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:58.621 [2024-12-03 00:57:10.880684] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:58.621 [2024-12-03 00:57:10.880699] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe8e830 with addr=10.0.0.3, port=4420 00:22:58.621 [2024-12-03 00:57:10.880707] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe8e830 is same with the state(5) to be set 00:22:58.621 [2024-12-03 00:57:10.880721] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe8e830 (9): Bad file descriptor 00:22:58.621 [2024-12-03 00:57:10.880732] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:22:58.621 [2024-12-03 00:57:10.880739] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:22:58.621 [2024-12-03 00:57:10.880747] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:22:58.621 [2024-12-03 00:57:10.880759] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:22:58.621 [2024-12-03 00:57:10.887670] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:22:58.621 [2024-12-03 00:57:10.887758] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:58.621 [2024-12-03 00:57:10.887798] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:58.621 [2024-12-03 00:57:10.887813] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea3830 with addr=10.0.0.2, port=4420 00:22:58.621 [2024-12-03 00:57:10.887822] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xea3830 is same with the state(5) to be set 00:22:58.621 [2024-12-03 00:57:10.887835] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xea3830 (9): Bad file descriptor 00:22:58.621 [2024-12-03 00:57:10.887860] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:22:58.621 [2024-12-03 00:57:10.887868] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:22:58.621 [2024-12-03 00:57:10.887875] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:22:58.621 [2024-12-03 00:57:10.887887] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:58.621 [2024-12-03 00:57:10.890625] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:22:58.621 [2024-12-03 00:57:10.890731] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:58.621 [2024-12-03 00:57:10.890773] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:58.621 [2024-12-03 00:57:10.890802] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe8e830 with addr=10.0.0.3, port=4420 00:22:58.621 [2024-12-03 00:57:10.890811] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe8e830 is same with the state(5) to be set 00:22:58.621 [2024-12-03 00:57:10.890824] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe8e830 (9): Bad file descriptor 00:22:58.621 [2024-12-03 00:57:10.890836] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:22:58.621 [2024-12-03 00:57:10.890843] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:22:58.621 [2024-12-03 00:57:10.890850] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:22:58.621 [2024-12-03 00:57:10.890893] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:22:58.621 [2024-12-03 00:57:10.897715] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:22:58.621 [2024-12-03 00:57:10.897829] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:58.621 [2024-12-03 00:57:10.897888] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:58.621 [2024-12-03 00:57:10.897903] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea3830 with addr=10.0.0.2, port=4420 00:22:58.621 [2024-12-03 00:57:10.897912] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xea3830 is same with the state(5) to be set 00:22:58.621 [2024-12-03 00:57:10.897934] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xea3830 (9): Bad file descriptor 00:22:58.621 [2024-12-03 00:57:10.897961] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:22:58.621 [2024-12-03 00:57:10.897970] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:22:58.621 [2024-12-03 00:57:10.897977] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:22:58.621 [2024-12-03 00:57:10.898006] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:58.621 [2024-12-03 00:57:10.900702] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:22:58.621 [2024-12-03 00:57:10.900790] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:58.621 [2024-12-03 00:57:10.900830] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:58.621 [2024-12-03 00:57:10.900844] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe8e830 with addr=10.0.0.3, port=4420 00:22:58.621 [2024-12-03 00:57:10.900853] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe8e830 is same with the state(5) to be set 00:22:58.621 [2024-12-03 00:57:10.900866] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe8e830 (9): Bad file descriptor 00:22:58.621 [2024-12-03 00:57:10.900878] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:22:58.621 [2024-12-03 00:57:10.900885] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:22:58.621 [2024-12-03 00:57:10.900892] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:22:58.621 [2024-12-03 00:57:10.900904] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:22:58.621 [2024-12-03 00:57:10.907798] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:22:58.621 [2024-12-03 00:57:10.907886] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:58.621 [2024-12-03 00:57:10.907927] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:58.621 [2024-12-03 00:57:10.907941] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea3830 with addr=10.0.0.2, port=4420 00:22:58.621 [2024-12-03 00:57:10.907949] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xea3830 is same with the state(5) to be set 00:22:58.621 [2024-12-03 00:57:10.907962] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xea3830 (9): Bad file descriptor 00:22:58.621 [2024-12-03 00:57:10.907989] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:22:58.621 [2024-12-03 00:57:10.907997] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:22:58.621 [2024-12-03 00:57:10.908005] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:22:58.621 [2024-12-03 00:57:10.908017] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:58.621 [2024-12-03 00:57:10.910747] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:22:58.621 [2024-12-03 00:57:10.910852] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:58.621 [2024-12-03 00:57:10.910893] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:58.621 [2024-12-03 00:57:10.910908] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe8e830 with addr=10.0.0.3, port=4420 00:22:58.621 [2024-12-03 00:57:10.910916] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe8e830 is same with the state(5) to be set 00:22:58.621 [2024-12-03 00:57:10.910930] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe8e830 (9): Bad file descriptor 00:22:58.621 [2024-12-03 00:57:10.910959] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:22:58.621 [2024-12-03 00:57:10.910969] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:22:58.621 [2024-12-03 00:57:10.910977] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:22:58.621 [2024-12-03 00:57:10.910989] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:22:58.621 [2024-12-03 00:57:10.917842] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:22:58.621 [2024-12-03 00:57:10.917929] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:58.621 [2024-12-03 00:57:10.917970] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:58.621 [2024-12-03 00:57:10.917985] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea3830 with addr=10.0.0.2, port=4420 00:22:58.621 [2024-12-03 00:57:10.917993] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xea3830 is same with the state(5) to be set 00:22:58.621 [2024-12-03 00:57:10.918006] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xea3830 (9): Bad file descriptor 00:22:58.621 [2024-12-03 00:57:10.918032] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:22:58.621 [2024-12-03 00:57:10.918040] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:22:58.621 [2024-12-03 00:57:10.918047] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:22:58.621 [2024-12-03 00:57:10.918059] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:58.621 [2024-12-03 00:57:10.920807] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:22:58.621 [2024-12-03 00:57:10.920893] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:58.621 [2024-12-03 00:57:10.920934] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:58.621 [2024-12-03 00:57:10.920948] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe8e830 with addr=10.0.0.3, port=4420 00:22:58.621 [2024-12-03 00:57:10.920956] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe8e830 is same with the state(5) to be set 00:22:58.621 [2024-12-03 00:57:10.920969] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe8e830 (9): Bad file descriptor 00:22:58.621 [2024-12-03 00:57:10.920996] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:22:58.621 [2024-12-03 00:57:10.921005] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:22:58.621 [2024-12-03 00:57:10.921013] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:22:58.621 [2024-12-03 00:57:10.921025] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:22:58.621 [2024-12-03 00:57:10.927887] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:22:58.621 [2024-12-03 00:57:10.927975] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:58.622 [2024-12-03 00:57:10.928016] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:58.622 [2024-12-03 00:57:10.928030] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea3830 with addr=10.0.0.2, port=4420 00:22:58.622 [2024-12-03 00:57:10.928038] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xea3830 is same with the state(5) to be set 00:22:58.622 [2024-12-03 00:57:10.928052] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xea3830 (9): Bad file descriptor 00:22:58.622 [2024-12-03 00:57:10.928077] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:22:58.622 [2024-12-03 00:57:10.928085] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:22:58.622 [2024-12-03 00:57:10.928092] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:22:58.622 [2024-12-03 00:57:10.928104] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:58.622 [2024-12-03 00:57:10.930865] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:22:58.622 [2024-12-03 00:57:10.930969] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:58.622 [2024-12-03 00:57:10.931011] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:58.622 [2024-12-03 00:57:10.931026] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe8e830 with addr=10.0.0.3, port=4420 00:22:58.622 [2024-12-03 00:57:10.931035] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe8e830 is same with the state(5) to be set 00:22:58.622 [2024-12-03 00:57:10.931048] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe8e830 (9): Bad file descriptor 00:22:58.622 [2024-12-03 00:57:10.931076] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:22:58.622 [2024-12-03 00:57:10.931086] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:22:58.622 [2024-12-03 00:57:10.931094] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:22:58.622 [2024-12-03 00:57:10.931135] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
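[editor's note] The repeated "connect() failed, errno = 111" lines above come from posix_sock_create() while the controller reset keeps retrying port 4420; on Linux errno 111 is ECONNREFUSED, and the discovery poller output just below shows the 4420 paths being dropped while the 4421 paths remain. A minimal sketch (not part of the test suite; address and port are copied from the log and assume a reachable host with no listener on that port) showing the same errno from a plain connect():

```python
# Minimal sketch: reproduce the errno reported by posix_sock_create() when
# nothing is listening on the target port. Assumes 10.0.0.2 is reachable but
# has no NVMe/TCP listener on 4420, so the connection is refused immediately.
import errno
import socket

sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
try:
    sock.connect(("10.0.0.2", 4420))
except OSError as exc:
    # On Linux, errno 111 is ECONNREFUSED -- the same value printed by the
    # "connect() failed, errno = 111" lines in the log above.
    print(f"connect() failed, errno = {exc.errno} ({errno.errorcode[exc.errno]})")
finally:
    sock.close()
```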
00:22:58.622 [2024-12-03 00:57:10.936118] bdev_nvme.c:6546:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 not found 00:22:58.622 [2024-12-03 00:57:10.936160] bdev_nvme.c:6537:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:22:58.622 [2024-12-03 00:57:10.936178] bdev_nvme.c:6722:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:22:58.622 [2024-12-03 00:57:10.936209] bdev_nvme.c:6546:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.3:4420 not found 00:22:58.622 [2024-12-03 00:57:10.936221] bdev_nvme.c:6537:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.3:4421 found again 00:22:58.622 [2024-12-03 00:57:10.936233] bdev_nvme.c:6722:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:22:58.622 [2024-12-03 00:57:11.022179] bdev_nvme.c:6537:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:22:58.622 [2024-12-03 00:57:11.022255] bdev_nvme.c:6537:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.3:4421 found again 00:22:59.583 00:57:11 -- host/mdns_discovery.sh@164 -- # get_subsystem_names 00:22:59.583 00:57:11 -- host/mdns_discovery.sh@68 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:22:59.583 00:57:11 -- host/mdns_discovery.sh@68 -- # jq -r '.[].name' 00:22:59.583 00:57:11 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:59.583 00:57:11 -- host/mdns_discovery.sh@68 -- # sort 00:22:59.583 00:57:11 -- common/autotest_common.sh@10 -- # set +x 00:22:59.583 00:57:11 -- host/mdns_discovery.sh@68 -- # xargs 00:22:59.583 00:57:11 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:59.583 00:57:11 -- host/mdns_discovery.sh@164 -- # [[ mdns0_nvme0 mdns1_nvme0 == \m\d\n\s\0\_\n\v\m\e\0\ \m\d\n\s\1\_\n\v\m\e\0 ]] 00:22:59.583 00:57:11 -- host/mdns_discovery.sh@165 -- # get_bdev_list 00:22:59.583 00:57:11 -- host/mdns_discovery.sh@64 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:59.583 00:57:11 -- host/mdns_discovery.sh@64 -- # sort 00:22:59.583 00:57:11 -- host/mdns_discovery.sh@64 -- # jq -r '.[].name' 00:22:59.583 00:57:11 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:59.583 00:57:11 -- common/autotest_common.sh@10 -- # set +x 00:22:59.583 00:57:11 -- host/mdns_discovery.sh@64 -- # xargs 00:22:59.583 00:57:11 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:59.583 00:57:11 -- host/mdns_discovery.sh@165 -- # [[ mdns0_nvme0n1 mdns0_nvme0n2 mdns1_nvme0n1 mdns1_nvme0n2 == \m\d\n\s\0\_\n\v\m\e\0\n\1\ \m\d\n\s\0\_\n\v\m\e\0\n\2\ \m\d\n\s\1\_\n\v\m\e\0\n\1\ \m\d\n\s\1\_\n\v\m\e\0\n\2 ]] 00:22:59.583 00:57:11 -- host/mdns_discovery.sh@166 -- # get_subsystem_paths mdns0_nvme0 00:22:59.583 00:57:11 -- host/mdns_discovery.sh@72 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n mdns0_nvme0 00:22:59.583 00:57:11 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:59.583 00:57:11 -- host/mdns_discovery.sh@72 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:22:59.583 00:57:11 -- host/mdns_discovery.sh@72 -- # sort -n 00:22:59.583 00:57:11 -- common/autotest_common.sh@10 -- # set +x 00:22:59.583 00:57:11 -- host/mdns_discovery.sh@72 -- # xargs 00:22:59.583 00:57:11 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:22:59.583 00:57:11 -- host/mdns_discovery.sh@166 -- # [[ 4421 == \4\4\2\1 ]] 00:22:59.583 00:57:11 -- host/mdns_discovery.sh@167 -- # get_subsystem_paths mdns1_nvme0 00:22:59.583 00:57:11 -- host/mdns_discovery.sh@72 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n mdns1_nvme0 00:22:59.583 00:57:11 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:59.583 00:57:11 -- host/mdns_discovery.sh@72 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:22:59.583 00:57:11 -- common/autotest_common.sh@10 -- # set +x 00:22:59.583 00:57:11 -- host/mdns_discovery.sh@72 -- # sort -n 00:22:59.583 00:57:11 -- host/mdns_discovery.sh@72 -- # xargs 00:22:59.583 00:57:12 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:59.583 00:57:12 -- host/mdns_discovery.sh@167 -- # [[ 4421 == \4\4\2\1 ]] 00:22:59.583 00:57:12 -- host/mdns_discovery.sh@168 -- # get_notification_count 00:22:59.583 00:57:12 -- host/mdns_discovery.sh@87 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 4 00:22:59.583 00:57:12 -- host/mdns_discovery.sh@87 -- # jq '. | length' 00:22:59.583 00:57:12 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:59.583 00:57:12 -- common/autotest_common.sh@10 -- # set +x 00:22:59.583 00:57:12 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:59.583 00:57:12 -- host/mdns_discovery.sh@87 -- # notification_count=0 00:22:59.583 00:57:12 -- host/mdns_discovery.sh@88 -- # notify_id=4 00:22:59.842 00:57:12 -- host/mdns_discovery.sh@169 -- # [[ 0 == 0 ]] 00:22:59.842 00:57:12 -- host/mdns_discovery.sh@171 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_mdns_discovery -b mdns 00:22:59.842 00:57:12 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:59.842 00:57:12 -- common/autotest_common.sh@10 -- # set +x 00:22:59.842 00:57:12 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:59.842 00:57:12 -- host/mdns_discovery.sh@172 -- # sleep 1 00:22:59.842 [2024-12-03 00:57:12.183045] bdev_mdns_client.c: 424:bdev_nvme_avahi_iterate: *INFO*: Stopping avahi poller for service _nvme-disc._tcp 00:23:00.778 00:57:13 -- host/mdns_discovery.sh@174 -- # get_mdns_discovery_svcs 00:23:00.778 00:57:13 -- host/mdns_discovery.sh@80 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_mdns_discovery_info 00:23:00.778 00:57:13 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:00.778 00:57:13 -- host/mdns_discovery.sh@80 -- # jq -r '.[].name' 00:23:00.778 00:57:13 -- common/autotest_common.sh@10 -- # set +x 00:23:00.778 00:57:13 -- host/mdns_discovery.sh@80 -- # sort 00:23:00.778 00:57:13 -- host/mdns_discovery.sh@80 -- # xargs 00:23:00.778 00:57:13 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:00.778 00:57:13 -- host/mdns_discovery.sh@174 -- # [[ '' == '' ]] 00:23:00.778 00:57:13 -- host/mdns_discovery.sh@175 -- # get_subsystem_names 00:23:00.778 00:57:13 -- host/mdns_discovery.sh@68 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:23:00.778 00:57:13 -- host/mdns_discovery.sh@68 -- # jq -r '.[].name' 00:23:00.778 00:57:13 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:00.778 00:57:13 -- common/autotest_common.sh@10 -- # set +x 00:23:00.778 00:57:13 -- host/mdns_discovery.sh@68 -- # sort 00:23:00.778 00:57:13 -- host/mdns_discovery.sh@68 -- # xargs 00:23:00.778 00:57:13 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:00.778 00:57:13 -- host/mdns_discovery.sh@175 -- # [[ '' == '' ]] 00:23:00.778 00:57:13 -- host/mdns_discovery.sh@176 -- # get_bdev_list 00:23:00.778 00:57:13 -- host/mdns_discovery.sh@64 -- # rpc_cmd -s /tmp/host.sock 
bdev_get_bdevs 00:23:00.778 00:57:13 -- host/mdns_discovery.sh@64 -- # sort 00:23:00.778 00:57:13 -- host/mdns_discovery.sh@64 -- # jq -r '.[].name' 00:23:00.778 00:57:13 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:00.778 00:57:13 -- host/mdns_discovery.sh@64 -- # xargs 00:23:00.778 00:57:13 -- common/autotest_common.sh@10 -- # set +x 00:23:00.778 00:57:13 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:00.778 00:57:13 -- host/mdns_discovery.sh@176 -- # [[ '' == '' ]] 00:23:00.778 00:57:13 -- host/mdns_discovery.sh@177 -- # get_notification_count 00:23:00.778 00:57:13 -- host/mdns_discovery.sh@87 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 4 00:23:00.778 00:57:13 -- host/mdns_discovery.sh@87 -- # jq '. | length' 00:23:00.778 00:57:13 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:00.778 00:57:13 -- common/autotest_common.sh@10 -- # set +x 00:23:01.037 00:57:13 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:01.037 00:57:13 -- host/mdns_discovery.sh@87 -- # notification_count=4 00:23:01.037 00:57:13 -- host/mdns_discovery.sh@88 -- # notify_id=8 00:23:01.037 00:57:13 -- host/mdns_discovery.sh@178 -- # [[ 4 == 4 ]] 00:23:01.037 00:57:13 -- host/mdns_discovery.sh@181 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_mdns_discovery -b mdns -s _nvme-disc._tcp -q nqn.2021-12.io.spdk:test 00:23:01.037 00:57:13 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:01.037 00:57:13 -- common/autotest_common.sh@10 -- # set +x 00:23:01.037 00:57:13 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:01.037 00:57:13 -- host/mdns_discovery.sh@182 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_mdns_discovery -b mdns -s _nvme-disc._http -q nqn.2021-12.io.spdk:test 00:23:01.037 00:57:13 -- common/autotest_common.sh@650 -- # local es=0 00:23:01.037 00:57:13 -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_mdns_discovery -b mdns -s _nvme-disc._http -q nqn.2021-12.io.spdk:test 00:23:01.037 00:57:13 -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:23:01.037 00:57:13 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:01.037 00:57:13 -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:23:01.037 00:57:13 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:01.037 00:57:13 -- common/autotest_common.sh@653 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_mdns_discovery -b mdns -s _nvme-disc._http -q nqn.2021-12.io.spdk:test 00:23:01.037 00:57:13 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:01.037 00:57:13 -- common/autotest_common.sh@10 -- # set +x 00:23:01.037 [2024-12-03 00:57:13.342359] bdev_mdns_client.c: 470:bdev_nvme_start_mdns_discovery: *ERROR*: mDNS discovery already running with name mdns 00:23:01.037 2024/12/03 00:57:13 error on JSON-RPC call, method: bdev_nvme_start_mdns_discovery, params: map[hostnqn:nqn.2021-12.io.spdk:test name:mdns svcname:_nvme-disc._http], err: error received for bdev_nvme_start_mdns_discovery method, err: Code=-17 Msg=File exists 00:23:01.037 request: 00:23:01.037 { 00:23:01.037 "method": "bdev_nvme_start_mdns_discovery", 00:23:01.037 "params": { 00:23:01.037 "name": "mdns", 00:23:01.037 "svcname": "_nvme-disc._http", 00:23:01.037 "hostnqn": "nqn.2021-12.io.spdk:test" 00:23:01.037 } 00:23:01.037 } 00:23:01.037 Got JSON-RPC error response 00:23:01.037 GoRPCClient: error on JSON-RPC call 00:23:01.037 00:57:13 -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:23:01.037 00:57:13 -- 
common/autotest_common.sh@653 -- # es=1 00:23:01.037 00:57:13 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:23:01.037 00:57:13 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:23:01.037 00:57:13 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:23:01.037 00:57:13 -- host/mdns_discovery.sh@183 -- # sleep 5 00:23:01.296 [2024-12-03 00:57:13.730790] bdev_mdns_client.c: 395:mdns_browse_handler: *INFO*: (Browser) CACHE_EXHAUSTED 00:23:01.555 [2024-12-03 00:57:13.830788] bdev_mdns_client.c: 395:mdns_browse_handler: *INFO*: (Browser) ALL_FOR_NOW 00:23:01.555 [2024-12-03 00:57:13.930792] bdev_mdns_client.c: 254:mdns_resolve_handler: *INFO*: Service 'CDC' of type '_nvme-disc._tcp' in domain 'local' 00:23:01.555 [2024-12-03 00:57:13.930812] bdev_mdns_client.c: 259:mdns_resolve_handler: *INFO*: fedora39-cloud-1721788873-2326.local:8009 (10.0.0.3) 00:23:01.555 TXT="p=tcp" "NQN=nqn.2014-08.org.nvmexpress.discovery" 00:23:01.555 cookie is 0 00:23:01.555 is_local: 1 00:23:01.555 our_own: 0 00:23:01.555 wide_area: 0 00:23:01.555 multicast: 1 00:23:01.555 cached: 1 00:23:01.555 [2024-12-03 00:57:14.030793] bdev_mdns_client.c: 254:mdns_resolve_handler: *INFO*: Service 'CDC' of type '_nvme-disc._tcp' in domain 'local' 00:23:01.555 [2024-12-03 00:57:14.030814] bdev_mdns_client.c: 259:mdns_resolve_handler: *INFO*: fedora39-cloud-1721788873-2326.local:8009 (10.0.0.2) 00:23:01.555 TXT="p=tcp" "NQN=nqn.2014-08.org.nvmexpress.discovery" 00:23:01.555 cookie is 0 00:23:01.555 is_local: 1 00:23:01.555 our_own: 0 00:23:01.555 wide_area: 0 00:23:01.555 multicast: 1 00:23:01.555 cached: 1 00:23:02.491 [2024-12-03 00:57:14.938864] bdev_nvme.c:6759:discovery_attach_cb: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr attached 00:23:02.491 [2024-12-03 00:57:14.938887] bdev_nvme.c:6839:discovery_poller: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr connected 00:23:02.491 [2024-12-03 00:57:14.938903] bdev_nvme.c:6722:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:23:02.750 [2024-12-03 00:57:15.024949] bdev_nvme.c:6688:discovery_log_page_cb: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.3:4421 new subsystem mdns0_nvme0 00:23:02.750 [2024-12-03 00:57:15.038804] bdev_nvme.c:6759:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:23:02.750 [2024-12-03 00:57:15.038824] bdev_nvme.c:6839:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:23:02.750 [2024-12-03 00:57:15.038838] bdev_nvme.c:6722:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:23:02.750 [2024-12-03 00:57:15.089675] bdev_nvme.c:6578:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.3:8009] attach mdns0_nvme0 done 00:23:02.750 [2024-12-03 00:57:15.089702] bdev_nvme.c:6537:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.3:4421 found again 00:23:02.750 [2024-12-03 00:57:15.124777] bdev_nvme.c:6688:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new subsystem mdns1_nvme0 00:23:02.750 [2024-12-03 00:57:15.183356] bdev_nvme.c:6578:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach mdns1_nvme0 done 00:23:02.750 [2024-12-03 00:57:15.183383] bdev_nvme.c:6537:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:23:06.038 00:57:18 -- host/mdns_discovery.sh@185 -- # 
get_mdns_discovery_svcs 00:23:06.038 00:57:18 -- host/mdns_discovery.sh@80 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_mdns_discovery_info 00:23:06.038 00:57:18 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:06.038 00:57:18 -- common/autotest_common.sh@10 -- # set +x 00:23:06.038 00:57:18 -- host/mdns_discovery.sh@80 -- # jq -r '.[].name' 00:23:06.038 00:57:18 -- host/mdns_discovery.sh@80 -- # sort 00:23:06.038 00:57:18 -- host/mdns_discovery.sh@80 -- # xargs 00:23:06.038 00:57:18 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:06.038 00:57:18 -- host/mdns_discovery.sh@185 -- # [[ mdns == \m\d\n\s ]] 00:23:06.038 00:57:18 -- host/mdns_discovery.sh@186 -- # get_discovery_ctrlrs 00:23:06.038 00:57:18 -- host/mdns_discovery.sh@76 -- # jq -r '.[].name' 00:23:06.038 00:57:18 -- host/mdns_discovery.sh@76 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:23:06.039 00:57:18 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:06.039 00:57:18 -- common/autotest_common.sh@10 -- # set +x 00:23:06.039 00:57:18 -- host/mdns_discovery.sh@76 -- # xargs 00:23:06.039 00:57:18 -- host/mdns_discovery.sh@76 -- # sort 00:23:06.039 00:57:18 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:06.039 00:57:18 -- host/mdns_discovery.sh@186 -- # [[ mdns0_nvme mdns1_nvme == \m\d\n\s\0\_\n\v\m\e\ \m\d\n\s\1\_\n\v\m\e ]] 00:23:06.039 00:57:18 -- host/mdns_discovery.sh@187 -- # get_bdev_list 00:23:06.039 00:57:18 -- host/mdns_discovery.sh@64 -- # jq -r '.[].name' 00:23:06.039 00:57:18 -- host/mdns_discovery.sh@64 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:06.039 00:57:18 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:06.039 00:57:18 -- host/mdns_discovery.sh@64 -- # sort 00:23:06.039 00:57:18 -- common/autotest_common.sh@10 -- # set +x 00:23:06.039 00:57:18 -- host/mdns_discovery.sh@64 -- # xargs 00:23:06.039 00:57:18 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:06.039 00:57:18 -- host/mdns_discovery.sh@187 -- # [[ mdns0_nvme0n1 mdns0_nvme0n2 mdns1_nvme0n1 mdns1_nvme0n2 == \m\d\n\s\0\_\n\v\m\e\0\n\1\ \m\d\n\s\0\_\n\v\m\e\0\n\2\ \m\d\n\s\1\_\n\v\m\e\0\n\1\ \m\d\n\s\1\_\n\v\m\e\0\n\2 ]] 00:23:06.039 00:57:18 -- host/mdns_discovery.sh@190 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_mdns_discovery -b cdc -s _nvme-disc._tcp -q nqn.2021-12.io.spdk:test 00:23:06.039 00:57:18 -- common/autotest_common.sh@650 -- # local es=0 00:23:06.039 00:57:18 -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_mdns_discovery -b cdc -s _nvme-disc._tcp -q nqn.2021-12.io.spdk:test 00:23:06.039 00:57:18 -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:23:06.039 00:57:18 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:06.039 00:57:18 -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:23:06.039 00:57:18 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:06.039 00:57:18 -- common/autotest_common.sh@653 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_mdns_discovery -b cdc -s _nvme-disc._tcp -q nqn.2021-12.io.spdk:test 00:23:06.039 00:57:18 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:06.039 00:57:18 -- common/autotest_common.sh@10 -- # set +x 00:23:06.039 [2024-12-03 00:57:18.522976] bdev_mdns_client.c: 475:bdev_nvme_start_mdns_discovery: *ERROR*: mDNS discovery already running for service _nvme-disc._tcp 00:23:06.039 2024/12/03 00:57:18 error on JSON-RPC call, method: bdev_nvme_start_mdns_discovery, params: map[hostnqn:nqn.2021-12.io.spdk:test 
name:cdc svcname:_nvme-disc._tcp], err: error received for bdev_nvme_start_mdns_discovery method, err: Code=-17 Msg=File exists 00:23:06.039 request: 00:23:06.039 { 00:23:06.039 "method": "bdev_nvme_start_mdns_discovery", 00:23:06.039 "params": { 00:23:06.039 "name": "cdc", 00:23:06.039 "svcname": "_nvme-disc._tcp", 00:23:06.039 "hostnqn": "nqn.2021-12.io.spdk:test" 00:23:06.039 } 00:23:06.039 } 00:23:06.039 Got JSON-RPC error response 00:23:06.039 GoRPCClient: error on JSON-RPC call 00:23:06.039 00:57:18 -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:23:06.039 00:57:18 -- common/autotest_common.sh@653 -- # es=1 00:23:06.039 00:57:18 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:23:06.039 00:57:18 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:23:06.039 00:57:18 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:23:06.039 00:57:18 -- host/mdns_discovery.sh@191 -- # get_discovery_ctrlrs 00:23:06.039 00:57:18 -- host/mdns_discovery.sh@76 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:23:06.039 00:57:18 -- host/mdns_discovery.sh@76 -- # jq -r '.[].name' 00:23:06.039 00:57:18 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:06.039 00:57:18 -- host/mdns_discovery.sh@76 -- # sort 00:23:06.039 00:57:18 -- common/autotest_common.sh@10 -- # set +x 00:23:06.039 00:57:18 -- host/mdns_discovery.sh@76 -- # xargs 00:23:06.039 00:57:18 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:06.296 00:57:18 -- host/mdns_discovery.sh@191 -- # [[ mdns0_nvme mdns1_nvme == \m\d\n\s\0\_\n\v\m\e\ \m\d\n\s\1\_\n\v\m\e ]] 00:23:06.296 00:57:18 -- host/mdns_discovery.sh@192 -- # get_bdev_list 00:23:06.296 00:57:18 -- host/mdns_discovery.sh@64 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:06.296 00:57:18 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:06.296 00:57:18 -- host/mdns_discovery.sh@64 -- # sort 00:23:06.296 00:57:18 -- host/mdns_discovery.sh@64 -- # jq -r '.[].name' 00:23:06.296 00:57:18 -- common/autotest_common.sh@10 -- # set +x 00:23:06.296 00:57:18 -- host/mdns_discovery.sh@64 -- # xargs 00:23:06.296 00:57:18 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:06.297 00:57:18 -- host/mdns_discovery.sh@192 -- # [[ mdns0_nvme0n1 mdns0_nvme0n2 mdns1_nvme0n1 mdns1_nvme0n2 == \m\d\n\s\0\_\n\v\m\e\0\n\1\ \m\d\n\s\0\_\n\v\m\e\0\n\2\ \m\d\n\s\1\_\n\v\m\e\0\n\1\ \m\d\n\s\1\_\n\v\m\e\0\n\2 ]] 00:23:06.297 00:57:18 -- host/mdns_discovery.sh@193 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_mdns_discovery -b mdns 00:23:06.297 00:57:18 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:06.297 00:57:18 -- common/autotest_common.sh@10 -- # set +x 00:23:06.297 00:57:18 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:06.297 00:57:18 -- host/mdns_discovery.sh@195 -- # trap - SIGINT SIGTERM EXIT 00:23:06.297 00:57:18 -- host/mdns_discovery.sh@197 -- # kill 98439 00:23:06.297 00:57:18 -- host/mdns_discovery.sh@200 -- # wait 98439 00:23:06.297 [2024-12-03 00:57:18.744499] bdev_mdns_client.c: 424:bdev_nvme_avahi_iterate: *INFO*: Stopping avahi poller for service _nvme-disc._tcp 00:23:06.555 00:57:18 -- host/mdns_discovery.sh@201 -- # kill 98526 00:23:06.555 Got SIGTERM, quitting. 00:23:06.555 00:57:18 -- host/mdns_discovery.sh@202 -- # kill 98475 00:23:06.555 00:57:18 -- host/mdns_discovery.sh@203 -- # nvmftestfini 00:23:06.555 00:57:18 -- nvmf/common.sh@476 -- # nvmfcleanup 00:23:06.555 00:57:18 -- nvmf/common.sh@116 -- # sync 00:23:06.555 Got SIGTERM, quitting. 
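[editor's note] Both rejected bdev_nvme_start_mdns_discovery calls above return JSON-RPC error Code=-17, i.e. -EEXIST ("File exists"), because a discovery context named "mdns" for service "_nvme-disc._tcp" is already running, exactly as the two bdev_mdns_client.c *ERROR* lines state. A minimal sketch (not part of the test; it assumes the target is still up and listening on /tmp/host.sock, and that one recv() is enough for this small error reply) of sending the same request over the RPC Unix socket and reading the error back:

```python
# Minimal sketch: issue the duplicate mDNS discovery request shown in the log
# over the application's JSON-RPC Unix socket and print the error object.
# Socket path, method name, and parameters are copied from the log above.
import json
import socket

req = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "bdev_nvme_start_mdns_discovery",
    "params": {
        "name": "mdns",
        "svcname": "_nvme-disc._http",
        "hostnqn": "nqn.2021-12.io.spdk:test",
    },
}

with socket.socket(socket.AF_UNIX, socket.SOCK_STREAM) as sock:
    sock.connect("/tmp/host.sock")
    sock.sendall(json.dumps(req).encode())
    resp = json.loads(sock.recv(65536).decode())

# With a discovery context named "mdns" already running, the reply carries
# code -17 (File exists), matching the GoRPCClient error in the log.
print(resp.get("error"))
```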
00:23:06.555 Leaving mDNS multicast group on interface nvmf_tgt_if2.IPv4 with address 10.0.0.3. 00:23:06.555 Leaving mDNS multicast group on interface nvmf_tgt_if.IPv4 with address 10.0.0.2. 00:23:06.555 avahi-daemon 0.8 exiting. 00:23:06.555 00:57:18 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:23:06.555 00:57:18 -- nvmf/common.sh@119 -- # set +e 00:23:06.555 00:57:18 -- nvmf/common.sh@120 -- # for i in {1..20} 00:23:06.555 00:57:18 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:23:06.555 rmmod nvme_tcp 00:23:06.555 rmmod nvme_fabrics 00:23:06.555 rmmod nvme_keyring 00:23:06.555 00:57:18 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:23:06.555 00:57:18 -- nvmf/common.sh@123 -- # set -e 00:23:06.555 00:57:18 -- nvmf/common.sh@124 -- # return 0 00:23:06.555 00:57:18 -- nvmf/common.sh@477 -- # '[' -n 98407 ']' 00:23:06.555 00:57:18 -- nvmf/common.sh@478 -- # killprocess 98407 00:23:06.555 00:57:18 -- common/autotest_common.sh@936 -- # '[' -z 98407 ']' 00:23:06.555 00:57:18 -- common/autotest_common.sh@940 -- # kill -0 98407 00:23:06.555 00:57:18 -- common/autotest_common.sh@941 -- # uname 00:23:06.555 00:57:18 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:23:06.555 00:57:18 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 98407 00:23:06.555 00:57:18 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:23:06.555 00:57:18 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:23:06.555 00:57:18 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 98407' 00:23:06.555 killing process with pid 98407 00:23:06.555 00:57:18 -- common/autotest_common.sh@955 -- # kill 98407 00:23:06.555 00:57:18 -- common/autotest_common.sh@960 -- # wait 98407 00:23:06.814 00:57:19 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:23:06.814 00:57:19 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:23:06.814 00:57:19 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:23:06.814 00:57:19 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:23:06.814 00:57:19 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:23:06.814 00:57:19 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:06.814 00:57:19 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:06.814 00:57:19 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:06.814 00:57:19 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:23:06.814 00:23:06.814 real 0m19.987s 00:23:06.814 user 0m39.444s 00:23:06.814 sys 0m1.994s 00:23:06.814 00:57:19 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:23:06.814 00:57:19 -- common/autotest_common.sh@10 -- # set +x 00:23:06.814 ************************************ 00:23:06.814 END TEST nvmf_mdns_discovery 00:23:06.814 ************************************ 00:23:06.814 00:57:19 -- nvmf/nvmf.sh@115 -- # [[ 1 -eq 1 ]] 00:23:06.814 00:57:19 -- nvmf/nvmf.sh@116 -- # run_test nvmf_multipath /home/vagrant/spdk_repo/spdk/test/nvmf/host/multipath.sh --transport=tcp 00:23:06.814 00:57:19 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:23:06.814 00:57:19 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:23:06.814 00:57:19 -- common/autotest_common.sh@10 -- # set +x 00:23:06.814 ************************************ 00:23:06.814 START TEST nvmf_multipath 00:23:06.814 ************************************ 00:23:06.814 00:57:19 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/multipath.sh --transport=tcp 00:23:07.073 * Looking for 
test storage... 00:23:07.073 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:23:07.073 00:57:19 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:23:07.073 00:57:19 -- common/autotest_common.sh@1690 -- # lcov --version 00:23:07.073 00:57:19 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:23:07.073 00:57:19 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:23:07.073 00:57:19 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:23:07.073 00:57:19 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:23:07.073 00:57:19 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:23:07.073 00:57:19 -- scripts/common.sh@335 -- # IFS=.-: 00:23:07.074 00:57:19 -- scripts/common.sh@335 -- # read -ra ver1 00:23:07.074 00:57:19 -- scripts/common.sh@336 -- # IFS=.-: 00:23:07.074 00:57:19 -- scripts/common.sh@336 -- # read -ra ver2 00:23:07.074 00:57:19 -- scripts/common.sh@337 -- # local 'op=<' 00:23:07.074 00:57:19 -- scripts/common.sh@339 -- # ver1_l=2 00:23:07.074 00:57:19 -- scripts/common.sh@340 -- # ver2_l=1 00:23:07.074 00:57:19 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:23:07.074 00:57:19 -- scripts/common.sh@343 -- # case "$op" in 00:23:07.074 00:57:19 -- scripts/common.sh@344 -- # : 1 00:23:07.074 00:57:19 -- scripts/common.sh@363 -- # (( v = 0 )) 00:23:07.074 00:57:19 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:23:07.074 00:57:19 -- scripts/common.sh@364 -- # decimal 1 00:23:07.074 00:57:19 -- scripts/common.sh@352 -- # local d=1 00:23:07.074 00:57:19 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:07.074 00:57:19 -- scripts/common.sh@354 -- # echo 1 00:23:07.074 00:57:19 -- scripts/common.sh@364 -- # ver1[v]=1 00:23:07.074 00:57:19 -- scripts/common.sh@365 -- # decimal 2 00:23:07.074 00:57:19 -- scripts/common.sh@352 -- # local d=2 00:23:07.074 00:57:19 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:23:07.074 00:57:19 -- scripts/common.sh@354 -- # echo 2 00:23:07.074 00:57:19 -- scripts/common.sh@365 -- # ver2[v]=2 00:23:07.074 00:57:19 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:23:07.074 00:57:19 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:23:07.074 00:57:19 -- scripts/common.sh@367 -- # return 0 00:23:07.074 00:57:19 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:23:07.074 00:57:19 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:23:07.074 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:07.074 --rc genhtml_branch_coverage=1 00:23:07.074 --rc genhtml_function_coverage=1 00:23:07.074 --rc genhtml_legend=1 00:23:07.074 --rc geninfo_all_blocks=1 00:23:07.074 --rc geninfo_unexecuted_blocks=1 00:23:07.074 00:23:07.074 ' 00:23:07.074 00:57:19 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:23:07.074 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:07.074 --rc genhtml_branch_coverage=1 00:23:07.074 --rc genhtml_function_coverage=1 00:23:07.074 --rc genhtml_legend=1 00:23:07.074 --rc geninfo_all_blocks=1 00:23:07.074 --rc geninfo_unexecuted_blocks=1 00:23:07.074 00:23:07.074 ' 00:23:07.074 00:57:19 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:23:07.074 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:07.074 --rc genhtml_branch_coverage=1 00:23:07.074 --rc genhtml_function_coverage=1 00:23:07.074 --rc genhtml_legend=1 00:23:07.074 --rc geninfo_all_blocks=1 00:23:07.074 --rc geninfo_unexecuted_blocks=1 00:23:07.074 00:23:07.074 ' 
00:23:07.074 00:57:19 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:23:07.074 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:07.074 --rc genhtml_branch_coverage=1 00:23:07.074 --rc genhtml_function_coverage=1 00:23:07.074 --rc genhtml_legend=1 00:23:07.074 --rc geninfo_all_blocks=1 00:23:07.074 --rc geninfo_unexecuted_blocks=1 00:23:07.074 00:23:07.074 ' 00:23:07.074 00:57:19 -- host/multipath.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:23:07.074 00:57:19 -- nvmf/common.sh@7 -- # uname -s 00:23:07.074 00:57:19 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:07.074 00:57:19 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:07.074 00:57:19 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:07.074 00:57:19 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:07.074 00:57:19 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:07.074 00:57:19 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:07.074 00:57:19 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:07.074 00:57:19 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:07.074 00:57:19 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:07.074 00:57:19 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:07.074 00:57:19 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:15939434-fa82-47b6-ae7c-b62ba203cee8 00:23:07.074 00:57:19 -- nvmf/common.sh@18 -- # NVME_HOSTID=15939434-fa82-47b6-ae7c-b62ba203cee8 00:23:07.074 00:57:19 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:07.074 00:57:19 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:07.074 00:57:19 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:23:07.074 00:57:19 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:23:07.074 00:57:19 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:07.074 00:57:19 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:07.074 00:57:19 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:07.074 00:57:19 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:07.074 00:57:19 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:07.074 00:57:19 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:07.074 00:57:19 -- paths/export.sh@5 -- # export PATH 00:23:07.074 00:57:19 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:07.074 00:57:19 -- nvmf/common.sh@46 -- # : 0 00:23:07.074 00:57:19 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:23:07.074 00:57:19 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:23:07.074 00:57:19 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:23:07.074 00:57:19 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:07.074 00:57:19 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:07.074 00:57:19 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:23:07.074 00:57:19 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:23:07.074 00:57:19 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:23:07.074 00:57:19 -- host/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:23:07.074 00:57:19 -- host/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:23:07.074 00:57:19 -- host/multipath.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:23:07.074 00:57:19 -- host/multipath.sh@15 -- # bpf_sh=/home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 00:23:07.074 00:57:19 -- host/multipath.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:23:07.074 00:57:19 -- host/multipath.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:23:07.074 00:57:19 -- host/multipath.sh@30 -- # nvmftestinit 00:23:07.074 00:57:19 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:23:07.074 00:57:19 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:07.074 00:57:19 -- nvmf/common.sh@436 -- # prepare_net_devs 00:23:07.074 00:57:19 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:23:07.074 00:57:19 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:23:07.074 00:57:19 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:07.074 00:57:19 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:07.074 00:57:19 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:07.074 00:57:19 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:23:07.074 00:57:19 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:23:07.074 00:57:19 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:23:07.074 00:57:19 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:23:07.074 00:57:19 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:23:07.074 00:57:19 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:23:07.074 00:57:19 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:07.074 00:57:19 -- nvmf/common.sh@141 -- # 
NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:07.074 00:57:19 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:23:07.074 00:57:19 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:23:07.074 00:57:19 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:23:07.074 00:57:19 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:23:07.074 00:57:19 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:23:07.074 00:57:19 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:07.074 00:57:19 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:23:07.074 00:57:19 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:23:07.074 00:57:19 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:23:07.074 00:57:19 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:23:07.074 00:57:19 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:23:07.074 00:57:19 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:23:07.074 Cannot find device "nvmf_tgt_br" 00:23:07.074 00:57:19 -- nvmf/common.sh@154 -- # true 00:23:07.074 00:57:19 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:23:07.074 Cannot find device "nvmf_tgt_br2" 00:23:07.074 00:57:19 -- nvmf/common.sh@155 -- # true 00:23:07.074 00:57:19 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:23:07.074 00:57:19 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:23:07.074 Cannot find device "nvmf_tgt_br" 00:23:07.074 00:57:19 -- nvmf/common.sh@157 -- # true 00:23:07.074 00:57:19 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:23:07.074 Cannot find device "nvmf_tgt_br2" 00:23:07.074 00:57:19 -- nvmf/common.sh@158 -- # true 00:23:07.074 00:57:19 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:23:07.333 00:57:19 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:23:07.333 00:57:19 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:23:07.333 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:23:07.333 00:57:19 -- nvmf/common.sh@161 -- # true 00:23:07.333 00:57:19 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:23:07.333 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:23:07.333 00:57:19 -- nvmf/common.sh@162 -- # true 00:23:07.333 00:57:19 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:23:07.333 00:57:19 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:23:07.333 00:57:19 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:23:07.333 00:57:19 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:23:07.333 00:57:19 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:23:07.333 00:57:19 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:23:07.333 00:57:19 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:23:07.333 00:57:19 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:23:07.333 00:57:19 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:23:07.333 00:57:19 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:23:07.333 00:57:19 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:23:07.333 00:57:19 -- nvmf/common.sh@184 -- # ip 
link set nvmf_tgt_br up 00:23:07.333 00:57:19 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:23:07.333 00:57:19 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:23:07.333 00:57:19 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:23:07.333 00:57:19 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:23:07.333 00:57:19 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:23:07.333 00:57:19 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:23:07.333 00:57:19 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:23:07.333 00:57:19 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:23:07.333 00:57:19 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:23:07.333 00:57:19 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:23:07.333 00:57:19 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:23:07.333 00:57:19 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:23:07.333 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:07.333 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.061 ms 00:23:07.333 00:23:07.333 --- 10.0.0.2 ping statistics --- 00:23:07.333 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:07.333 rtt min/avg/max/mdev = 0.061/0.061/0.061/0.000 ms 00:23:07.333 00:57:19 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:23:07.333 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:23:07.334 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.049 ms 00:23:07.334 00:23:07.334 --- 10.0.0.3 ping statistics --- 00:23:07.334 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:07.334 rtt min/avg/max/mdev = 0.049/0.049/0.049/0.000 ms 00:23:07.334 00:57:19 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:23:07.334 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:23:07.334 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.030 ms 00:23:07.334 00:23:07.334 --- 10.0.0.1 ping statistics --- 00:23:07.334 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:07.334 rtt min/avg/max/mdev = 0.030/0.030/0.030/0.000 ms 00:23:07.334 00:57:19 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:07.334 00:57:19 -- nvmf/common.sh@421 -- # return 0 00:23:07.334 00:57:19 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:23:07.334 00:57:19 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:07.334 00:57:19 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:23:07.334 00:57:19 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:23:07.334 00:57:19 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:07.334 00:57:19 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:23:07.334 00:57:19 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:23:07.334 00:57:19 -- host/multipath.sh@32 -- # nvmfappstart -m 0x3 00:23:07.334 00:57:19 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:23:07.334 00:57:19 -- common/autotest_common.sh@722 -- # xtrace_disable 00:23:07.334 00:57:19 -- common/autotest_common.sh@10 -- # set +x 00:23:07.334 00:57:19 -- nvmf/common.sh@469 -- # nvmfpid=99041 00:23:07.334 00:57:19 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:23:07.334 00:57:19 -- nvmf/common.sh@470 -- # waitforlisten 99041 00:23:07.334 00:57:19 -- common/autotest_common.sh@829 -- # '[' -z 99041 ']' 00:23:07.334 00:57:19 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:07.334 00:57:19 -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:07.334 00:57:19 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:07.334 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:07.334 00:57:19 -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:07.334 00:57:19 -- common/autotest_common.sh@10 -- # set +x 00:23:07.591 [2024-12-03 00:57:19.897578] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:23:07.591 [2024-12-03 00:57:19.897671] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:07.591 [2024-12-03 00:57:20.042672] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:23:07.849 [2024-12-03 00:57:20.128213] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:23:07.849 [2024-12-03 00:57:20.128669] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:07.849 [2024-12-03 00:57:20.128818] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:07.849 [2024-12-03 00:57:20.128932] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
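For reference, the network plumbing that nvmf/common.sh performs above reduces to the following consolidated sketch. Every command is taken from the log output itself (the nvmf_tgt_ns_spdk namespace, the interface names and the 10.0.0.0/24 addresses are the ones the test uses); it is a readable summary, not the test script:

# target-side interfaces live in a private network namespace
ip netns add nvmf_tgt_ns_spdk
# three veth pairs: one for the initiator side, two for the target listeners
ip link add nvmf_init_if type veth peer name nvmf_init_br
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
# move the target ends into the namespace and assign addresses
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
# bring every link up, inside and outside the namespace
ip link set nvmf_init_if up
ip link set nvmf_init_br up
ip link set nvmf_tgt_br up
ip link set nvmf_tgt_br2 up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up
# join the host-side peers with a bridge and allow NVMe/TCP traffic through
ip link add nvmf_br type bridge
ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br master nvmf_br
ip link set nvmf_tgt_br2 master nvmf_br
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
# connectivity check in both directions, matching the ping output above
ping -c 1 10.0.0.2 && ping -c 1 10.0.0.3
ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1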
00:23:07.849 [2024-12-03 00:57:20.129132] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:23:07.849 [2024-12-03 00:57:20.129150] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:23:08.414 00:57:20 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:08.414 00:57:20 -- common/autotest_common.sh@862 -- # return 0 00:23:08.414 00:57:20 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:23:08.414 00:57:20 -- common/autotest_common.sh@728 -- # xtrace_disable 00:23:08.414 00:57:20 -- common/autotest_common.sh@10 -- # set +x 00:23:08.672 00:57:20 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:08.672 00:57:20 -- host/multipath.sh@33 -- # nvmfapp_pid=99041 00:23:08.672 00:57:20 -- host/multipath.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:23:08.930 [2024-12-03 00:57:21.234021] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:08.930 00:57:21 -- host/multipath.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:23:09.188 Malloc0 00:23:09.188 00:57:21 -- host/multipath.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2 00:23:09.446 00:57:21 -- host/multipath.sh@39 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:23:09.703 00:57:22 -- host/multipath.sh@40 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:23:09.960 [2024-12-03 00:57:22.354154] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:09.960 00:57:22 -- host/multipath.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:23:10.218 [2024-12-03 00:57:22.638596] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:23:10.218 00:57:22 -- host/multipath.sh@43 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 00:23:10.218 00:57:22 -- host/multipath.sh@44 -- # bdevperf_pid=99149 00:23:10.218 00:57:22 -- host/multipath.sh@46 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:23:10.218 00:57:22 -- host/multipath.sh@47 -- # waitforlisten 99149 /var/tmp/bdevperf.sock 00:23:10.218 00:57:22 -- common/autotest_common.sh@829 -- # '[' -z 99149 ']' 00:23:10.218 00:57:22 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:10.218 00:57:22 -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:10.218 00:57:22 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:10.218 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
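Restated for readability, the RPC sequence above configures the target and parks an idle bdevperf instance that will act as the NVMe/TCP host. All arguments are copied from the log; $SPDK_DIR is only a shorthand introduced here for /home/vagrant/spdk_repo/spdk:

# nvmf_tgt runs inside the namespace on cores 0-1 (-m 0x3) with the 0xFFFF tracepoint group mask (-e)
ip netns exec nvmf_tgt_ns_spdk $SPDK_DIR/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 &

# TCP transport; -o and -u 8192 are the NVMF_TRANSPORT_OPTS chosen earlier in the log
$SPDK_DIR/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192

# 64 MB RAM-backed bdev with a 512-byte block size, exported through a single subsystem
$SPDK_DIR/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
# -r enables ANA reporting on the subsystem, which the multipath test depends on
$SPDK_DIR/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2
$SPDK_DIR/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0

# two listeners on the same address but different ports: two paths to the same namespace
$SPDK_DIR/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
$SPDK_DIR/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421

# bdevperf waits idle (-z) on its own RPC socket; once started it runs a 128-deep, 4 KiB verify workload for 90 s
$SPDK_DIR/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 &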
00:23:10.218 00:57:22 -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:10.218 00:57:22 -- common/autotest_common.sh@10 -- # set +x 00:23:11.152 00:57:23 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:11.152 00:57:23 -- common/autotest_common.sh@862 -- # return 0 00:23:11.152 00:57:23 -- host/multipath.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:23:11.411 00:57:23 -- host/multipath.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -l -1 -o 10 00:23:11.978 Nvme0n1 00:23:11.978 00:57:24 -- host/multipath.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:23:12.236 Nvme0n1 00:23:12.236 00:57:24 -- host/multipath.sh@76 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests 00:23:12.236 00:57:24 -- host/multipath.sh@78 -- # sleep 1 00:23:13.170 00:57:25 -- host/multipath.sh@81 -- # set_ANA_state non_optimized optimized 00:23:13.171 00:57:25 -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:23:13.429 00:57:25 -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:23:13.688 00:57:25 -- host/multipath.sh@83 -- # confirm_io_on_port optimized 4421 00:23:13.688 00:57:25 -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 99041 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:23:13.688 00:57:25 -- host/multipath.sh@65 -- # dtrace_pid=99233 00:23:13.688 00:57:25 -- host/multipath.sh@66 -- # sleep 6 00:23:20.255 00:57:31 -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:23:20.255 00:57:31 -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 00:23:20.255 00:57:32 -- host/multipath.sh@67 -- # active_port=4421 00:23:20.255 00:57:32 -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:23:20.255 Attaching 4 probes... 
00:23:20.255 @path[10.0.0.2, 4421]: 21776 00:23:20.255 @path[10.0.0.2, 4421]: 22439 00:23:20.255 @path[10.0.0.2, 4421]: 22180 00:23:20.255 @path[10.0.0.2, 4421]: 22107 00:23:20.255 @path[10.0.0.2, 4421]: 22271 00:23:20.255 00:57:32 -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:23:20.255 00:57:32 -- host/multipath.sh@69 -- # cut -d ']' -f1 00:23:20.255 00:57:32 -- host/multipath.sh@69 -- # sed -n 1p 00:23:20.255 00:57:32 -- host/multipath.sh@69 -- # port=4421 00:23:20.255 00:57:32 -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]] 00:23:20.255 00:57:32 -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]] 00:23:20.255 00:57:32 -- host/multipath.sh@72 -- # kill 99233 00:23:20.255 00:57:32 -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:23:20.255 00:57:32 -- host/multipath.sh@86 -- # set_ANA_state non_optimized inaccessible 00:23:20.255 00:57:32 -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:23:20.255 00:57:32 -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:23:20.255 00:57:32 -- host/multipath.sh@87 -- # confirm_io_on_port non_optimized 4420 00:23:20.255 00:57:32 -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 99041 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:23:20.255 00:57:32 -- host/multipath.sh@65 -- # dtrace_pid=99369 00:23:20.255 00:57:32 -- host/multipath.sh@66 -- # sleep 6 00:23:26.821 00:57:38 -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:23:26.821 00:57:38 -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="non_optimized") | .address.trsvcid' 00:23:26.821 00:57:38 -- host/multipath.sh@67 -- # active_port=4420 00:23:26.821 00:57:38 -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:23:26.821 Attaching 4 probes... 
00:23:26.821 @path[10.0.0.2, 4420]: 21564 00:23:26.821 @path[10.0.0.2, 4420]: 21856 00:23:26.821 @path[10.0.0.2, 4420]: 21895 00:23:26.821 @path[10.0.0.2, 4420]: 21951 00:23:26.821 @path[10.0.0.2, 4420]: 22109 00:23:26.821 00:57:38 -- host/multipath.sh@69 -- # cut -d ']' -f1 00:23:26.821 00:57:38 -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:23:26.821 00:57:38 -- host/multipath.sh@69 -- # sed -n 1p 00:23:26.821 00:57:38 -- host/multipath.sh@69 -- # port=4420 00:23:26.821 00:57:38 -- host/multipath.sh@70 -- # [[ 4420 == \4\4\2\0 ]] 00:23:26.821 00:57:38 -- host/multipath.sh@71 -- # [[ 4420 == \4\4\2\0 ]] 00:23:26.821 00:57:38 -- host/multipath.sh@72 -- # kill 99369 00:23:26.821 00:57:38 -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:23:26.821 00:57:39 -- host/multipath.sh@89 -- # set_ANA_state inaccessible optimized 00:23:26.821 00:57:39 -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:23:26.821 00:57:39 -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:23:27.079 00:57:39 -- host/multipath.sh@90 -- # confirm_io_on_port optimized 4421 00:23:27.079 00:57:39 -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 99041 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:23:27.079 00:57:39 -- host/multipath.sh@65 -- # dtrace_pid=99504 00:23:27.079 00:57:39 -- host/multipath.sh@66 -- # sleep 6 00:23:33.641 00:57:45 -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:23:33.641 00:57:45 -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 00:23:33.641 00:57:45 -- host/multipath.sh@67 -- # active_port=4421 00:23:33.641 00:57:45 -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:23:33.641 Attaching 4 probes... 
00:23:33.641 @path[10.0.0.2, 4421]: 16592 00:23:33.641 @path[10.0.0.2, 4421]: 22210 00:23:33.641 @path[10.0.0.2, 4421]: 22264 00:23:33.641 @path[10.0.0.2, 4421]: 22237 00:23:33.641 @path[10.0.0.2, 4421]: 22278 00:23:33.641 00:57:45 -- host/multipath.sh@69 -- # cut -d ']' -f1 00:23:33.641 00:57:45 -- host/multipath.sh@69 -- # sed -n 1p 00:23:33.641 00:57:45 -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:23:33.642 00:57:45 -- host/multipath.sh@69 -- # port=4421 00:23:33.642 00:57:45 -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]] 00:23:33.642 00:57:45 -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]] 00:23:33.642 00:57:45 -- host/multipath.sh@72 -- # kill 99504 00:23:33.642 00:57:45 -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:23:33.642 00:57:45 -- host/multipath.sh@93 -- # set_ANA_state inaccessible inaccessible 00:23:33.642 00:57:45 -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:23:33.642 00:57:46 -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:23:33.900 00:57:46 -- host/multipath.sh@94 -- # confirm_io_on_port '' '' 00:23:33.900 00:57:46 -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 99041 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:23:33.900 00:57:46 -- host/multipath.sh@65 -- # dtrace_pid=99630 00:23:33.900 00:57:46 -- host/multipath.sh@66 -- # sleep 6 00:23:40.467 00:57:52 -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="") | .address.trsvcid' 00:23:40.467 00:57:52 -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:23:40.467 00:57:52 -- host/multipath.sh@67 -- # active_port= 00:23:40.467 00:57:52 -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:23:40.467 Attaching 4 probes... 
00:23:40.467 00:23:40.467 00:23:40.467 00:23:40.467 00:23:40.467 00:23:40.467 00:57:52 -- host/multipath.sh@69 -- # cut -d ']' -f1 00:23:40.467 00:57:52 -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:23:40.467 00:57:52 -- host/multipath.sh@69 -- # sed -n 1p 00:23:40.467 00:57:52 -- host/multipath.sh@69 -- # port= 00:23:40.467 00:57:52 -- host/multipath.sh@70 -- # [[ '' == '' ]] 00:23:40.467 00:57:52 -- host/multipath.sh@71 -- # [[ '' == '' ]] 00:23:40.467 00:57:52 -- host/multipath.sh@72 -- # kill 99630 00:23:40.467 00:57:52 -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:23:40.467 00:57:52 -- host/multipath.sh@96 -- # set_ANA_state non_optimized optimized 00:23:40.467 00:57:52 -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:23:40.467 00:57:52 -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:23:40.726 00:57:53 -- host/multipath.sh@97 -- # confirm_io_on_port optimized 4421 00:23:40.726 00:57:53 -- host/multipath.sh@65 -- # dtrace_pid=99766 00:23:40.726 00:57:53 -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 99041 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:23:40.726 00:57:53 -- host/multipath.sh@66 -- # sleep 6 00:23:47.312 00:57:59 -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:23:47.312 00:57:59 -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 00:23:47.312 00:57:59 -- host/multipath.sh@67 -- # active_port=4421 00:23:47.312 00:57:59 -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:23:47.312 Attaching 4 probes... 
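Each confirm_io_on_port block above follows the same pattern: attach one bdevperf controller over both listeners, flip their ANA states on the target, and use a bpftrace probe on the target process to check which port the I/O is really flowing through. A simplified reconstruction from the logged commands is shown below; the trace.txt redirection and the variable names reflect how the helper is presumed to work, since only its individual commands appear in the log:

# host side (bdevperf RPC socket): one controller name, two TCP paths, multipath mode on the second attach
rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1
rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -l -1 -o 10
rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10

# target side: advertise one listener as optimized, the other as non_optimized or inaccessible
rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized
rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized

# count I/O per path on the target for a few seconds (nvmf_path.bt prints "@path[ip, port]: count" samples)
scripts/bpftrace.sh "$nvmfapp_pid" scripts/bpf/nvmf_path.bt > trace.txt &
sleep 6

# the port the target reports for the expected ANA state ...
active_port=$(rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 | jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid')
# ... must match the port the probe actually saw traffic on
port=$(awk '$1=="@path[10.0.0.2," {print $2}' trace.txt | cut -d ']' -f1 | sed -n 1p)
[[ "$port" == "$active_port" ]]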
00:23:47.312 @path[10.0.0.2, 4421]: 21540 00:23:47.312 @path[10.0.0.2, 4421]: 21901 00:23:47.312 @path[10.0.0.2, 4421]: 21965 00:23:47.312 @path[10.0.0.2, 4421]: 22003 00:23:47.312 @path[10.0.0.2, 4421]: 22019 00:23:47.312 00:57:59 -- host/multipath.sh@69 -- # sed -n 1p 00:23:47.312 00:57:59 -- host/multipath.sh@69 -- # cut -d ']' -f1 00:23:47.312 00:57:59 -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:23:47.312 00:57:59 -- host/multipath.sh@69 -- # port=4421 00:23:47.312 00:57:59 -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]] 00:23:47.312 00:57:59 -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]] 00:23:47.312 00:57:59 -- host/multipath.sh@72 -- # kill 99766 00:23:47.312 00:57:59 -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:23:47.312 00:57:59 -- host/multipath.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:23:47.312 [2024-12-03 00:57:59.616240] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1eea370 is same with the state(5) to be set 00:23:47.312 [2024-12-03 00:57:59.616338] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1eea370 is same with the state(5) to be set 00:23:47.313 [2024-12-03 00:57:59.616349] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1eea370 is same with the state(5) to be set 00:23:47.313 [2024-12-03 00:57:59.616357] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1eea370 is same with the state(5) to be set 00:23:47.313 [2024-12-03 00:57:59.616364] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1eea370 is same with the state(5) to be set 00:23:47.313 [2024-12-03 00:57:59.616371] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1eea370 is same with the state(5) to be set 00:23:47.313 [2024-12-03 00:57:59.616379] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1eea370 is same with the state(5) to be set 00:23:47.313 [2024-12-03 00:57:59.616385] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1eea370 is same with the state(5) to be set 00:23:47.313 [2024-12-03 00:57:59.616392] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1eea370 is same with the state(5) to be set 00:23:47.313 [2024-12-03 00:57:59.616399] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1eea370 is same with the state(5) to be set 00:23:47.313 [2024-12-03 00:57:59.616407] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1eea370 is same with the state(5) to be set 00:23:47.313 [2024-12-03 00:57:59.616435] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1eea370 is same with the state(5) to be set 00:23:47.313 [2024-12-03 00:57:59.616442] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1eea370 is same with the state(5) to be set 00:23:47.313 [2024-12-03 00:57:59.616449] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1eea370 is same with the state(5) to be set 00:23:47.313 [2024-12-03 00:57:59.616456] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1eea370 is same with the state(5) to be set 00:23:47.313 [2024-12-03 00:57:59.616462] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv 
state of tqpair=0x1eea370 is same with the state(5) to be set 00:23:47.313
[the same tcp.c:1576:nvmf_tcp_qpair_set_recv_state *ERROR* message for tqpair=0x1eea370 is printed several dozen more times at 00:57:59 while the 4421 listener is torn down; the repeated lines are condensed here]
00:23:47.314 00:57:59 -- host/multipath.sh@101 -- # sleep 1 00:23:48.260 00:58:00 -- host/multipath.sh@104 -- # confirm_io_on_port non_optimized 4420 00:58:00 -- host/multipath.sh@65 -- # dtrace_pid=99896 00:58:00 -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 99041 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:58:00 -- host/multipath.sh@66 -- # sleep 6 00:23:54.819 00:58:06 -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:58:06 -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="non_optimized") | .address.trsvcid' 00:58:06 -- host/multipath.sh@67 -- # active_port=4420 00:58:06 -- host/multipath.sh@68 -- # cat
/home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:23:54.819 Attaching 4 probes... 00:23:54.819 @path[10.0.0.2, 4420]: 20876 00:23:54.819 @path[10.0.0.2, 4420]: 21186 00:23:54.819 @path[10.0.0.2, 4420]: 21206 00:23:54.819 @path[10.0.0.2, 4420]: 21183 00:23:54.819 @path[10.0.0.2, 4420]: 21189 00:23:54.819 00:58:06 -- host/multipath.sh@69 -- # cut -d ']' -f1 00:23:54.819 00:58:06 -- host/multipath.sh@69 -- # sed -n 1p 00:23:54.819 00:58:06 -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:23:54.819 00:58:06 -- host/multipath.sh@69 -- # port=4420 00:23:54.819 00:58:06 -- host/multipath.sh@70 -- # [[ 4420 == \4\4\2\0 ]] 00:23:54.819 00:58:06 -- host/multipath.sh@71 -- # [[ 4420 == \4\4\2\0 ]] 00:23:54.819 00:58:06 -- host/multipath.sh@72 -- # kill 99896 00:23:54.819 00:58:06 -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:23:54.819 00:58:06 -- host/multipath.sh@107 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:23:54.819 [2024-12-03 00:58:07.114639] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:23:54.819 00:58:07 -- host/multipath.sh@108 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:23:55.077 00:58:07 -- host/multipath.sh@111 -- # sleep 6 00:24:01.635 00:58:13 -- host/multipath.sh@112 -- # confirm_io_on_port optimized 4421 00:24:01.635 00:58:13 -- host/multipath.sh@65 -- # dtrace_pid=100094 00:24:01.635 00:58:13 -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 99041 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:24:01.635 00:58:13 -- host/multipath.sh@66 -- # sleep 6 00:24:06.896 00:58:19 -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:24:06.896 00:58:19 -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 00:24:07.461 00:58:19 -- host/multipath.sh@67 -- # active_port=4421 00:24:07.461 00:58:19 -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:24:07.461 Attaching 4 probes... 
00:24:07.461 @path[10.0.0.2, 4421]: 21197 00:24:07.461 @path[10.0.0.2, 4421]: 21545 00:24:07.461 @path[10.0.0.2, 4421]: 21579 00:24:07.461 @path[10.0.0.2, 4421]: 21551 00:24:07.461 @path[10.0.0.2, 4421]: 21524 00:24:07.461 00:58:19 -- host/multipath.sh@69 -- # cut -d ']' -f1 00:24:07.461 00:58:19 -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:24:07.461 00:58:19 -- host/multipath.sh@69 -- # sed -n 1p 00:24:07.461 00:58:19 -- host/multipath.sh@69 -- # port=4421 00:24:07.461 00:58:19 -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]] 00:24:07.461 00:58:19 -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]] 00:24:07.461 00:58:19 -- host/multipath.sh@72 -- # kill 100094 00:24:07.461 00:58:19 -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:24:07.461 00:58:19 -- host/multipath.sh@114 -- # killprocess 99149 00:24:07.461 00:58:19 -- common/autotest_common.sh@936 -- # '[' -z 99149 ']' 00:24:07.461 00:58:19 -- common/autotest_common.sh@940 -- # kill -0 99149 00:24:07.461 00:58:19 -- common/autotest_common.sh@941 -- # uname 00:24:07.461 00:58:19 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:24:07.461 00:58:19 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 99149 00:24:07.461 killing process with pid 99149 00:24:07.461 00:58:19 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:24:07.461 00:58:19 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:24:07.461 00:58:19 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 99149' 00:24:07.461 00:58:19 -- common/autotest_common.sh@955 -- # kill 99149 00:24:07.461 00:58:19 -- common/autotest_common.sh@960 -- # wait 99149 00:24:07.461 Connection closed with partial response: 00:24:07.461 00:24:07.461 00:24:07.738 00:58:19 -- host/multipath.sh@116 -- # wait 99149 00:24:07.738 00:58:19 -- host/multipath.sh@118 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:24:07.738 [2024-12-03 00:57:22.696876] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:24:07.738 [2024-12-03 00:57:22.696968] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid99149 ] 00:24:07.738 [2024-12-03 00:57:22.833918] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:07.738 [2024-12-03 00:57:22.896692] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:24:07.738 Running I/O for 90 seconds... 
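The try.txt dump that follows is bdevperf's own log for the 90-second verify run. The entries come in pairs: nvme_qpair.c prints the submitted READ/WRITE command and then its completion, and while a path's listener is reported inaccessible the completions carry the path-related status ASYMMETRIC ACCESS INACCESSIBLE (the 03/02 pair in each line), which the host-side multipath logic reacts to by retrying the I/O on the other listener. If only a summary of such a capture is needed, a one-liner along these lines (illustrative, not part of the test) does the job:

# tally completion statuses in the captured bdevperf log
grep -o 'ASYMMETRIC ACCESS [A-Z]*' /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt | sort | uniq -c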
00:24:07.738 [2024-12-03 00:57:32.697655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:44072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:07.738 [2024-12-03 00:57:32.697744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:24:07.738 [2024-12-03 00:57:32.697815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:44080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:07.738 [2024-12-03 00:57:32.697841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:24:07.738 [2024-12-03 00:57:32.697864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:44088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.738 [2024-12-03 00:57:32.697881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:24:07.738 [2024-12-03 00:57:32.697903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:44096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.738 [2024-12-03 00:57:32.697920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:24:07.738 [2024-12-03 00:57:32.697941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:43536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.738 [2024-12-03 00:57:32.697958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:24:07.738 [2024-12-03 00:57:32.697980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:43544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.738 [2024-12-03 00:57:32.697997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:24:07.738 [2024-12-03 00:57:32.698019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:43584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.738 [2024-12-03 00:57:32.698036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:24:07.738 [2024-12-03 00:57:32.698057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:43600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.739 [2024-12-03 00:57:32.698074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:24:07.739 [2024-12-03 00:57:32.698096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:43608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.739 [2024-12-03 00:57:32.698113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:24:07.739 [2024-12-03 00:57:32.698135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:43616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.739 [2024-12-03 00:57:32.698153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:24:07.739
[the captured bdevperf log continues with several hundred further nvme_qpair.c print_command / print_completion *NOTICE* pairs of the same shape, READ and WRITE commands on qid:1 completing with ASYMMETRIC ACCESS INACCESSIBLE (03/02); only the surrounding entries are kept here]
00:24:07.740 [2024-12-03 00:57:32.701471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:44416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.740 [2024-12-03 00:57:32.701502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:24:07.740 [2024-12-03 00:57:32.701528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:44424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:07.740 [2024-12-03 00:57:32.701547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:24:07.740 [2024-12-03 00:57:32.701570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:44432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:07.740 [2024-12-03 00:57:32.701589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:24:07.740 [2024-12-03 00:57:32.701613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:44440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:07.740 [2024-12-03 00:57:32.701652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:24:07.740 [2024-12-03 00:57:32.701677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:44448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:07.740 [2024-12-03 00:57:32.701695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:24:07.740 [2024-12-03 00:57:32.701718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:44456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:07.740 [2024-12-03 00:57:32.701736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:24:07.740 [2024-12-03 00:57:32.701760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:44464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.740 [2024-12-03 00:57:32.701778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:24:07.740 [2024-12-03 00:57:32.701801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:44472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:07.740 [2024-12-03 00:57:32.701835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:24:07.740 [2024-12-03 00:57:32.701857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:44480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:07.740 [2024-12-03 00:57:32.701874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:24:07.740 [2024-12-03 00:57:32.701895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:43872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.740 [2024-12-03 00:57:32.701913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:24:07.740 [2024-12-03 00:57:32.701935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:43912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.740 [2024-12-03 00:57:32.701953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:24:07.740 [2024-12-03 00:57:32.701975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:43936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.740 [2024-12-03 00:57:32.701993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:24:07.740 [2024-12-03 00:57:32.702015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:43968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.740 [2024-12-03 00:57:32.702032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:24:07.740 [2024-12-03 00:57:32.702066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:43992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.740 [2024-12-03 00:57:32.702086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:24:07.740 [2024-12-03 00:57:32.702107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:44000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.740 [2024-12-03 00:57:32.702132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:24:07.740 [2024-12-03 00:57:32.702157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:44040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.740 [2024-12-03 00:57:32.702175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:24:07.740 [2024-12-03 00:57:32.702726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:44048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.740 [2024-12-03 00:57:32.702755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:24:07.740 [2024-12-03 00:57:32.702782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:44488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.740 [2024-12-03 00:57:32.702802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:24:07.740 [2024-12-03 00:57:32.702826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:44496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.740 [2024-12-03 00:57:32.702845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:24:07.740 [2024-12-03 00:57:32.702867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:44504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.740 [2024-12-03 00:57:32.702884] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:24:07.740 [2024-12-03 00:57:32.702907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:44512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:07.740 [2024-12-03 00:57:32.702924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:24:07.740 [2024-12-03 00:57:32.702946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:44520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.740 [2024-12-03 00:57:32.702964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:24:07.740 [2024-12-03 00:57:32.702985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:44528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:07.740 [2024-12-03 00:57:32.703002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:24:07.740 [2024-12-03 00:57:32.703024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:44536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.741 [2024-12-03 00:57:32.703042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:24:07.741 [2024-12-03 00:57:32.703065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:44544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:07.741 [2024-12-03 00:57:32.703082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:24:07.741 [2024-12-03 00:57:32.703117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:44552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:07.741 [2024-12-03 00:57:32.703135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:24:07.741 [2024-12-03 00:57:32.703157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:44560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.741 [2024-12-03 00:57:32.703174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:24:07.741 [2024-12-03 00:57:32.703196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:44568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.741 [2024-12-03 00:57:32.703212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:24:07.741 [2024-12-03 00:57:32.703235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:44576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.741 [2024-12-03 00:57:32.703252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:24:07.741 [2024-12-03 00:57:32.703274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:44584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:24:07.741 [2024-12-03 00:57:32.703291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:07.741 [2024-12-03 00:57:32.703313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:44592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:07.741 [2024-12-03 00:57:32.703332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:07.741 [2024-12-03 00:57:32.703355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:44600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:07.741 [2024-12-03 00:57:32.703373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:24:07.741 [2024-12-03 00:57:32.703396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:44608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:07.741 [2024-12-03 00:57:32.703430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:24:07.741 [2024-12-03 00:57:32.703458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:44616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:07.741 [2024-12-03 00:57:32.703477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:24:07.741 [2024-12-03 00:57:32.703499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:44624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.741 [2024-12-03 00:57:32.703517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:24:07.741 [2024-12-03 00:57:32.703539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:44632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:07.741 [2024-12-03 00:57:32.703557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:24:07.741 [2024-12-03 00:57:32.703578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:44072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:07.741 [2024-12-03 00:57:32.703595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:24:07.741 [2024-12-03 00:57:32.703617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:44080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:07.741 [2024-12-03 00:57:32.703643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:24:07.741 [2024-12-03 00:57:32.703667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:44088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.741 [2024-12-03 00:57:32.703686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:24:07.741 [2024-12-03 00:57:32.703707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:44096 
len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.741 [2024-12-03 00:57:32.703725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:24:07.741 [2024-12-03 00:57:32.703748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:43536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.741 [2024-12-03 00:57:32.703764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:24:07.741 [2024-12-03 00:57:32.703787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:43544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.741 [2024-12-03 00:57:32.703804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:24:07.741 [2024-12-03 00:57:32.703827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:43584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.741 [2024-12-03 00:57:32.703844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:24:07.741 [2024-12-03 00:57:32.703865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:43600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.741 [2024-12-03 00:57:32.703882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:24:07.741 [2024-12-03 00:57:32.703904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:43608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.741 [2024-12-03 00:57:32.703921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:24:07.741 [2024-12-03 00:57:32.703943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:43616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.741 [2024-12-03 00:57:32.703960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:24:07.741 [2024-12-03 00:57:32.703983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:43632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.741 [2024-12-03 00:57:32.704000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:24:07.741 [2024-12-03 00:57:32.704022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:43664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.741 [2024-12-03 00:57:32.704038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:24:07.741 [2024-12-03 00:57:32.704062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:44104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.741 [2024-12-03 00:57:32.704080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:24:07.741 [2024-12-03 00:57:32.704101] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:44112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:07.741 [2024-12-03 00:57:32.704128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:24:07.741 [2024-12-03 00:57:32.704151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:44120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:07.741 [2024-12-03 00:57:32.704169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:24:07.741 [2024-12-03 00:57:32.704191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:44128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:07.741 [2024-12-03 00:57:32.704208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:24:07.741 [2024-12-03 00:57:32.704231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:44136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.741 [2024-12-03 00:57:32.704248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:24:07.741 [2024-12-03 00:57:32.704271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:44144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:07.741 [2024-12-03 00:57:32.704288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:24:07.741 [2024-12-03 00:57:32.704310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:44152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:07.741 [2024-12-03 00:57:32.704327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:24:07.741 [2024-12-03 00:57:32.704348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:44160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:07.741 [2024-12-03 00:57:32.704365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:24:07.741 [2024-12-03 00:57:32.704387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:44168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:07.741 [2024-12-03 00:57:32.704405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:24:07.741 [2024-12-03 00:57:32.704443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:44176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.741 [2024-12-03 00:57:32.704461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:24:07.741 [2024-12-03 00:57:32.704483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:44184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.741 [2024-12-03 00:57:32.704500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:001d p:0 m:0 dnr:0 
00:24:07.741 [2024-12-03 00:57:32.704522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:44192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.741 [2024-12-03 00:57:32.704540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:24:07.741 [2024-12-03 00:57:32.704561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:44200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.741 [2024-12-03 00:57:32.704577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:24:07.742 [2024-12-03 00:57:32.704599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:44208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:07.742 [2024-12-03 00:57:32.704616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:24:07.742 [2024-12-03 00:57:32.704653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:44216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:07.742 [2024-12-03 00:57:32.704672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:07.742 [2024-12-03 00:57:32.704694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:44224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.742 [2024-12-03 00:57:32.704711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:07.742 [2024-12-03 00:57:32.704733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:44232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:07.742 [2024-12-03 00:57:32.704751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:24:07.742 [2024-12-03 00:57:32.704773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:43720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.742 [2024-12-03 00:57:32.704791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:24:07.742 [2024-12-03 00:57:32.704813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:43744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.742 [2024-12-03 00:57:32.704838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:24:07.742 [2024-12-03 00:57:32.704860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:43768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.742 [2024-12-03 00:57:32.704877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:24:07.742 [2024-12-03 00:57:32.704903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:43816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.742 [2024-12-03 00:57:32.704922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:24:07.742 [2024-12-03 00:57:32.704944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:43824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.742 [2024-12-03 00:57:32.704962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:24:07.742 [2024-12-03 00:57:32.704984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:43832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.742 [2024-12-03 00:57:32.705002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:24:07.742 [2024-12-03 00:57:32.705025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:43848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.742 [2024-12-03 00:57:32.705042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:24:07.742 [2024-12-03 00:57:32.705521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:43856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.742 [2024-12-03 00:57:32.705550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:24:07.742 [2024-12-03 00:57:32.705578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:44640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:07.742 [2024-12-03 00:57:32.705598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:24:07.742 [2024-12-03 00:57:32.705634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:44648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.742 [2024-12-03 00:57:32.705654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:24:07.742 [2024-12-03 00:57:32.705676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:44656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.742 [2024-12-03 00:57:32.705693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:24:07.742 [2024-12-03 00:57:32.705716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:44664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.742 [2024-12-03 00:57:32.705733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:24:07.742 [2024-12-03 00:57:32.705756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:44672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:07.742 [2024-12-03 00:57:32.705774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:24:07.742 [2024-12-03 00:57:32.705796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:44680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.742 [2024-12-03 00:57:32.705814] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:24:07.742 [2024-12-03 00:57:32.705836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:44688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:07.742 [2024-12-03 00:57:32.705853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:24:07.742 [2024-12-03 00:57:32.705876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:44696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.742 [2024-12-03 00:57:32.705894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:24:07.742 [2024-12-03 00:57:32.705915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:44704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:07.742 [2024-12-03 00:57:32.705932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:24:07.742 [2024-12-03 00:57:32.705955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:44712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.742 [2024-12-03 00:57:32.705972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:24:07.742 [2024-12-03 00:57:32.705995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:44720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.742 [2024-12-03 00:57:32.706012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:24:07.742 [2024-12-03 00:57:32.706035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:44728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:07.742 [2024-12-03 00:57:32.706052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:24:07.742 [2024-12-03 00:57:32.706074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:44736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.742 [2024-12-03 00:57:32.706092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:24:07.742 [2024-12-03 00:57:32.706131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:44744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.742 [2024-12-03 00:57:32.706150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:24:07.742 [2024-12-03 00:57:32.706172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:44752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.742 [2024-12-03 00:57:32.706190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:24:07.742 [2024-12-03 00:57:32.706224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:44760 len:8 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:24:07.742 [2024-12-03 00:57:32.706243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:24:07.742 [2024-12-03 00:57:32.706265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:44768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.742 [2024-12-03 00:57:32.706284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:24:07.742 [2024-12-03 00:57:32.706305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:44776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:07.742 [2024-12-03 00:57:32.706322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:24:07.742 [2024-12-03 00:57:32.706344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:44784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:07.742 [2024-12-03 00:57:32.706362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:24:07.742 [2024-12-03 00:57:32.706383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:44792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:07.742 [2024-12-03 00:57:32.706402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:24:07.742 [2024-12-03 00:57:32.706440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:44800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:07.742 [2024-12-03 00:57:32.706459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:24:07.742 [2024-12-03 00:57:32.706481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:44808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:07.742 [2024-12-03 00:57:32.706504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:07.742 [2024-12-03 00:57:32.706527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:44816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:07.742 [2024-12-03 00:57:32.706544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:24:07.742 [2024-12-03 00:57:32.706575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:44824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.742 [2024-12-03 00:57:32.706599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:24:07.742 [2024-12-03 00:57:32.706623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:44832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:07.742 [2024-12-03 00:57:32.706640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:24:07.742 [2024-12-03 00:57:32.706663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 
nsid:1 lba:44840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.742 [2024-12-03 00:57:32.706691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:24:07.743 [2024-12-03 00:57:32.706714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:44848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:07.743 [2024-12-03 00:57:32.706733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:24:07.743 [2024-12-03 00:57:32.706754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:44856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:07.743 [2024-12-03 00:57:32.706771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:24:07.743 [2024-12-03 00:57:32.706793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:44864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:07.743 [2024-12-03 00:57:32.706811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:24:07.743 [2024-12-03 00:57:32.706832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:44872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:07.743 [2024-12-03 00:57:32.706850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:24:07.743 [2024-12-03 00:57:32.706872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:44880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.743 [2024-12-03 00:57:32.706889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:24:07.743 [2024-12-03 00:57:32.706911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:44888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:07.743 [2024-12-03 00:57:32.706928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:24:07.743 [2024-12-03 00:57:32.706950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:44896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:07.743 [2024-12-03 00:57:32.706967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:24:07.743 [2024-12-03 00:57:32.706989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:44240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.743 [2024-12-03 00:57:32.707007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:24:07.743 [2024-12-03 00:57:32.707030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:44248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.743 [2024-12-03 00:57:32.707046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:24:07.743 [2024-12-03 00:57:32.707069] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:44256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:07.743 [2024-12-03 00:57:32.707087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:24:07.743 [2024-12-03 00:57:32.707108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:44264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:07.743 [2024-12-03 00:57:32.707126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:24:07.743 [2024-12-03 00:57:32.707148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:44272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.743 [2024-12-03 00:57:32.707174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:24:07.743 [2024-12-03 00:57:32.707199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:44280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:07.743 [2024-12-03 00:57:32.707217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:24:07.743 [2024-12-03 00:57:32.707246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:44288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:07.743 [2024-12-03 00:57:32.707265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:24:07.743 [2024-12-03 00:57:32.707287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:44296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:07.743 [2024-12-03 00:57:32.707304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:24:07.743 [2024-12-03 00:57:32.707326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:44304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:07.743 [2024-12-03 00:57:32.707343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:24:07.743 [2024-12-03 00:57:32.707366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:44312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:07.743 [2024-12-03 00:57:32.707383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:24:07.743 [2024-12-03 00:57:32.707405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:44320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.743 [2024-12-03 00:57:32.707437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:24:07.743 [2024-12-03 00:57:32.707462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:44328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:07.743 [2024-12-03 00:57:32.707480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 
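The repeated NOTICE pairs in this stretch of the log come from SPDK's nvme_qpair.c: each nvme_io_qpair_print_command line prints a queued READ or WRITE on I/O queue 1, and the matching spdk_nvme_print_completion line shows that command being completed with the NVMe path-related status ASYMMETRIC ACCESS INACCESSIBLE — status code type 0x3, status code 0x2, which is what the "(03/02)" in each completion line abbreviates — i.e. the target is reporting the namespace's ANA group as inaccessible on this path. What follows is only a minimal illustrative sketch (not part of the SPDK test suite) for summarizing this flood of notices, assuming nothing beyond the line format visible above; it reads the console log on stdin and tallies completions by opcode and status.

import re
import sys
from collections import Counter

# The wrapped console output splits some entries across physical lines,
# so whitespace is normalised first and each nvme_qpair NOTICE entry is
# then matched in document order.
ENTRY_RE = re.compile(
    r"nvme_io_qpair_print_command: \*NOTICE\*: (?P<op>READ|WRITE) sqid:(?P<sqid>\d+) cid:(?P<ccid>\d+)"
    r"|spdk_nvme_print_completion: \*NOTICE\*: (?P<status>.+?) \((?P<sct>\w+)/(?P<sc>\w+)\) qid:(?P<qid>\d+) cid:(?P<pcid>\d+)"
)

def summarize(text):
    text = " ".join(text.split())   # undo line wrapping inside entries
    pending = {}                    # (qid, cid) -> opcode of the last printed command
    tally = Counter()               # (opcode, completion status) -> count
    for m in ENTRY_RE.finditer(text):
        if m.group("op"):
            pending[(m.group("sqid"), m.group("ccid"))] = m.group("op")
        else:
            opcode = pending.pop((m.group("qid"), m.group("pcid")), "UNKNOWN")
            status = f"{m.group('status')} ({m.group('sct')}/{m.group('sc')})"
            tally[(opcode, status)] += 1
    return tally

if __name__ == "__main__":
    for (opcode, status), count in summarize(sys.stdin.read()).most_common():
        print(f"{count:6d}  {opcode:6s}  {status}")

Run over this portion of the log it would simply report how many READ and how many WRITE completions carried the ASYMMETRIC ACCESS INACCESSIBLE (03/02) status, which is the expected pattern while the test drives the controller's ANA state away from optimized.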
00:24:07.743 [2024-12-03 00:57:32.707502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:44336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:07.743 [2024-12-03 00:57:32.707520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:24:07.743 [2024-12-03 00:57:32.707541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:44344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:07.743 [2024-12-03 00:57:32.707558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:24:07.743 [2024-12-03 00:57:32.707580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:44352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:07.743 [2024-12-03 00:57:32.707597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:24:07.743 [2024-12-03 00:57:32.707619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:44360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:07.743 [2024-12-03 00:57:32.707636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:24:07.743 [2024-12-03 00:57:32.707658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:44368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:07.743 [2024-12-03 00:57:32.707676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:24:07.743 [2024-12-03 00:57:32.707708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:44376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.743 [2024-12-03 00:57:32.707726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:24:07.743 [2024-12-03 00:57:32.707748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:44384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.743 [2024-12-03 00:57:32.707765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:24:07.743 [2024-12-03 00:57:32.707786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:44392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:07.743 [2024-12-03 00:57:32.707803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:24:07.743 [2024-12-03 00:57:32.707824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:44400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:07.743 [2024-12-03 00:57:32.707841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:07.743 [2024-12-03 00:57:32.707862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:44408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:07.743 [2024-12-03 00:57:32.707880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:107 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:07.743 [2024-12-03 00:57:32.707909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:44416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.743 [2024-12-03 00:57:32.707928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:24:07.743 [2024-12-03 00:57:32.707952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:44424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:07.743 [2024-12-03 00:57:32.707969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:24:07.743 [2024-12-03 00:57:32.707991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:44432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:07.744 [2024-12-03 00:57:32.708008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:24:07.744 [2024-12-03 00:57:32.708030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:44440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:07.744 [2024-12-03 00:57:32.708047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:24:07.744 [2024-12-03 00:57:32.708069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:44448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:07.744 [2024-12-03 00:57:32.708087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:24:07.744 [2024-12-03 00:57:32.708108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:44456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:07.744 [2024-12-03 00:57:32.708125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:24:07.744 [2024-12-03 00:57:32.708147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:44464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.744 [2024-12-03 00:57:32.708164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:24:07.744 [2024-12-03 00:57:32.708196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:44472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:07.744 [2024-12-03 00:57:32.708214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:24:07.744 [2024-12-03 00:57:32.708236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:44480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:07.744 [2024-12-03 00:57:32.708254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:24:07.744 [2024-12-03 00:57:32.708275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:43872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.744 [2024-12-03 00:57:32.708292] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:24:07.744 [2024-12-03 00:57:32.708313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:43912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.744 [2024-12-03 00:57:32.708331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:24:07.744 [2024-12-03 00:57:32.708353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:43936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.744 [2024-12-03 00:57:32.708370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:24:07.744 [2024-12-03 00:57:32.708392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:43968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.744 [2024-12-03 00:57:32.708421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:24:07.744 [2024-12-03 00:57:32.708446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:43992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.744 [2024-12-03 00:57:32.708476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:24:07.744 [2024-12-03 00:57:32.708500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:44000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.744 [2024-12-03 00:57:32.708517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:24:07.744 [2024-12-03 00:57:32.709117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:44040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.744 [2024-12-03 00:57:32.709145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:24:07.744 [2024-12-03 00:57:32.709173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:44048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.744 [2024-12-03 00:57:32.709192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:24:07.744 [2024-12-03 00:57:32.709216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:44488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.744 [2024-12-03 00:57:32.709234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:24:07.744 [2024-12-03 00:57:32.709256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:44496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.744 [2024-12-03 00:57:32.709273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:24:07.744 [2024-12-03 00:57:32.709294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:44504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:24:07.744 [2024-12-03 00:57:32.709325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:24:07.744 [2024-12-03 00:57:32.709349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:44512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:07.744 [2024-12-03 00:57:32.709368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:24:07.744 [2024-12-03 00:57:32.709390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:44520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.744 [2024-12-03 00:57:32.709407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:24:07.744 [2024-12-03 00:57:32.709446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:44528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:07.744 [2024-12-03 00:57:32.709465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:24:07.744 [2024-12-03 00:57:32.709487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:44536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.744 [2024-12-03 00:57:32.709528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:24:07.744 [2024-12-03 00:57:32.709550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:44544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:07.744 [2024-12-03 00:57:32.709569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:24:07.744 [2024-12-03 00:57:32.709592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:44552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:07.744 [2024-12-03 00:57:32.709611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:24:07.744 [2024-12-03 00:57:32.709633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:44560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.744 [2024-12-03 00:57:32.709652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:24:07.744 [2024-12-03 00:57:32.709674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:44568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.744 [2024-12-03 00:57:32.709693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:24:07.744 [2024-12-03 00:57:32.709716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:44576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.744 [2024-12-03 00:57:32.709734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:24:07.744 [2024-12-03 00:57:32.709758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 
nsid:1 lba:44584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.744 [2024-12-03 00:57:32.709792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:07.744 [2024-12-03 00:57:32.709821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:44592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:07.744 [2024-12-03 00:57:32.709841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:07.744 [2024-12-03 00:57:32.709864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:44600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:07.744 [2024-12-03 00:57:32.709891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:24:07.744 [2024-12-03 00:57:32.709917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:44608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:07.744 [2024-12-03 00:57:32.709951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:24:07.744 [2024-12-03 00:57:32.709973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:44616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:07.744 [2024-12-03 00:57:32.709990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:24:07.744 [2024-12-03 00:57:32.710012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:44624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.744 [2024-12-03 00:57:32.710029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:24:07.744 [2024-12-03 00:57:32.710051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:44632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:07.744 [2024-12-03 00:57:32.710068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:24:07.744 [2024-12-03 00:57:32.710090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:44072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:07.744 [2024-12-03 00:57:32.710108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:24:07.744 [2024-12-03 00:57:32.710129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:44080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:07.744 [2024-12-03 00:57:32.710148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:24:07.744 [2024-12-03 00:57:32.710170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:44088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.744 [2024-12-03 00:57:32.710187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:24:07.744 [2024-12-03 00:57:32.710237] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:44096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.744 [2024-12-03 00:57:32.710274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:24:07.744 [2024-12-03 00:57:32.710297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:43536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.745 [2024-12-03 00:57:32.710316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:24:07.745 [2024-12-03 00:57:32.710339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:43544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.745 [2024-12-03 00:57:32.710357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:24:07.745 [2024-12-03 00:57:32.710380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:43584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.745 [2024-12-03 00:57:32.710399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:24:07.745 [2024-12-03 00:57:32.710430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:43600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.745 [2024-12-03 00:57:32.710463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:24:07.745 [2024-12-03 00:57:32.710500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:43608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.745 [2024-12-03 00:57:32.710520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:24:07.745 [2024-12-03 00:57:32.710543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:43616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.745 [2024-12-03 00:57:32.710569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:24:07.745 [2024-12-03 00:57:32.710611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:43632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.745 [2024-12-03 00:57:32.710629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:24:07.745 [2024-12-03 00:57:32.710652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:43664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.745 [2024-12-03 00:57:32.710670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:24:07.745 [2024-12-03 00:57:32.710693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:44104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.745 [2024-12-03 00:57:32.710711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0013 p:0 m:0 
dnr:0 00:24:07.745 [2024-12-03 00:57:32.710733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:44112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:07.745 [2024-12-03 00:57:32.710766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:24:07.745 [2024-12-03 00:57:32.710788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:44120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:07.745 [2024-12-03 00:57:32.710816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:24:07.745 [2024-12-03 00:57:32.710837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:44128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:07.745 [2024-12-03 00:57:32.710855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:24:07.745 [2024-12-03 00:57:32.710876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:44136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.745 [2024-12-03 00:57:32.710893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:24:07.745 [2024-12-03 00:57:32.710915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:44144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:07.745 [2024-12-03 00:57:32.710933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:24:07.745 [2024-12-03 00:57:32.710956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:44152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:07.745 [2024-12-03 00:57:32.710972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:24:07.745 [2024-12-03 00:57:32.710994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:44160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:07.745 [2024-12-03 00:57:32.723112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:24:07.745 [2024-12-03 00:57:32.723180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:44168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:07.745 [2024-12-03 00:57:32.723204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:24:07.745 [2024-12-03 00:57:32.723229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:44176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.745 [2024-12-03 00:57:32.723247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:24:07.745 [2024-12-03 00:57:32.723271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:44184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.745 [2024-12-03 00:57:32.723290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:84 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:24:07.745 [2024-12-03 00:57:32.723314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:44192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.745 [2024-12-03 00:57:32.723332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:24:07.745 [2024-12-03 00:57:32.723355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:44200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.745 [2024-12-03 00:57:32.723373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:24:07.745 [2024-12-03 00:57:32.723396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:44208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:07.745 [2024-12-03 00:57:32.723447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:24:07.745 [2024-12-03 00:57:32.723476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:44216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:07.745 [2024-12-03 00:57:32.723494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:07.745 [2024-12-03 00:57:32.723516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:44224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.745 [2024-12-03 00:57:32.723534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:07.745 [2024-12-03 00:57:32.723556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:44232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:07.745 [2024-12-03 00:57:32.723572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:24:07.745 [2024-12-03 00:57:32.723594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:43720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.745 [2024-12-03 00:57:32.723612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:24:07.745 [2024-12-03 00:57:32.723634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:43744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.745 [2024-12-03 00:57:32.723651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:24:07.745 [2024-12-03 00:57:32.723673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:43768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.745 [2024-12-03 00:57:32.723689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:24:07.745 [2024-12-03 00:57:32.723711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:43816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.745 [2024-12-03 00:57:32.723738] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:24:07.745 [2024-12-03 00:57:32.723773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:43824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.745 [2024-12-03 00:57:32.723800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:24:07.745 [2024-12-03 00:57:32.723822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:43832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.745 [2024-12-03 00:57:32.723840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:24:07.745 [2024-12-03 00:57:32.724362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:43848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.745 [2024-12-03 00:57:32.724392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:24:07.745 [2024-12-03 00:57:32.724449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:43856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.745 [2024-12-03 00:57:32.724474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:24:07.745 [2024-12-03 00:57:32.724498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:44640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:07.745 [2024-12-03 00:57:32.724515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:24:07.745 [2024-12-03 00:57:32.724537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:44648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.745 [2024-12-03 00:57:32.724554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:24:07.745 [2024-12-03 00:57:32.724575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:44656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.745 [2024-12-03 00:57:32.724593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:24:07.745 [2024-12-03 00:57:32.724614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:44664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.745 [2024-12-03 00:57:32.724631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:24:07.745 [2024-12-03 00:57:32.724653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:44672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:07.745 [2024-12-03 00:57:32.724671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:24:07.746 [2024-12-03 00:57:32.724693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:44680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:24:07.746 [2024-12-03 00:57:32.724710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:24:07.746 [2024-12-03 00:57:32.724732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:44688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:07.746 [2024-12-03 00:57:32.724749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:24:07.746 [2024-12-03 00:57:32.724771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:44696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.746 [2024-12-03 00:57:32.724802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:24:07.746 [2024-12-03 00:57:32.724827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:44704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:07.746 [2024-12-03 00:57:32.724844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:24:07.746 [2024-12-03 00:57:32.724865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:44712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.746 [2024-12-03 00:57:32.724884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:24:07.746 [2024-12-03 00:57:32.724905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:44720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.746 [2024-12-03 00:57:32.724923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:24:07.746 [2024-12-03 00:57:32.724944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:44728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:07.746 [2024-12-03 00:57:32.724961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:24:07.746 [2024-12-03 00:57:32.724982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:44736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.746 [2024-12-03 00:57:32.724999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:24:07.746 [2024-12-03 00:57:32.725020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:44744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.746 [2024-12-03 00:57:32.725036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:24:07.746 [2024-12-03 00:57:32.725057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:44752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.746 [2024-12-03 00:57:32.725074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:24:07.746 [2024-12-03 00:57:32.725097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 
nsid:1 lba:44760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:07.746 [2024-12-03 00:57:32.725114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:24:07.746 [2024-12-03 00:57:32.725135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:44768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.746 [2024-12-03 00:57:32.725152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:24:07.746 [2024-12-03 00:57:32.725174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:44776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:07.746 [2024-12-03 00:57:32.725190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:24:07.746 [2024-12-03 00:57:32.725212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:44784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:07.746 [2024-12-03 00:57:32.725229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:24:07.746 [2024-12-03 00:57:32.725250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:44792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:07.746 [2024-12-03 00:57:32.725267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:24:07.746 [2024-12-03 00:57:32.725297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:44800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:07.746 [2024-12-03 00:57:32.725316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:24:07.746 [2024-12-03 00:57:32.725338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:44808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:07.746 [2024-12-03 00:57:32.725355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:07.746 [2024-12-03 00:57:32.725377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:44816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:07.746 [2024-12-03 00:57:32.725394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:24:07.746 [2024-12-03 00:57:32.725441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:44824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.746 [2024-12-03 00:57:32.725464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:24:07.746 [2024-12-03 00:57:32.725486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:44832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:07.746 [2024-12-03 00:57:32.725504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:24:07.746 [2024-12-03 00:57:32.725525] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:44840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.746 [2024-12-03 00:57:32.725542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:24:07.746 [2024-12-03 00:57:32.725564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:44848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:07.746 [2024-12-03 00:57:32.725581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:24:07.746 [2024-12-03 00:57:32.725602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:44856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:07.746 [2024-12-03 00:57:32.725618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:24:07.746 [2024-12-03 00:57:32.725640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:44864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:07.746 [2024-12-03 00:57:32.725657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:24:07.746 [2024-12-03 00:57:32.725678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:44872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:07.746 [2024-12-03 00:57:32.725695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:24:07.746 [2024-12-03 00:57:32.725717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:44880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.746 [2024-12-03 00:57:32.725733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:24:07.746 [2024-12-03 00:57:32.725756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:44888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:07.746 [2024-12-03 00:57:32.725772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:24:07.746 [2024-12-03 00:57:32.725809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:44896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:07.746 [2024-12-03 00:57:32.725826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:24:07.746 [2024-12-03 00:57:32.725849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:44240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.746 [2024-12-03 00:57:32.725865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:24:07.746 [2024-12-03 00:57:32.725887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:44248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.746 [2024-12-03 00:57:32.725904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:004e p:0 m:0 dnr:0 
00:24:07.746 [2024-12-03 00:57:32.725926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:44256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:07.746 [2024-12-03 00:57:32.725943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:24:07.746 [2024-12-03 00:57:32.725964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:44264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:07.746 [2024-12-03 00:57:32.725982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:24:07.746 [2024-12-03 00:57:32.726003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:44272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.746 [2024-12-03 00:57:32.726020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:24:07.746 [2024-12-03 00:57:32.726043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:44280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:07.746 [2024-12-03 00:57:32.726059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:24:07.746 [2024-12-03 00:57:32.726082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:44288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:07.746 [2024-12-03 00:57:32.726098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:24:07.746 [2024-12-03 00:57:32.726120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:44296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:07.746 [2024-12-03 00:57:32.726137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:24:07.746 [2024-12-03 00:57:32.726158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:44304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:07.746 [2024-12-03 00:57:32.726175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:24:07.746 [2024-12-03 00:57:32.726210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:44312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:07.746 [2024-12-03 00:57:32.726231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:24:07.746 [2024-12-03 00:57:32.726255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:44320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.747 [2024-12-03 00:57:32.726272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:24:07.747 [2024-12-03 00:57:32.726294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:44328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:07.747 [2024-12-03 00:57:32.726323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:55 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:24:07.747 [2024-12-03 00:57:32.726347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:44336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:07.747 [2024-12-03 00:57:32.726364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:24:07.747 [2024-12-03 00:57:32.726386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:44344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:07.747 [2024-12-03 00:57:32.726403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:24:07.747 [2024-12-03 00:57:32.726441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:44352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:07.747 [2024-12-03 00:57:32.726460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:24:07.747 [2024-12-03 00:57:32.726482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:44360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:07.747 [2024-12-03 00:57:32.726499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:24:07.747 [2024-12-03 00:57:32.726521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:44368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:07.747 [2024-12-03 00:57:32.726537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:24:07.747 [2024-12-03 00:57:32.726560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:44376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.747 [2024-12-03 00:57:32.726577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:24:07.747 [2024-12-03 00:57:32.726600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:44384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.747 [2024-12-03 00:57:32.726618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:24:07.747 [2024-12-03 00:57:32.726640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:44392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:07.747 [2024-12-03 00:57:32.726658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:24:07.747 [2024-12-03 00:57:32.726679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:44400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:07.747 [2024-12-03 00:57:32.726697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:07.747 [2024-12-03 00:57:32.726719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:44408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:07.747 [2024-12-03 00:57:32.726735] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:07.747 [2024-12-03 00:57:32.726756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:44416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.747 [2024-12-03 00:57:32.726772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:24:07.747 [2024-12-03 00:57:32.726794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:44424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:07.747 [2024-12-03 00:57:32.726820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:24:07.747 [2024-12-03 00:57:32.726844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:44432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:07.747 [2024-12-03 00:57:32.726861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:24:07.747 [2024-12-03 00:57:32.726882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:44440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:07.747 [2024-12-03 00:57:32.726898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:24:07.747 [2024-12-03 00:57:32.726920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:44448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:07.747 [2024-12-03 00:57:32.726937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:24:07.747 [2024-12-03 00:57:32.726959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:44456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:07.747 [2024-12-03 00:57:32.726976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:24:07.747 [2024-12-03 00:57:32.727013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:44464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.747 [2024-12-03 00:57:32.727031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:24:07.747 [2024-12-03 00:57:32.727053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:44472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:07.747 [2024-12-03 00:57:32.727071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:24:07.747 [2024-12-03 00:57:32.727093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:44480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:07.747 [2024-12-03 00:57:32.727110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:24:07.747 [2024-12-03 00:57:32.727133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:43872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:24:07.747 [2024-12-03 00:57:32.727150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:24:07.747 [2024-12-03 00:57:32.727173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:43912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.747 [2024-12-03 00:57:32.727191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:24:07.747 [2024-12-03 00:57:32.727214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:43936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.747 [2024-12-03 00:57:32.727232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:24:07.747 [2024-12-03 00:57:32.727257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:43968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.747 [2024-12-03 00:57:32.727274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:24:07.747 [2024-12-03 00:57:32.727298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:43992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.747 [2024-12-03 00:57:32.727325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:24:07.747 [2024-12-03 00:57:32.727946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:44000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.747 [2024-12-03 00:57:32.727974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:24:07.747 [2024-12-03 00:57:32.728004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:44040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.747 [2024-12-03 00:57:32.728024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:24:07.747 [2024-12-03 00:57:32.728047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:44048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.747 [2024-12-03 00:57:32.728064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:24:07.747 [2024-12-03 00:57:32.728087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:44488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.747 [2024-12-03 00:57:32.728105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:24:07.747 [2024-12-03 00:57:32.728127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:44496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.747 [2024-12-03 00:57:32.728144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:24:07.747 [2024-12-03 00:57:32.728167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 
nsid:1 lba:44504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.747 [2024-12-03 00:57:32.728184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:24:07.747 [2024-12-03 00:57:32.728207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:44512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:07.747 [2024-12-03 00:57:32.728224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:24:07.747 [2024-12-03 00:57:32.728247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:44520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.747 [2024-12-03 00:57:32.728265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:24:07.747 [2024-12-03 00:57:32.728288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:44528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:07.748 [2024-12-03 00:57:32.728306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:24:07.748 [2024-12-03 00:57:32.728329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:44536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.748 [2024-12-03 00:57:32.728348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:24:07.748 [2024-12-03 00:57:32.728370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:44544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:07.748 [2024-12-03 00:57:32.728388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:24:07.748 [2024-12-03 00:57:32.728425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:44552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:07.748 [2024-12-03 00:57:32.728447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:24:07.748 [2024-12-03 00:57:32.728485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:44560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.748 [2024-12-03 00:57:32.728504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:24:07.748 [2024-12-03 00:57:32.728528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:44568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.748 [2024-12-03 00:57:32.728546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:24:07.748 [2024-12-03 00:57:32.728569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:44576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.748 [2024-12-03 00:57:32.728588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:24:07.748 [2024-12-03 00:57:32.728611] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:44584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.748 [2024-12-03 00:57:32.728628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:07.748 [2024-12-03 00:57:32.728651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:44592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:07.748 [2024-12-03 00:57:32.728668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:07.748 [2024-12-03 00:57:32.728691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:44600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:07.748 [2024-12-03 00:57:32.728710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:24:07.748 [2024-12-03 00:57:32.728733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:44608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:07.748 [2024-12-03 00:57:32.728750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:24:07.748 [2024-12-03 00:57:32.728773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:44616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:07.748 [2024-12-03 00:57:32.728793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:24:07.748 [2024-12-03 00:57:32.728815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:44624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.748 [2024-12-03 00:57:32.728833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:24:07.748 [2024-12-03 00:57:32.728857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:44632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:07.748 [2024-12-03 00:57:32.728875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:24:07.748 [2024-12-03 00:57:32.728898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:44072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:07.748 [2024-12-03 00:57:32.728915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:24:07.748 [2024-12-03 00:57:32.728938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:44080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:07.748 [2024-12-03 00:57:32.728956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:24:07.748 [2024-12-03 00:57:32.728979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:44088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.748 [2024-12-03 00:57:32.729005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 
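
The wall of *NOTICE* lines above is the SPDK NVMe driver printing every queued READ/WRITE command on qid:1 together with its completion, each failing with ASYMMETRIC ACCESS INACCESSIBLE (03/02), the path-related status returned while the controller reports the namespace's ANA state as inaccessible. When the same pattern repeats for hundreds of commands it is easier to summarize than to read line by line; the sketch below is a hypothetical helper (not part of the autotest; the file name, regexes and field choices are assumptions of mine) that condenses such console output into per-opcode and per-status counts plus the LBA range touched.

    # summarize_qpair_notices.py - hypothetical helper, not part of the SPDK test suite.
    import re
    import sys
    from collections import Counter

    # One regex per notice type, matching only fields that actually appear in the log.
    CMD_RE = re.compile(
        r"nvme_io_qpair_print_command: \*NOTICE\*: (?P<op>\w+) "
        r"sqid:(?P<sqid>\d+) cid:(?P<cid>\d+) nsid:(?P<nsid>\d+) "
        r"lba:(?P<lba>\d+) len:(?P<len>\d+)")
    CPL_RE = re.compile(
        r"spdk_nvme_print_completion: \*NOTICE\*: (?P<status>.+?) "
        r"\((?P<sct>[0-9a-f]{2})/(?P<sc>[0-9a-f]{2})\) qid:(?P<qid>\d+) cid:(?P<cid>\d+)")

    def summarize(stream):
        """Count printed commands by opcode, completions by status string,
        and track the LBA range the failed I/O covered."""
        ops, statuses, lbas = Counter(), Counter(), []
        for line in stream:
            for m in CMD_RE.finditer(line):
                ops[m.group("op")] += 1
                lbas.append(int(m.group("lba")))
            for m in CPL_RE.finditer(line):
                key = "%s (%s/%s)" % (m.group("status"), m.group("sct"), m.group("sc"))
                statuses[key] += 1
        lba_range = (min(lbas), max(lbas)) if lbas else None
        return ops, statuses, lba_range

    if __name__ == "__main__":
        # Usage (hypothetical): python3 summarize_qpair_notices.py < console.log
        ops, statuses, lba_range = summarize(sys.stdin)
        print("commands printed:   ", dict(ops))
        print("completions printed:", dict(statuses))
        print("lba range:          ", lba_range)

Every completion in the span above carries the same (03/02) status, so for this excerpt the summary collapses to two opcode counts, a single status line, and an LBA window spanning roughly 43536 to 44896.
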
00:24:07.748 [2024-12-03 00:57:32.729029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:44096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.748 [2024-12-03 00:57:32.729047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:24:07.748 [2024-12-03 00:57:32.729071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:43536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.748 [2024-12-03 00:57:32.729089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:24:07.748 [2024-12-03 00:57:32.729112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:43544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.748 [2024-12-03 00:57:32.729131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:24:07.748 [2024-12-03 00:57:32.729153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:43584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.748 [2024-12-03 00:57:32.729171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:24:07.748 [2024-12-03 00:57:32.729193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:43600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.748 [2024-12-03 00:57:32.729211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:24:07.748 [2024-12-03 00:57:32.729233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:43608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.748 [2024-12-03 00:57:32.729252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:24:07.748 [2024-12-03 00:57:32.729276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:43616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.748 [2024-12-03 00:57:32.729293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:24:07.748 [2024-12-03 00:57:32.729317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:43632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.748 [2024-12-03 00:57:32.729334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:24:07.748 [2024-12-03 00:57:32.729356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:43664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.748 [2024-12-03 00:57:32.729374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:24:07.748 [2024-12-03 00:57:32.729396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:44104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.748 [2024-12-03 00:57:32.729426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:24:07.748 [2024-12-03 00:57:32.729452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:44112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:07.748 [2024-12-03 00:57:32.729471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:24:07.748 [2024-12-03 00:57:32.729493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:44120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:07.748 [2024-12-03 00:57:32.729521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:24:07.748 [2024-12-03 00:57:32.729545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:44128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:07.748 [2024-12-03 00:57:32.729564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:24:07.748 [2024-12-03 00:57:32.729587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:44136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.748 [2024-12-03 00:57:32.729605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:24:07.748 [2024-12-03 00:57:32.729628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:44144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:07.748 [2024-12-03 00:57:32.729646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:24:07.748 [2024-12-03 00:57:32.729669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:44152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:07.748 [2024-12-03 00:57:32.729686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:24:07.748 [2024-12-03 00:57:32.729708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:44160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:07.748 [2024-12-03 00:57:32.729726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:24:07.748 [2024-12-03 00:57:32.729755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:44168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:07.748 [2024-12-03 00:57:32.729773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:24:07.748 [2024-12-03 00:57:32.729796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:44176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.748 [2024-12-03 00:57:32.729813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:24:07.748 [2024-12-03 00:57:32.729836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:44184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.748 [2024-12-03 00:57:32.729854] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:24:07.748 [2024-12-03 00:57:32.729878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:44192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.748 [2024-12-03 00:57:32.729896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:24:07.748 [2024-12-03 00:57:32.729918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:44200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.748 [2024-12-03 00:57:32.729935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:24:07.748 [2024-12-03 00:57:32.729965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:44208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:07.749 [2024-12-03 00:57:32.729983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:24:07.749 [2024-12-03 00:57:32.730006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:44216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:07.749 [2024-12-03 00:57:32.730032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:07.749 [2024-12-03 00:57:32.730057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:44224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.749 [2024-12-03 00:57:32.730075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:07.749 [2024-12-03 00:57:32.730097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:44232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:07.749 [2024-12-03 00:57:32.730115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:24:07.749 [2024-12-03 00:57:32.730138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:43720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.749 [2024-12-03 00:57:32.730155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:24:07.749 [2024-12-03 00:57:32.730177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:43744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.749 [2024-12-03 00:57:32.730194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:24:07.749 [2024-12-03 00:57:32.730236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:43768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.749 [2024-12-03 00:57:32.730255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:24:07.749 [2024-12-03 00:57:32.730277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:43816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:24:07.749 [2024-12-03 00:57:32.730296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:24:07.749 [2024-12-03 00:57:32.730319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:43824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.749 [2024-12-03 00:57:32.730336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:24:07.749 [2024-12-03 00:57:32.730852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:43832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.749 [2024-12-03 00:57:32.730880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:24:07.749 [2024-12-03 00:57:32.730907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:43848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.749 [2024-12-03 00:57:32.730927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:24:07.749 [2024-12-03 00:57:32.730950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:43856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.749 [2024-12-03 00:57:32.730967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:24:07.749 [2024-12-03 00:57:32.730989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:44640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:07.749 [2024-12-03 00:57:32.731006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:24:07.749 [2024-12-03 00:57:32.731027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:44648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.749 [2024-12-03 00:57:32.731043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:24:07.749 [2024-12-03 00:57:32.731079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:44656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.749 [2024-12-03 00:57:32.731098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:24:07.749 [2024-12-03 00:57:32.731119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:44664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.749 [2024-12-03 00:57:32.731136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:24:07.749 [2024-12-03 00:57:32.731159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:44672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:07.749 [2024-12-03 00:57:32.731176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:24:07.749 [2024-12-03 00:57:32.731197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 
nsid:1 lba:44680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.749 [2024-12-03 00:57:32.731214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:24:07.749 [2024-12-03 00:57:32.731236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:44688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:07.749 [2024-12-03 00:57:32.731252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:24:07.749 [2024-12-03 00:57:32.731274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:44696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.749 [2024-12-03 00:57:32.731290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:24:07.749 [2024-12-03 00:57:32.731313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:44704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:07.749 [2024-12-03 00:57:32.731329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:24:07.749 [2024-12-03 00:57:32.731351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:44712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.749 [2024-12-03 00:57:32.731368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:24:07.749 [2024-12-03 00:57:32.731389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:44720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.749 [2024-12-03 00:57:32.731406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:24:07.749 [2024-12-03 00:57:32.731471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:44728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:07.749 [2024-12-03 00:57:32.731493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:24:07.749 [2024-12-03 00:57:32.731517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:44736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.749 [2024-12-03 00:57:32.731535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:24:07.749 [2024-12-03 00:57:32.731560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:44744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.749 [2024-12-03 00:57:32.731577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:24:07.749 [2024-12-03 00:57:32.731611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:44752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.749 [2024-12-03 00:57:32.731631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:24:07.749 [2024-12-03 00:57:32.731654] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:44760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:07.749 [2024-12-03 00:57:32.731672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:24:07.749 [2024-12-03 00:57:32.731695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:44768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.749 [2024-12-03 00:57:32.731712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:24:07.749 [2024-12-03 00:57:32.731735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:44776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:07.749 [2024-12-03 00:57:32.731752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:24:07.749 [2024-12-03 00:57:32.731775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:44784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:07.749 [2024-12-03 00:57:32.731817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:24:07.749 [2024-12-03 00:57:32.731839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:44792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:07.749 [2024-12-03 00:57:32.731856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:24:07.749 [2024-12-03 00:57:32.731878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:44800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:07.749 [2024-12-03 00:57:32.731894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:24:07.749 [2024-12-03 00:57:32.731916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:44808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:07.749 [2024-12-03 00:57:32.731932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:07.749 [2024-12-03 00:57:32.731954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:44816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:07.749 [2024-12-03 00:57:32.731971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:24:07.749 [2024-12-03 00:57:32.731993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:44824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.749 [2024-12-03 00:57:32.732009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:24:07.749 [2024-12-03 00:57:32.732032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:44832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:07.749 [2024-12-03 00:57:32.732049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 
00:24:07.749 [2024-12-03 00:57:32.732070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:44840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.749 [2024-12-03 00:57:32.732088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:24:07.749 [2024-12-03 00:57:32.732110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:44848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:07.749 [2024-12-03 00:57:32.732135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:24:07.750 [2024-12-03 00:57:32.732158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:44856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:07.750 [2024-12-03 00:57:32.732176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:24:07.750 [2024-12-03 00:57:32.732197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:44864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:07.750 [2024-12-03 00:57:32.732214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:24:07.750 [2024-12-03 00:57:32.732236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:44872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:07.750 [2024-12-03 00:57:32.732253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:24:07.750 [2024-12-03 00:57:32.732275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:44880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.750 [2024-12-03 00:57:32.732292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:24:07.750 [2024-12-03 00:57:32.732315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:44888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:07.750 [2024-12-03 00:57:32.732331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:24:07.750 [2024-12-03 00:57:32.732354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:44896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:07.750 [2024-12-03 00:57:32.732371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:24:07.750 [2024-12-03 00:57:32.732393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:44240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.750 [2024-12-03 00:57:32.732410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:24:07.750 [2024-12-03 00:57:32.732475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:44248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.750 [2024-12-03 00:57:32.732496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:17 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:24:07.750 [2024-12-03 00:57:32.732518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:44256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:07.750 [2024-12-03 00:57:32.732535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:24:07.750 [2024-12-03 00:57:32.732559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:44264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:07.750 [2024-12-03 00:57:32.732576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:24:07.750 [2024-12-03 00:57:32.732598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:44272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.750 [2024-12-03 00:57:32.732615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:24:07.750 [2024-12-03 00:57:32.732638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:44280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:07.750 [2024-12-03 00:57:32.732664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:24:07.750 [2024-12-03 00:57:32.732689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:44288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:07.750 [2024-12-03 00:57:32.732707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:24:07.750 [2024-12-03 00:57:32.732730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:44296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:07.750 [2024-12-03 00:57:32.732753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:24:07.750 [2024-12-03 00:57:32.732777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:44304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:07.750 [2024-12-03 00:57:32.732809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:24:07.750 [2024-12-03 00:57:32.732831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:44312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:07.750 [2024-12-03 00:57:32.732847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:24:07.750 [2024-12-03 00:57:32.732870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:44320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.750 [2024-12-03 00:57:32.732886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:24:07.750 [2024-12-03 00:57:32.732908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:44328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:07.750 [2024-12-03 00:57:32.732926] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:24:07.750 [2024-12-03 00:57:32.732948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:44336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:07.750 [2024-12-03 00:57:32.732964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:24:07.750 [2024-12-03 00:57:32.732986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:44344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:07.750 [2024-12-03 00:57:32.733002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:24:07.750 [2024-12-03 00:57:32.733024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:44352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:07.750 [2024-12-03 00:57:32.733041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:24:07.750 [2024-12-03 00:57:32.733062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:44360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:07.750 [2024-12-03 00:57:32.733079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:24:07.750 [2024-12-03 00:57:32.733101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:44368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:07.750 [2024-12-03 00:57:32.733118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:24:07.750 [2024-12-03 00:57:32.733140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:44376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.750 [2024-12-03 00:57:32.733156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:24:07.750 [2024-12-03 00:57:32.733186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:44384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.750 [2024-12-03 00:57:32.733204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:24:07.750 [2024-12-03 00:57:32.733225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:44392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:07.750 [2024-12-03 00:57:32.733242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:24:07.750 [2024-12-03 00:57:32.733264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:44400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:07.750 [2024-12-03 00:57:32.733280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:07.750 [2024-12-03 00:57:32.733302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:44408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:24:07.750 [2024-12-03 00:57:32.733319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:07.750 [2024-12-03 00:57:32.733340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:44416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.750 [2024-12-03 00:57:32.733357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:24:07.750 [2024-12-03 00:57:32.733378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:44424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:07.750 [2024-12-03 00:57:32.733402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:24:07.750 [2024-12-03 00:57:32.733464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:44432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:07.750 [2024-12-03 00:57:32.733485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:24:07.750 [2024-12-03 00:57:32.733509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:44440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:07.750 [2024-12-03 00:57:32.733527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:24:07.750 [2024-12-03 00:57:32.733549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:44448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:07.750 [2024-12-03 00:57:32.733566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:24:07.750 [2024-12-03 00:57:32.733590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:44456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:07.750 [2024-12-03 00:57:32.733613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:24:07.750 [2024-12-03 00:57:32.733638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:44464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.750 [2024-12-03 00:57:32.733655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:24:07.750 [2024-12-03 00:57:32.733678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:44472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:07.750 [2024-12-03 00:57:32.733695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:24:07.750 [2024-12-03 00:57:32.733727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:44480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:07.750 [2024-12-03 00:57:32.733746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:24:07.750 [2024-12-03 00:57:32.733769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 
lba:43872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.750 [2024-12-03 00:57:32.733787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:24:07.750 [2024-12-03 00:57:32.733824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:43912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.751 [2024-12-03 00:57:32.733840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:24:07.751 [2024-12-03 00:57:32.733861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:43936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.751 [2024-12-03 00:57:32.733877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:24:07.751 [2024-12-03 00:57:32.733900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:43968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.751 [2024-12-03 00:57:32.733917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:24:07.751 [2024-12-03 00:57:32.734572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:43992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.751 [2024-12-03 00:57:32.734601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:24:07.751 [2024-12-03 00:57:32.734631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:44000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.751 [2024-12-03 00:57:32.734650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:24:07.751 [2024-12-03 00:57:32.734675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:44040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.751 [2024-12-03 00:57:32.734692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:24:07.751 [2024-12-03 00:57:32.734715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:44048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.751 [2024-12-03 00:57:32.734733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:24:07.751 [2024-12-03 00:57:32.734773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:44488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.751 [2024-12-03 00:57:32.734791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:24:07.751 [2024-12-03 00:57:32.734813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:44496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.751 [2024-12-03 00:57:32.734829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:24:07.751 [2024-12-03 00:57:32.734851] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:44504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.751 [2024-12-03 00:57:32.734867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:24:07.751 [2024-12-03 00:57:32.734888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:44512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:07.751 [2024-12-03 00:57:32.734917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:24:07.751 [2024-12-03 00:57:32.734942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:44520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.751 [2024-12-03 00:57:32.734966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:24:07.751 [2024-12-03 00:57:32.734989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:44528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:07.751 [2024-12-03 00:57:32.735006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:24:07.751 [2024-12-03 00:57:32.735029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:44536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.751 [2024-12-03 00:57:32.735047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:24:07.751 [2024-12-03 00:57:32.735067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:44544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:07.751 [2024-12-03 00:57:32.735084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:24:07.751 [2024-12-03 00:57:32.735106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:44552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:07.751 [2024-12-03 00:57:32.735122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:24:07.751 [2024-12-03 00:57:32.735144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:44560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.751 [2024-12-03 00:57:32.735161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:24:07.751 [2024-12-03 00:57:32.735183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:44568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.751 [2024-12-03 00:57:32.735200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:24:07.751 [2024-12-03 00:57:32.735222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:44576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.751 [2024-12-03 00:57:32.735239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:007f p:0 m:0 dnr:0 
00:24:07.751 [2024-12-03 00:57:32.735260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:44584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.751 [2024-12-03 00:57:32.735277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:07.751 [2024-12-03 00:57:32.735299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:44592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:07.751 [2024-12-03 00:57:32.735315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:07.751 [2024-12-03 00:57:32.735337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:44600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:07.751 [2024-12-03 00:57:32.735353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:24:07.751 [2024-12-03 00:57:32.735374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:44608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:07.751 [2024-12-03 00:57:32.735399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:24:07.751 [2024-12-03 00:57:32.735439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:44616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:07.751 [2024-12-03 00:57:32.735483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:24:07.751 [2024-12-03 00:57:32.735508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:44624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.751 [2024-12-03 00:57:32.735526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:24:07.751 [2024-12-03 00:57:32.735549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:44632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:07.751 [2024-12-03 00:57:32.735567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:24:07.751 [2024-12-03 00:57:32.735589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:44072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:07.751 [2024-12-03 00:57:32.735606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:24:07.751 [2024-12-03 00:57:32.735628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:44080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:07.751 [2024-12-03 00:57:32.735647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:24:07.751 [2024-12-03 00:57:32.735669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:44088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.751 [2024-12-03 00:57:32.735687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:33 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:24:07.751 [2024-12-03 00:57:32.735710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:44096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.751 [2024-12-03 00:57:32.735728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:24:07.751 [2024-12-03 00:57:32.735750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:43536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.751 [2024-12-03 00:57:32.735768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:24:07.751 [2024-12-03 00:57:32.735805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:43544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.751 [2024-12-03 00:57:32.735822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:24:07.751 [2024-12-03 00:57:32.735843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:43584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.751 [2024-12-03 00:57:32.735860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:24:07.751 [2024-12-03 00:57:32.735881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:43600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.751 [2024-12-03 00:57:32.735898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:24:07.751 [2024-12-03 00:57:32.735920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:43608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.751 [2024-12-03 00:57:32.735937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:24:07.751 [2024-12-03 00:57:32.735971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:43616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.751 [2024-12-03 00:57:32.735989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:24:07.751 [2024-12-03 00:57:32.736011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:43632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.751 [2024-12-03 00:57:32.736029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:24:07.751 [2024-12-03 00:57:32.736050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:43664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.751 [2024-12-03 00:57:32.736067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:24:07.751 [2024-12-03 00:57:32.736089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:44104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.751 [2024-12-03 00:57:32.736105] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:24:07.752 [2024-12-03 00:57:32.736127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:44112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:07.752 [2024-12-03 00:57:32.736144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:24:07.752 [2024-12-03 00:57:32.736166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:44120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:07.752 [2024-12-03 00:57:32.736183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:24:07.752 [2024-12-03 00:57:32.736205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:44128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:07.752 [2024-12-03 00:57:32.736223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:24:07.752 [2024-12-03 00:57:32.736244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:44136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.752 [2024-12-03 00:57:32.736262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:24:07.752 [2024-12-03 00:57:32.736283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:44144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:07.752 [2024-12-03 00:57:32.736300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:24:07.752 [2024-12-03 00:57:32.736322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:44152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:07.752 [2024-12-03 00:57:32.736339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:24:07.752 [2024-12-03 00:57:32.736361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:44160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:07.752 [2024-12-03 00:57:32.736378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:24:07.752 [2024-12-03 00:57:32.736399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:44168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:07.752 [2024-12-03 00:57:32.736432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:24:07.752 [2024-12-03 00:57:32.736492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:44176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.752 [2024-12-03 00:57:32.736513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:24:07.752 [2024-12-03 00:57:32.736536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:44184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.752 
[2024-12-03 00:57:32.736553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:24:07.752 [2024-12-03 00:57:32.736576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:44192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.752 [2024-12-03 00:57:32.736594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:24:07.752 [2024-12-03 00:57:32.736617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:44200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.752 [2024-12-03 00:57:32.736634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:24:07.752 [2024-12-03 00:57:32.736656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:44208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:07.752 [2024-12-03 00:57:32.736675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:24:07.752 [2024-12-03 00:57:32.736698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:44216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:07.752 [2024-12-03 00:57:32.736716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:07.752 [2024-12-03 00:57:32.736739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:44224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.752 [2024-12-03 00:57:32.736771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:07.752 [2024-12-03 00:57:32.736813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:44232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:07.752 [2024-12-03 00:57:32.736830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:24:07.752 [2024-12-03 00:57:32.736851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:43720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.752 [2024-12-03 00:57:32.736867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:24:07.752 [2024-12-03 00:57:32.736889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:43744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.752 [2024-12-03 00:57:32.736906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:24:07.752 [2024-12-03 00:57:32.736928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:43768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.752 [2024-12-03 00:57:32.736945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:24:07.752 [2024-12-03 00:57:32.736967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:43816 
len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.752 [2024-12-03 00:57:32.736984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:24:07.752 [2024-12-03 00:57:32.737492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:43824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.752 [2024-12-03 00:57:32.737533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:24:07.752 [2024-12-03 00:57:32.737563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:43832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.752 [2024-12-03 00:57:32.737583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:24:07.752 [2024-12-03 00:57:32.737607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:43848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.752 [2024-12-03 00:57:32.737625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:24:07.752 [2024-12-03 00:57:32.737647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:43856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.752 [2024-12-03 00:57:32.737664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:24:07.752 [2024-12-03 00:57:32.737688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:44640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:07.752 [2024-12-03 00:57:32.737712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:24:07.752 [2024-12-03 00:57:32.737736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:44648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.752 [2024-12-03 00:57:32.737753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:24:07.752 [2024-12-03 00:57:32.737776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:44656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.752 [2024-12-03 00:57:32.737810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:24:07.752 [2024-12-03 00:57:32.737831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:44664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.752 [2024-12-03 00:57:32.737848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:24:07.752 [2024-12-03 00:57:32.737869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:44672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:07.752 [2024-12-03 00:57:32.737886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:24:07.752 [2024-12-03 00:57:32.737907] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:44680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.752 [2024-12-03 00:57:32.737924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:24:07.752 [2024-12-03 00:57:32.737945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:44688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:07.752 [2024-12-03 00:57:32.737962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:24:07.752 [2024-12-03 00:57:32.737983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:44696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.752 [2024-12-03 00:57:32.737999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:24:07.752 [2024-12-03 00:57:32.738020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:44704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:07.752 [2024-12-03 00:57:32.738046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:24:07.752 [2024-12-03 00:57:32.738071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:44712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.752 [2024-12-03 00:57:32.738088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:24:07.752 [2024-12-03 00:57:32.738110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:44720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.752 [2024-12-03 00:57:32.738127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:24:07.753 [2024-12-03 00:57:32.738148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:44728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:07.753 [2024-12-03 00:57:32.738164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:24:07.753 [2024-12-03 00:57:32.738186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:44736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.753 [2024-12-03 00:57:32.738247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:24:07.753 [2024-12-03 00:57:32.738274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:44744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.753 [2024-12-03 00:57:32.738292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:24:07.753 [2024-12-03 00:57:32.738316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:44752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.753 [2024-12-03 00:57:32.738333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:003a p:0 m:0 
dnr:0 00:24:07.753 [2024-12-03 00:57:32.738357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:44760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:07.753 [2024-12-03 00:57:32.738374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:24:07.753 [2024-12-03 00:57:32.738397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:44768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.753 [2024-12-03 00:57:32.738421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:24:07.753 [2024-12-03 00:57:32.738464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:44776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:07.753 [2024-12-03 00:57:32.738485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:24:07.753 [2024-12-03 00:57:32.738508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:44784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:07.753 [2024-12-03 00:57:32.738526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:24:07.753 [2024-12-03 00:57:32.738565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:44792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:07.753 [2024-12-03 00:57:32.738583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:24:07.753 [2024-12-03 00:57:32.738606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:44800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:07.753 [2024-12-03 00:57:32.738623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:24:07.753 [2024-12-03 00:57:32.738657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:44808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:07.753 [2024-12-03 00:57:32.738675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:07.753 [2024-12-03 00:57:32.738698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:44816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:07.753 [2024-12-03 00:57:32.738716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:24:07.753 [2024-12-03 00:57:32.738739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:44824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.753 [2024-12-03 00:57:32.738771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:24:07.753 [2024-12-03 00:57:32.738793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:44832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:07.753 [2024-12-03 00:57:32.738811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:64 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:24:07.753 [2024-12-03 00:57:32.738833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:44840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.753 [2024-12-03 00:57:32.738850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:24:07.753 [2024-12-03 00:57:32.746838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:44848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:07.753 [2024-12-03 00:57:32.746873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:24:07.753 [2024-12-03 00:57:32.746899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:44856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:07.753 [2024-12-03 00:57:32.746917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:24:07.753 [2024-12-03 00:57:32.746938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:44864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:07.753 [2024-12-03 00:57:32.746957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:24:07.753 [2024-12-03 00:57:32.746979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:44872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:07.753 [2024-12-03 00:57:32.746995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:24:07.753 [2024-12-03 00:57:32.747017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:44880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.753 [2024-12-03 00:57:32.747033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:24:07.753 [2024-12-03 00:57:32.747056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:44888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:07.753 [2024-12-03 00:57:32.747072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:24:07.753 [2024-12-03 00:57:32.747094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:44896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:07.753 [2024-12-03 00:57:32.747112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:24:07.753 [2024-12-03 00:57:32.747150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:44240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.753 [2024-12-03 00:57:32.747169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:24:07.753 [2024-12-03 00:57:32.747192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:44248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.753 [2024-12-03 00:57:32.747208] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:24:07.753 [2024-12-03 00:57:32.747230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:44256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:07.753 [2024-12-03 00:57:32.747247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:24:07.753 [2024-12-03 00:57:32.747269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:44264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:07.753 [2024-12-03 00:57:32.747285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:24:07.753 [2024-12-03 00:57:32.747307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:44272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.753 [2024-12-03 00:57:32.747324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:24:07.753 [2024-12-03 00:57:32.747345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:44280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:07.753 [2024-12-03 00:57:32.747361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:24:07.753 [2024-12-03 00:57:32.747384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:44288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:07.753 [2024-12-03 00:57:32.747401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:24:07.753 [2024-12-03 00:57:32.747458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:44296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:07.753 [2024-12-03 00:57:32.747478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:24:07.753 [2024-12-03 00:57:32.747501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:44304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:07.753 [2024-12-03 00:57:32.747519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:24:07.753 [2024-12-03 00:57:32.747543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:44312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:07.753 [2024-12-03 00:57:32.747560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:24:07.753 [2024-12-03 00:57:32.747584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:44320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.753 [2024-12-03 00:57:32.747601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:24:07.753 [2024-12-03 00:57:32.747624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:44328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:24:07.753 [2024-12-03 00:57:32.747642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:24:07.753 [2024-12-03 00:57:32.747665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:44336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:07.753 [2024-12-03 00:57:32.747692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:24:07.753 [2024-12-03 00:57:32.747717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:44344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:07.753 [2024-12-03 00:57:32.747735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:24:07.753 [2024-12-03 00:57:32.747758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:44352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:07.753 [2024-12-03 00:57:32.747775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:24:07.753 [2024-12-03 00:57:32.747815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:44360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:07.753 [2024-12-03 00:57:32.747834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:24:07.753 [2024-12-03 00:57:32.747856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:44368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:07.753 [2024-12-03 00:57:32.747873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:24:07.754 [2024-12-03 00:57:32.747894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:44376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.754 [2024-12-03 00:57:32.747911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:24:07.754 [2024-12-03 00:57:32.747932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:44384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.754 [2024-12-03 00:57:32.747948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:24:07.754 [2024-12-03 00:57:32.747971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:44392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:07.754 [2024-12-03 00:57:32.747987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:24:07.754 [2024-12-03 00:57:32.748009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:44400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:07.754 [2024-12-03 00:57:32.748026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:07.754 [2024-12-03 00:57:32.748047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 
lba:44408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:07.754 [2024-12-03 00:57:32.748064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:07.754 [2024-12-03 00:57:32.748085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:44416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.754 [2024-12-03 00:57:32.748102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:24:07.754 [2024-12-03 00:57:32.748123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:44424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:07.754 [2024-12-03 00:57:32.748140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:24:07.754 [2024-12-03 00:57:32.748162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:44432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:07.754 [2024-12-03 00:57:32.748188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:24:07.754 [2024-12-03 00:57:32.748210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:44440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:07.754 [2024-12-03 00:57:32.748228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:24:07.754 [2024-12-03 00:57:32.748250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:44448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:07.754 [2024-12-03 00:57:32.748266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:24:07.754 [2024-12-03 00:57:32.748289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:44456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:07.754 [2024-12-03 00:57:32.748306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:24:07.754 [2024-12-03 00:57:32.748327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:44464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.754 [2024-12-03 00:57:32.748344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:24:07.754 [2024-12-03 00:57:32.748365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:44472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:07.754 [2024-12-03 00:57:32.748382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:24:07.754 [2024-12-03 00:57:32.748404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:44480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:07.754 [2024-12-03 00:57:32.748436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:24:07.754 [2024-12-03 00:57:32.748474] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:43872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.754 [2024-12-03 00:57:32.748493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:24:07.754 [2024-12-03 00:57:32.748517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:43912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.754 [2024-12-03 00:57:32.748534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:24:07.754 [2024-12-03 00:57:32.748557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:43936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.754 [2024-12-03 00:57:32.748575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:24:07.754 [2024-12-03 00:57:32.749217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:43968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.754 [2024-12-03 00:57:32.749247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:24:07.754 [2024-12-03 00:57:32.749275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:43992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.754 [2024-12-03 00:57:32.749295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:24:07.754 [2024-12-03 00:57:32.749316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:44000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.754 [2024-12-03 00:57:32.749345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:24:07.754 [2024-12-03 00:57:32.749370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:44040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.754 [2024-12-03 00:57:32.749387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:24:07.754 [2024-12-03 00:57:32.749409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:44048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.754 [2024-12-03 00:57:32.749459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:24:07.754 [2024-12-03 00:57:32.749487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:44488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.754 [2024-12-03 00:57:32.749506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:24:07.754 [2024-12-03 00:57:32.749529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:44496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.754 [2024-12-03 00:57:32.749547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0075 p:0 m:0 
dnr:0 00:24:07.754 [2024-12-03 00:57:32.749570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:44504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.754 [2024-12-03 00:57:32.749588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:24:07.754 [2024-12-03 00:57:32.749611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:44512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:07.754 [2024-12-03 00:57:32.749629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:24:07.754 [2024-12-03 00:57:32.749652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:44520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.754 [2024-12-03 00:57:32.749669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:24:07.754 [2024-12-03 00:57:32.749692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:44528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:07.754 [2024-12-03 00:57:32.749709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:24:07.754 [2024-12-03 00:57:32.749732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:44536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.754 [2024-12-03 00:57:32.749750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:24:07.754 [2024-12-03 00:57:32.749772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:44544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:07.754 [2024-12-03 00:57:32.749804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:24:07.754 [2024-12-03 00:57:32.749829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:44552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:07.754 [2024-12-03 00:57:32.749846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:24:07.754 [2024-12-03 00:57:32.749868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:44560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.754 [2024-12-03 00:57:32.749885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:24:07.754 [2024-12-03 00:57:32.749917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:44568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.754 [2024-12-03 00:57:32.749935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:24:07.754 [2024-12-03 00:57:32.749957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:44576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.754 [2024-12-03 00:57:32.749975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:24:07.754 [2024-12-03 00:57:32.749996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:44584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.754 [2024-12-03 00:57:32.750014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:07.754 [2024-12-03 00:57:32.750036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:44592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:07.754 [2024-12-03 00:57:32.750053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:07.754 [2024-12-03 00:57:32.750075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:44600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:07.754 [2024-12-03 00:57:32.750092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:24:07.754 [2024-12-03 00:57:32.750114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:44608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:07.754 [2024-12-03 00:57:32.750131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:24:07.754 [2024-12-03 00:57:32.750152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:44616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:07.755 [2024-12-03 00:57:32.750169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:24:07.755 [2024-12-03 00:57:32.750191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:44624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.755 [2024-12-03 00:57:32.750240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:24:07.755 [2024-12-03 00:57:32.750266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:44632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:07.755 [2024-12-03 00:57:32.750284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:24:07.755 [2024-12-03 00:57:32.750307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:44072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:07.755 [2024-12-03 00:57:32.750324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:24:07.755 [2024-12-03 00:57:32.750347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:44080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:07.755 [2024-12-03 00:57:32.750364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:24:07.755 [2024-12-03 00:57:32.750386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:44088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.755 [2024-12-03 00:57:32.750404] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:24:07.755 [2024-12-03 00:57:32.750453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:44096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.755 [2024-12-03 00:57:32.750474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:24:07.755 [2024-12-03 00:57:32.750497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:43536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.755 [2024-12-03 00:57:32.750515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:24:07.755 [2024-12-03 00:57:32.750538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:43544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.755 [2024-12-03 00:57:32.750556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:24:07.755 [2024-12-03 00:57:32.750578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:43584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.755 [2024-12-03 00:57:32.750595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:24:07.755 [2024-12-03 00:57:32.750619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:43600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.755 [2024-12-03 00:57:32.750636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:24:07.755 [2024-12-03 00:57:32.750659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:43608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.755 [2024-12-03 00:57:32.750677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:24:07.755 [2024-12-03 00:57:32.750700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:43616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.755 [2024-12-03 00:57:32.750717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:24:07.755 [2024-12-03 00:57:32.750755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:43632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.755 [2024-12-03 00:57:32.750771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:24:07.755 [2024-12-03 00:57:32.750792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:43664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.755 [2024-12-03 00:57:32.750810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:24:07.755 [2024-12-03 00:57:32.750832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:44104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:24:07.755 [2024-12-03 00:57:32.750850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:24:07.755 [2024-12-03 00:57:32.750871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:44112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:07.755 [2024-12-03 00:57:32.750888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:24:07.755 [2024-12-03 00:57:32.750910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:44120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:07.755 [2024-12-03 00:57:32.750927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:24:07.755 [2024-12-03 00:57:32.750949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:44128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:07.755 [2024-12-03 00:57:32.750974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:24:07.755 [2024-12-03 00:57:32.750997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:44136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.755 [2024-12-03 00:57:32.751015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:24:07.755 [2024-12-03 00:57:32.751037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:44144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:07.755 [2024-12-03 00:57:32.751054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:24:07.755 [2024-12-03 00:57:32.751077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:44152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:07.755 [2024-12-03 00:57:32.751093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:24:07.755 [2024-12-03 00:57:32.751114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:44160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:07.755 [2024-12-03 00:57:32.751131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:24:07.755 [2024-12-03 00:57:32.751153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:44168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:07.755 [2024-12-03 00:57:32.751170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:24:07.755 [2024-12-03 00:57:32.751191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:44176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.755 [2024-12-03 00:57:32.751208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:24:07.755 [2024-12-03 00:57:32.751230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 
lba:44184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.755 [2024-12-03 00:57:32.751246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:24:07.755 [2024-12-03 00:57:32.751268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:44192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.755 [2024-12-03 00:57:32.751284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:24:07.755 [2024-12-03 00:57:32.751305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:44200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.755 [2024-12-03 00:57:32.751322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:24:07.755 [2024-12-03 00:57:32.751343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:44208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:07.755 [2024-12-03 00:57:32.751360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:24:07.755 [2024-12-03 00:57:32.751382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:44216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:07.755 [2024-12-03 00:57:32.751399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:07.755 [2024-12-03 00:57:32.751436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:44224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.755 [2024-12-03 00:57:32.751476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:07.755 [2024-12-03 00:57:32.751502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:44232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:07.755 [2024-12-03 00:57:32.751521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:24:07.755 [2024-12-03 00:57:32.751543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:43720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.755 [2024-12-03 00:57:32.751563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:24:07.755 [2024-12-03 00:57:32.751586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:43744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.755 [2024-12-03 00:57:32.751621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:24:07.755 [2024-12-03 00:57:32.751645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:43768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.755 [2024-12-03 00:57:32.751663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:24:07.755 [2024-12-03 00:57:32.752161] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:43816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.755 [2024-12-03 00:57:32.752188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:24:07.755 [2024-12-03 00:57:32.752216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:43824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.755 [2024-12-03 00:57:32.752235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:24:07.755 [2024-12-03 00:57:32.752257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:43832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.755 [2024-12-03 00:57:32.752275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:24:07.755 [2024-12-03 00:57:32.752298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:43848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.755 [2024-12-03 00:57:32.752314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:24:07.755 [2024-12-03 00:57:32.752337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:43856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.756 [2024-12-03 00:57:32.752353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:24:07.756 [2024-12-03 00:57:32.752375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:44640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:07.756 [2024-12-03 00:57:32.752392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:24:07.756 [2024-12-03 00:57:32.752413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:44648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.756 [2024-12-03 00:57:32.752446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:24:07.756 [2024-12-03 00:57:32.752486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:44656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.756 [2024-12-03 00:57:32.752508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:24:07.756 [2024-12-03 00:57:32.752544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:44664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.756 [2024-12-03 00:57:32.752562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:24:07.756 [2024-12-03 00:57:32.752585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:44672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:07.756 [2024-12-03 00:57:32.752603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0030 p:0 
m:0 dnr:0 00:24:07.756 [2024-12-03 00:57:32.752626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:44680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.756 [2024-12-03 00:57:32.752644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:24:07.756 [2024-12-03 00:57:32.752668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:44688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:07.756 [2024-12-03 00:57:32.752685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:24:07.756 [2024-12-03 00:57:32.752709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:44696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.756 [2024-12-03 00:57:32.752727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:24:07.756 [2024-12-03 00:57:32.752749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:44704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:07.756 [2024-12-03 00:57:32.752767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:24:07.756 [2024-12-03 00:57:32.752803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:44712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.756 [2024-12-03 00:57:32.752820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:24:07.756 [2024-12-03 00:57:32.752842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:44720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.756 [2024-12-03 00:57:32.752860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:24:07.756 [2024-12-03 00:57:32.752881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:44728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:07.756 [2024-12-03 00:57:32.752898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:24:07.756 [2024-12-03 00:57:32.752920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:44736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.756 [2024-12-03 00:57:32.752936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:24:07.756 [2024-12-03 00:57:32.752958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:44744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.756 [2024-12-03 00:57:32.752974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:24:07.756 [2024-12-03 00:57:32.752996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:44752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.756 [2024-12-03 00:57:32.753013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC 
ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:24:07.756 [2024-12-03 00:57:32.753044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:44760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:07.756 [2024-12-03 00:57:32.753061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:24:07.756 [2024-12-03 00:57:32.753083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:44768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.756 [2024-12-03 00:57:32.753100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:24:07.756 [2024-12-03 00:57:32.753122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:44776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:07.756 [2024-12-03 00:57:32.753138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:24:07.756 [2024-12-03 00:57:32.753160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:44784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:07.756 [2024-12-03 00:57:32.753177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:24:07.756 [2024-12-03 00:57:32.753198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:44792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:07.756 [2024-12-03 00:57:32.753216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:24:07.756 [2024-12-03 00:57:32.753238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:44800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:07.756 [2024-12-03 00:57:32.753254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:24:07.756 [2024-12-03 00:57:32.753276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:44808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:07.756 [2024-12-03 00:57:32.753292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:07.756 [2024-12-03 00:57:32.753314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:44816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:07.756 [2024-12-03 00:57:32.753331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:24:07.756 [2024-12-03 00:57:32.753353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:44824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.756 [2024-12-03 00:57:32.753370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:24:07.756 [2024-12-03 00:57:32.753393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:44832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:07.756 [2024-12-03 00:57:32.753411] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:24:07.756 [2024-12-03 00:57:32.753463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:44840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.756 [2024-12-03 00:57:32.753483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:24:07.756 [2024-12-03 00:57:32.753506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:44848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:07.756 [2024-12-03 00:57:32.753523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:24:07.756 [2024-12-03 00:57:32.753547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:44856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:07.756 [2024-12-03 00:57:32.753576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:24:07.756 [2024-12-03 00:57:32.753601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:44864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:07.756 [2024-12-03 00:57:32.753620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:24:07.756 [2024-12-03 00:57:32.753643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:44872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:07.756 [2024-12-03 00:57:32.753660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:24:07.756 [2024-12-03 00:57:32.753682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:44880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.756 [2024-12-03 00:57:32.753700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:24:07.756 [2024-12-03 00:57:32.753723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:44888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:07.756 [2024-12-03 00:57:32.753740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:24:07.756 [2024-12-03 00:57:32.753762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:44896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:07.756 [2024-12-03 00:57:32.753795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:24:07.756 [2024-12-03 00:57:32.753817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:44240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.756 [2024-12-03 00:57:32.753834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:24:07.757 [2024-12-03 00:57:32.753856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:44248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:24:07.757 [2024-12-03 00:57:32.753872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:24:07.757 [2024-12-03 00:57:32.753893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:44256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:07.757 [2024-12-03 00:57:32.753910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:24:07.757 [2024-12-03 00:57:32.753932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:44264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:07.757 [2024-12-03 00:57:32.753949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:24:07.757 [2024-12-03 00:57:32.753970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:44272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.757 [2024-12-03 00:57:32.753988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:24:07.757 [2024-12-03 00:57:32.754009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:44280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:07.757 [2024-12-03 00:57:32.754027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:24:07.757 [2024-12-03 00:57:32.754049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:44288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:07.757 [2024-12-03 00:57:32.754074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:24:07.757 [2024-12-03 00:57:32.754097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:44296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:07.757 [2024-12-03 00:57:32.754115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:24:07.757 [2024-12-03 00:57:32.754137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:44304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:07.757 [2024-12-03 00:57:32.754155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:24:07.757 [2024-12-03 00:57:32.754177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:44312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:07.757 [2024-12-03 00:57:32.754193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:24:07.757 [2024-12-03 00:57:32.754288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:44320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.757 [2024-12-03 00:57:32.754308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:24:07.757 [2024-12-03 00:57:32.754333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 
lba:44328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:07.757 [2024-12-03 00:57:32.754351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:24:07.757 [2024-12-03 00:57:32.754373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:44336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:07.757 [2024-12-03 00:57:32.754391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:24:07.757 [2024-12-03 00:57:32.754427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:44344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:07.757 [2024-12-03 00:57:32.754449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:24:07.757 [2024-12-03 00:57:32.754474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:44352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:07.757 [2024-12-03 00:57:32.754492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:24:07.757 [2024-12-03 00:57:32.754515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:44360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:07.757 [2024-12-03 00:57:32.754532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:24:07.757 [2024-12-03 00:57:32.754571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:44368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:07.757 [2024-12-03 00:57:32.754588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:24:07.757 [2024-12-03 00:57:32.754611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:44376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.757 [2024-12-03 00:57:32.754628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:24:07.757 [2024-12-03 00:57:32.754650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:44384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.757 [2024-12-03 00:57:32.754668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:24:07.757 [2024-12-03 00:57:32.754702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:44392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:07.757 [2024-12-03 00:57:32.754721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:24:07.757 [2024-12-03 00:57:32.754744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:44400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:07.757 [2024-12-03 00:57:32.754777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:07.757 [2024-12-03 00:57:32.754799] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:44408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:07.757 [2024-12-03 00:57:32.754816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:07.757 [2024-12-03 00:57:32.754837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:44416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.757 [2024-12-03 00:57:32.754854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:24:07.757 [2024-12-03 00:57:32.754875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:44424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:07.757 [2024-12-03 00:57:32.754892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:24:07.757 [2024-12-03 00:57:32.754914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:44432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:07.757 [2024-12-03 00:57:32.754931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:24:07.757 [2024-12-03 00:57:32.754952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:44440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:07.757 [2024-12-03 00:57:32.754970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:24:07.757 [2024-12-03 00:57:32.754992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:44448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:07.757 [2024-12-03 00:57:32.755009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:24:07.757 [2024-12-03 00:57:32.755030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:44456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:07.757 [2024-12-03 00:57:32.755047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:24:07.757 [2024-12-03 00:57:32.755070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:44464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.757 [2024-12-03 00:57:32.755087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:24:07.757 [2024-12-03 00:57:32.755108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:44472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:07.757 [2024-12-03 00:57:32.755125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:24:07.757 [2024-12-03 00:57:32.755146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:44480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:07.757 [2024-12-03 00:57:32.755163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:006b p:0 m:0 dnr:0 
00:24:07.757 [2024-12-03 00:57:32.755193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:43872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.757 [2024-12-03 00:57:32.755210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:24:07.757 [2024-12-03 00:57:32.755233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:43912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.757 [2024-12-03 00:57:32.755250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:24:07.757 [2024-12-03 00:57:32.755897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:43936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.757 [2024-12-03 00:57:32.755926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:24:07.757 [2024-12-03 00:57:32.755954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:43968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.757 [2024-12-03 00:57:32.755973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:24:07.757 [2024-12-03 00:57:32.755995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:43992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.757 [2024-12-03 00:57:32.756013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:24:07.757 [2024-12-03 00:57:32.756035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:44000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.757 [2024-12-03 00:57:32.756052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:24:07.757 [2024-12-03 00:57:32.756073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:44040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.757 [2024-12-03 00:57:32.756090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:24:07.757 [2024-12-03 00:57:32.756111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:44048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.757 [2024-12-03 00:57:32.756128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:24:07.757 [2024-12-03 00:57:32.756150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:44488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.758 [2024-12-03 00:57:32.756167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:24:07.758 [2024-12-03 00:57:32.756188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:44496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.758 [2024-12-03 00:57:32.756205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:24:07.758 [2024-12-03 00:57:32.756226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:44504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.758 [2024-12-03 00:57:32.756243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:24:07.758 [2024-12-03 00:57:32.756264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:44512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:07.758 [2024-12-03 00:57:32.756280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:24:07.758 [2024-12-03 00:57:32.756301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:44520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.758 [2024-12-03 00:57:32.756329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:24:07.758 [2024-12-03 00:57:32.756354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:44528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:07.758 [2024-12-03 00:57:32.756372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:24:07.758 [2024-12-03 00:57:32.756411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:44536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.758 [2024-12-03 00:57:32.756428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:24:07.758 [2024-12-03 00:57:32.756467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:44544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:07.758 [2024-12-03 00:57:32.756489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:24:07.758 [2024-12-03 00:57:32.756511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:44552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:07.758 [2024-12-03 00:57:32.756530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:24:07.758 [2024-12-03 00:57:32.756552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:44560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.758 [2024-12-03 00:57:32.756570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:24:07.758 [2024-12-03 00:57:32.756592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:44568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.758 [2024-12-03 00:57:32.756610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:24:07.758 [2024-12-03 00:57:32.756633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:44576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.758 [2024-12-03 00:57:32.756650] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:24:07.758 [2024-12-03 00:57:32.756673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:44584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.758 [2024-12-03 00:57:32.756690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:07.758 [2024-12-03 00:57:32.756728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:44592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:07.758 [2024-12-03 00:57:32.756746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:07.758 [2024-12-03 00:57:32.756768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:44600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:07.758 [2024-12-03 00:57:32.756785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:24:07.758 [2024-12-03 00:57:32.756806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:44608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:07.758 [2024-12-03 00:57:32.756824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:24:07.758 [2024-12-03 00:57:32.756845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:44616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:07.758 [2024-12-03 00:57:32.756871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:24:07.758 [2024-12-03 00:57:32.756895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:44624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.758 [2024-12-03 00:57:32.756913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:24:07.758 [2024-12-03 00:57:32.756934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:44632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:07.758 [2024-12-03 00:57:32.756951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:24:07.758 [2024-12-03 00:57:32.756972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:44072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:07.758 [2024-12-03 00:57:32.756989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:24:07.758 [2024-12-03 00:57:32.757010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:44080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:07.758 [2024-12-03 00:57:32.757026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:24:07.758 [2024-12-03 00:57:32.757047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:44088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:24:07.758 [2024-12-03 00:57:32.757066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:24:07.758 [2024-12-03 00:57:32.757087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:44096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.758 [2024-12-03 00:57:32.757104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:24:07.758 [2024-12-03 00:57:32.757125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:43536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.758 [2024-12-03 00:57:32.757141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:24:07.758 [2024-12-03 00:57:32.757162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:43544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.758 [2024-12-03 00:57:32.757187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:24:07.758 [2024-12-03 00:57:32.757210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:43584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.758 [2024-12-03 00:57:32.757228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:24:07.758 [2024-12-03 00:57:32.757251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:43600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.758 [2024-12-03 00:57:32.757267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:24:07.758 [2024-12-03 00:57:32.757289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:43608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.758 [2024-12-03 00:57:32.757305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:24:07.758 [2024-12-03 00:57:32.757326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:43616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.758 [2024-12-03 00:57:32.757343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:24:07.758 [2024-12-03 00:57:32.757373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:43632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.758 [2024-12-03 00:57:32.757392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:24:07.758 [2024-12-03 00:57:32.757414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:43664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.758 [2024-12-03 00:57:32.757443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:24:07.758 [2024-12-03 00:57:32.757467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 
nsid:1 lba:44104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.758 [2024-12-03 00:57:32.757485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:24:07.758 [2024-12-03 00:57:32.757506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:44112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:07.758 [2024-12-03 00:57:32.757523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:24:07.758 [2024-12-03 00:57:32.757544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:44120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:07.758 [2024-12-03 00:57:32.757560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:24:07.758 [2024-12-03 00:57:32.757582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:44128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:07.758 [2024-12-03 00:57:32.757600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:24:07.758 [2024-12-03 00:57:32.757621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:44136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.758 [2024-12-03 00:57:32.757638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:24:07.758 [2024-12-03 00:57:32.757659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:44144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:07.758 [2024-12-03 00:57:32.757675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:24:07.758 [2024-12-03 00:57:32.757697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:44152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:07.758 [2024-12-03 00:57:32.757714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:24:07.758 [2024-12-03 00:57:32.757735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:44160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:07.758 [2024-12-03 00:57:32.757752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:24:07.759 [2024-12-03 00:57:32.757775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:44168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:07.759 [2024-12-03 00:57:32.757792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:24:07.759 [2024-12-03 00:57:32.757813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:44176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.759 [2024-12-03 00:57:32.757837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:24:07.759 [2024-12-03 00:57:32.757868] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:44184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.759 [2024-12-03 00:57:32.757887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:24:07.759 [2024-12-03 00:57:32.757908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:44192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.759 [2024-12-03 00:57:32.757926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:24:07.759 [2024-12-03 00:57:32.757948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:44200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.759 [2024-12-03 00:57:32.757965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:24:07.759 [2024-12-03 00:57:32.757987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:44208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:07.759 [2024-12-03 00:57:32.758003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:24:07.759 [2024-12-03 00:57:32.758024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:44216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:07.759 [2024-12-03 00:57:32.758041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:07.759 [2024-12-03 00:57:32.758063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:44224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.759 [2024-12-03 00:57:32.758080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:07.759 [2024-12-03 00:57:32.758102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:44232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:07.759 [2024-12-03 00:57:32.758119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:24:07.759 [2024-12-03 00:57:32.758140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:43720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.759 [2024-12-03 00:57:32.758157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:24:07.759 [2024-12-03 00:57:32.758178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:43744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.759 [2024-12-03 00:57:32.758195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:24:07.759 [2024-12-03 00:57:32.758671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:43768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.759 [2024-12-03 00:57:32.758699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 
00:24:07.759 [2024-12-03 00:57:32.758726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:43816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.759 [2024-12-03 00:57:32.758744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:24:07.759 [2024-12-03 00:57:32.758766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:43824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.759 [2024-12-03 00:57:32.758784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:24:07.759 [2024-12-03 00:57:32.758806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:43832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.759 [2024-12-03 00:57:32.758845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:24:07.759 [2024-12-03 00:57:32.758871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:43848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.759 [2024-12-03 00:57:32.758889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:24:07.759 [2024-12-03 00:57:32.758911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:43856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.759 [2024-12-03 00:57:32.758927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:24:07.759 [2024-12-03 00:57:32.758948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:44640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:07.759 [2024-12-03 00:57:32.758966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:24:07.759 [2024-12-03 00:57:32.758988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:44648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.759 [2024-12-03 00:57:32.759004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:24:07.759 [2024-12-03 00:57:32.759027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:44656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.759 [2024-12-03 00:57:32.759044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:24:07.759 [2024-12-03 00:57:32.759065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:44664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.759 [2024-12-03 00:57:32.759082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:24:07.759 [2024-12-03 00:57:32.759103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:44672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:07.759 [2024-12-03 00:57:32.759120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:24:07.759 [2024-12-03 00:57:32.759142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:44680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.759 [2024-12-03 00:57:32.759158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:24:07.759 [2024-12-03 00:57:32.759179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:44688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:07.759 [2024-12-03 00:57:32.759196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:24:07.759 [2024-12-03 00:57:32.759217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:44696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.759 [2024-12-03 00:57:32.759234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:24:07.759 [2024-12-03 00:57:32.759259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:44704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:07.759 [2024-12-03 00:57:32.759275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:24:07.759 [2024-12-03 00:57:32.759297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:44712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.759 [2024-12-03 00:57:32.759331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:24:07.759 [2024-12-03 00:57:32.759354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:44720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.759 [2024-12-03 00:57:32.759372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:24:07.759 [2024-12-03 00:57:32.759393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:44728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:07.759 [2024-12-03 00:57:32.759425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:24:07.759 [2024-12-03 00:57:32.759454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:44736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.759 [2024-12-03 00:57:32.759471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:24:07.759 [2024-12-03 00:57:32.759493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:44744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.759 [2024-12-03 00:57:32.759509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:24:07.759 [2024-12-03 00:57:32.759531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:44752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.759 [2024-12-03 00:57:32.759547] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:24:07.759 [2024-12-03 00:57:32.759569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:44760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:07.759 [2024-12-03 00:57:32.759586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:24:07.759 [2024-12-03 00:57:32.759608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:44768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.759 [2024-12-03 00:57:32.759625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:24:07.759 [2024-12-03 00:57:32.759646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:44776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:07.759 [2024-12-03 00:57:32.759664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:24:07.759 [2024-12-03 00:57:32.759685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:44784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:07.759 [2024-12-03 00:57:32.759702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:24:07.759 [2024-12-03 00:57:32.759723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:44792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:07.759 [2024-12-03 00:57:32.759741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:24:07.759 [2024-12-03 00:57:32.759764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:44800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:07.759 [2024-12-03 00:57:32.759781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:24:07.759 [2024-12-03 00:57:32.759802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:44808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:07.759 [2024-12-03 00:57:32.759827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:07.760 [2024-12-03 00:57:32.759851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:44816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:07.760 [2024-12-03 00:57:32.759868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:24:07.760 [2024-12-03 00:57:32.759889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:44824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.760 [2024-12-03 00:57:32.759906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:24:07.760 [2024-12-03 00:57:32.759928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:44832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:07.760 
[2024-12-03 00:57:32.759945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:24:07.760 [2024-12-03 00:57:32.759966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:44840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.760 [2024-12-03 00:57:32.759983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:24:07.760 [2024-12-03 00:57:32.760004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:44848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:07.760 [2024-12-03 00:57:32.760020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:24:07.760 [2024-12-03 00:57:32.760042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:44856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:07.760 [2024-12-03 00:57:32.760058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:24:07.760 [2024-12-03 00:57:32.760080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:44864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:07.760 [2024-12-03 00:57:32.760096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:24:07.760 [2024-12-03 00:57:32.760119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:44872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:07.760 [2024-12-03 00:57:32.760135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:24:07.760 [2024-12-03 00:57:32.760157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:44880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.760 [2024-12-03 00:57:32.760174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:24:07.760 [2024-12-03 00:57:32.760195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:44888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:07.760 [2024-12-03 00:57:32.760212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:24:07.760 [2024-12-03 00:57:32.760233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:44896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:07.760 [2024-12-03 00:57:32.760250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:24:07.760 [2024-12-03 00:57:32.760271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:44240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.760 [2024-12-03 00:57:32.760290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:24:07.760 [2024-12-03 00:57:32.760321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:44248 len:8 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.760 [2024-12-03 00:57:32.760339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:24:07.760 [2024-12-03 00:57:32.760360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:44256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:07.760 [2024-12-03 00:57:32.760377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:24:07.760 [2024-12-03 00:57:32.760398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:44264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:07.760 [2024-12-03 00:57:32.760427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:24:07.760 [2024-12-03 00:57:32.760452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:44272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.760 [2024-12-03 00:57:32.760470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:24:07.760 [2024-12-03 00:57:32.760492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:44280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:07.760 [2024-12-03 00:57:32.760508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:24:07.760 [2024-12-03 00:57:32.760530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:44288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:07.760 [2024-12-03 00:57:32.760558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:24:07.760 [2024-12-03 00:57:32.760580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:44296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:07.760 [2024-12-03 00:57:32.760597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:24:07.760 [2024-12-03 00:57:32.760619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:44304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:07.760 [2024-12-03 00:57:32.760635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:24:07.760 [2024-12-03 00:57:32.760656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:44312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:07.760 [2024-12-03 00:57:32.760673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:24:07.760 [2024-12-03 00:57:32.760695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:44320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.760 [2024-12-03 00:57:32.760711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:24:07.760 [2024-12-03 00:57:32.760732] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:44328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:07.760 [2024-12-03 00:57:32.760748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:24:07.760 [2024-12-03 00:57:32.760771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:44336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:07.760 [2024-12-03 00:57:32.760788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:24:07.760 [2024-12-03 00:57:32.760818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:44344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:07.760 [2024-12-03 00:57:32.760837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:24:07.760 [2024-12-03 00:57:32.760858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:44352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:07.760 [2024-12-03 00:57:32.760875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:24:07.760 [2024-12-03 00:57:32.760899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:44360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:07.760 [2024-12-03 00:57:32.760915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:24:07.760 [2024-12-03 00:57:32.760937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:44368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:07.760 [2024-12-03 00:57:32.760954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:24:07.760 [2024-12-03 00:57:32.760976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:44376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.760 [2024-12-03 00:57:32.760992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:24:07.760 [2024-12-03 00:57:32.761014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:44384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.760 [2024-12-03 00:57:32.761031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:24:07.760 [2024-12-03 00:57:32.761052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:44392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:07.760 [2024-12-03 00:57:32.761069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:24:07.760 [2024-12-03 00:57:32.761091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:44400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:07.760 [2024-12-03 00:57:32.761115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:07.760 [2024-12-03 
00:57:32.761137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:44408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:07.760 [2024-12-03 00:57:32.761154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:07.760 [2024-12-03 00:57:32.761175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:44416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.760 [2024-12-03 00:57:32.761191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:24:07.760 [2024-12-03 00:57:32.761213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:44424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:07.760 [2024-12-03 00:57:32.761230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:24:07.760 [2024-12-03 00:57:32.761251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:44432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:07.760 [2024-12-03 00:57:32.761268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:24:07.760 [2024-12-03 00:57:32.761290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:44440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:07.760 [2024-12-03 00:57:32.761314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:24:07.760 [2024-12-03 00:57:32.761337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:44448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:07.760 [2024-12-03 00:57:32.761354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:24:07.760 [2024-12-03 00:57:32.761375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:44456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:07.760 [2024-12-03 00:57:32.761392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:24:07.761 [2024-12-03 00:57:32.761425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:44464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.761 [2024-12-03 00:57:32.761445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:24:07.761 [2024-12-03 00:57:32.761468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:44472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:07.761 [2024-12-03 00:57:32.761484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:24:07.761 [2024-12-03 00:57:32.761506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:44480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:07.761 [2024-12-03 00:57:32.761523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 
cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:24:07.761 [2024-12-03 00:57:32.761545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:43872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.761 [2024-12-03 00:57:32.761568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:24:07.761 [2024-12-03 00:57:32.762378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:43912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.761 [2024-12-03 00:57:32.762409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:24:07.761 [2024-12-03 00:57:32.762466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:43936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.761 [2024-12-03 00:57:32.762493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:24:07.761 [2024-12-03 00:57:32.762518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:43968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.761 [2024-12-03 00:57:32.762536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:24:07.761 [2024-12-03 00:57:32.762573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:43992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.761 [2024-12-03 00:57:32.762589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:24:07.761 [2024-12-03 00:57:32.762611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:44000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.761 [2024-12-03 00:57:32.762630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:24:07.761 [2024-12-03 00:57:32.762652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:44040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.761 [2024-12-03 00:57:32.762679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:24:07.761 [2024-12-03 00:57:32.762705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:44048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.761 [2024-12-03 00:57:32.762723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:24:07.761 [2024-12-03 00:57:32.762745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:44488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.761 [2024-12-03 00:57:32.762762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:24:07.761 [2024-12-03 00:57:32.762784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:44496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.761 [2024-12-03 00:57:32.762801] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:24:07.761 [2024-12-03 00:57:32.762823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:44504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.761 [2024-12-03 00:57:32.762840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:24:07.761 [2024-12-03 00:57:32.762862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:44512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:07.761 [2024-12-03 00:57:32.762879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:24:07.761 [2024-12-03 00:57:32.762901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:44520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.761 [2024-12-03 00:57:32.762917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:24:07.761 [2024-12-03 00:57:32.762939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:44528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:07.761 [2024-12-03 00:57:32.762956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:24:07.761 [2024-12-03 00:57:32.762978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:44536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.761 [2024-12-03 00:57:32.762994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:24:07.761 [2024-12-03 00:57:32.763015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:44544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:07.761 [2024-12-03 00:57:32.763032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:24:07.761 [2024-12-03 00:57:32.763054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:44552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:07.761 [2024-12-03 00:57:32.763072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:24:07.761 [2024-12-03 00:57:32.763095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:44560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.761 [2024-12-03 00:57:32.763111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:24:07.761 [2024-12-03 00:57:32.763133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:44568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.761 [2024-12-03 00:57:32.763149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:24:07.761 [2024-12-03 00:57:32.763183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:44576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.761 [2024-12-03 
00:57:32.763201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:24:07.761 [2024-12-03 00:57:32.763223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:44584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.761 [2024-12-03 00:57:32.763240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:07.761 [2024-12-03 00:57:32.763262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:44592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:07.761 [2024-12-03 00:57:32.763278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:07.761 [2024-12-03 00:57:32.763300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:44600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:07.761 [2024-12-03 00:57:32.763317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:24:07.761 [2024-12-03 00:57:32.763338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:44608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:07.761 [2024-12-03 00:57:32.763355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:24:07.761 [2024-12-03 00:57:32.763376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:44616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:07.761 [2024-12-03 00:57:32.763393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:24:07.761 [2024-12-03 00:57:32.763415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:44624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.761 [2024-12-03 00:57:32.763447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:24:07.761 [2024-12-03 00:57:32.763473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:44632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:07.761 [2024-12-03 00:57:32.763490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:24:07.761 [2024-12-03 00:57:32.763512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:44072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:07.761 [2024-12-03 00:57:32.763528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:24:07.761 [2024-12-03 00:57:32.763550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:44080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:07.761 [2024-12-03 00:57:32.763566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:24:07.761 [2024-12-03 00:57:32.763588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:44088 len:8 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:24:07.761 [2024-12-03 00:57:32.763605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:24:07.761 [2024-12-03 00:57:32.763626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:44096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.761 [2024-12-03 00:57:32.763642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:24:07.761 [2024-12-03 00:57:32.763674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:43536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.761 [2024-12-03 00:57:32.763693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:24:07.762 [2024-12-03 00:57:32.763715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:43544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.762 [2024-12-03 00:57:32.763732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:24:07.762 [2024-12-03 00:57:32.763753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:43584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.762 [2024-12-03 00:57:32.763770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:24:07.762 [2024-12-03 00:57:32.763791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:43600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.762 [2024-12-03 00:57:32.763808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:24:07.762 [2024-12-03 00:57:32.763829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:43608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.762 [2024-12-03 00:57:32.763845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:24:07.762 [2024-12-03 00:57:32.763867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:43616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.762 [2024-12-03 00:57:32.763884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:24:07.762 [2024-12-03 00:57:32.763905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:43632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.762 [2024-12-03 00:57:32.763921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:24:07.762 [2024-12-03 00:57:32.763943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:43664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.762 [2024-12-03 00:57:32.763959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:24:07.762 [2024-12-03 00:57:32.763981] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:35 nsid:1 lba:44104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.762 [2024-12-03 00:57:32.763998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:24:07.762 [2024-12-03 00:57:32.764019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:44112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:07.762 [2024-12-03 00:57:32.764035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:24:07.762 [2024-12-03 00:57:32.764057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:44120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:07.762 [2024-12-03 00:57:32.764074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:24:07.762 [2024-12-03 00:57:32.764096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:44128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:07.762 [2024-12-03 00:57:32.764112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:24:07.762 [2024-12-03 00:57:32.764134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:44136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.762 [2024-12-03 00:57:32.764159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:24:07.762 [2024-12-03 00:57:32.764182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:44144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:07.762 [2024-12-03 00:57:32.764199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:24:07.762 [2024-12-03 00:57:32.764221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:44152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:07.762 [2024-12-03 00:57:32.764244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:24:07.762 [2024-12-03 00:57:32.764267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:44160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:07.762 [2024-12-03 00:57:32.764285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:24:07.762 [2024-12-03 00:57:32.764307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:44168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:07.762 [2024-12-03 00:57:32.764324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:24:07.762 [2024-12-03 00:57:32.764345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:44176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.762 [2024-12-03 00:57:32.764361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:24:07.762 [2024-12-03 
00:57:32.764383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:44184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.762 [2024-12-03 00:57:32.764400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:24:07.762 [2024-12-03 00:57:32.764435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:44192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.762 [2024-12-03 00:57:32.764455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:24:07.762 [2024-12-03 00:57:32.764477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:44200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.762 [2024-12-03 00:57:32.764494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:24:07.762 [2024-12-03 00:57:32.764516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:44208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:07.762 [2024-12-03 00:57:32.764533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:24:07.762 [2024-12-03 00:57:32.764554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:44216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:07.762 [2024-12-03 00:57:32.764571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:07.762 [2024-12-03 00:57:32.764592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:44224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.762 [2024-12-03 00:57:32.764609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:07.762 [2024-12-03 00:57:32.764631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:44232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:07.762 [2024-12-03 00:57:32.764657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:24:07.762 [2024-12-03 00:57:32.764680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:43720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.762 [2024-12-03 00:57:32.764698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:24:07.762 [2024-12-03 00:57:32.765204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:43744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.762 [2024-12-03 00:57:32.765232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:24:07.762 [2024-12-03 00:57:39.243637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:9320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:07.762 [2024-12-03 00:57:39.243686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 
cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:24:07.762 [2024-12-03 00:57:39.243731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:9328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:07.762 [2024-12-03 00:57:39.243755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:24:07.762 [2024-12-03 00:57:39.243778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:9336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.762 [2024-12-03 00:57:39.243796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:24:07.762 [2024-12-03 00:57:39.243829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:9344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:07.762 [2024-12-03 00:57:39.243847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:07.762 [2024-12-03 00:57:39.243870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:9352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:07.762 [2024-12-03 00:57:39.243887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:07.762 [2024-12-03 00:57:39.243910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:9360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:07.762 [2024-12-03 00:57:39.243927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:24:07.762 [2024-12-03 00:57:39.243948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:9368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.762 [2024-12-03 00:57:39.243964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:24:07.762 [2024-12-03 00:57:39.243986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:9376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:07.762 [2024-12-03 00:57:39.244002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:24:07.762 [2024-12-03 00:57:39.244024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:9384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:07.762 [2024-12-03 00:57:39.244040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:24:07.762 [2024-12-03 00:57:39.244062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:9392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.762 [2024-12-03 00:57:39.244078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:24:07.762 [2024-12-03 00:57:39.244119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:9400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:07.762 [2024-12-03 00:57:39.244137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC 
ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:24:07.762 [2024-12-03 00:57:39.244371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:9408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:07.762 [2024-12-03 00:57:39.244399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:24:07.762 [2024-12-03 00:57:39.244454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:9416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:07.763 [2024-12-03 00:57:39.244474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:24:07.763 [2024-12-03 00:57:39.244497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:9424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:07.763 [2024-12-03 00:57:39.244515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:24:07.763 [2024-12-03 00:57:39.244538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:9432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:07.763 [2024-12-03 00:57:39.244556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:24:07.763 [2024-12-03 00:57:39.244578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:9440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:07.763 [2024-12-03 00:57:39.244596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:24:07.763 [2024-12-03 00:57:39.244618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:8680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.763 [2024-12-03 00:57:39.244635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:24:07.763 [2024-12-03 00:57:39.244658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:8728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.763 [2024-12-03 00:57:39.244675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:24:07.763 [2024-12-03 00:57:39.244697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:8736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.763 [2024-12-03 00:57:39.244714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:24:07.763 [2024-12-03 00:57:39.244736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:8784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.763 [2024-12-03 00:57:39.244753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:24:07.763 [2024-12-03 00:57:39.244775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:8800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.763 [2024-12-03 00:57:39.244792] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:24:07.763 [2024-12-03 00:57:39.244814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:8808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.763 [2024-12-03 00:57:39.244831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:24:07.763 [2024-12-03 00:57:39.244867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:8832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.763 [2024-12-03 00:57:39.244886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:24:07.763 [2024-12-03 00:57:39.244909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:8840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.763 [2024-12-03 00:57:39.244927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:24:07.763 [2024-12-03 00:57:39.244949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:8848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.763 [2024-12-03 00:57:39.244967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:24:07.763 [2024-12-03 00:57:39.244989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:8856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.763 [2024-12-03 00:57:39.245006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:24:07.763 [2024-12-03 00:57:39.245028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:8912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.763 [2024-12-03 00:57:39.245046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:24:07.763 [2024-12-03 00:57:39.245069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:8920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.763 [2024-12-03 00:57:39.245087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:24:07.763 [2024-12-03 00:57:39.245121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:8944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.763 [2024-12-03 00:57:39.245137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:24:07.763 [2024-12-03 00:57:39.245160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:8952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.763 [2024-12-03 00:57:39.245177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:24:07.763 [2024-12-03 00:57:39.245200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:8984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:24:07.763 [2024-12-03 00:57:39.245217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:24:07.763 [2024-12-03 00:57:39.245239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:9000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.763 [2024-12-03 00:57:39.245256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:24:07.763 [2024-12-03 00:57:39.245278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:9448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.763 [2024-12-03 00:57:39.245295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:24:07.763 [2024-12-03 00:57:39.245318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:9456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.763 [2024-12-03 00:57:39.245335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:24:07.763 [2024-12-03 00:57:39.245357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:9464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.763 [2024-12-03 00:57:39.245383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:07.763 [2024-12-03 00:57:39.245408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.763 [2024-12-03 00:57:39.245461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:07.763 [2024-12-03 00:57:39.245485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:9480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.763 [2024-12-03 00:57:39.245502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:24:07.763 [2024-12-03 00:57:39.245525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:9488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:07.763 [2024-12-03 00:57:39.245542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:24:07.763 [2024-12-03 00:57:39.245564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:9496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.763 [2024-12-03 00:57:39.245581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:24:07.763 [2024-12-03 00:57:39.245604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:9504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.763 [2024-12-03 00:57:39.245621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:24:07.763 [2024-12-03 00:57:39.245643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:9512 
len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.763 [2024-12-03 00:57:39.245660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:24:07.763 [2024-12-03 00:57:39.245683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:9520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.763 [2024-12-03 00:57:39.245700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:24:07.763 [2024-12-03 00:57:39.245722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:9528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:07.763 [2024-12-03 00:57:39.245739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:24:07.763 [2024-12-03 00:57:39.245761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:9536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:07.763 [2024-12-03 00:57:39.245790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:24:07.763 [2024-12-03 00:57:39.245825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:9544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.763 [2024-12-03 00:57:39.245842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:24:07.763 [2024-12-03 00:57:39.245864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:9024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.763 [2024-12-03 00:57:39.245881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:24:07.763 [2024-12-03 00:57:39.245903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:9056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.763 [2024-12-03 00:57:39.245929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:24:07.763 [2024-12-03 00:57:39.245954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:9064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.763 [2024-12-03 00:57:39.245972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:24:07.763 [2024-12-03 00:57:39.245995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:9072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.763 [2024-12-03 00:57:39.246012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:24:07.763 [2024-12-03 00:57:39.246034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:9088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.763 [2024-12-03 00:57:39.246051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:24:07.763 [2024-12-03 00:57:39.246075] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:58 nsid:1 lba:9120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.763 [2024-12-03 00:57:39.246092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:24:07.763 [2024-12-03 00:57:39.246115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:9144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.764 [2024-12-03 00:57:39.246133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:24:07.764 [2024-12-03 00:57:39.246156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:9152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.764 [2024-12-03 00:57:39.246174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:24:07.764 [2024-12-03 00:57:39.246312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:9552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:07.764 [2024-12-03 00:57:39.246338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:24:07.764 [2024-12-03 00:57:39.246367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:9560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:07.764 [2024-12-03 00:57:39.246386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:24:07.764 [2024-12-03 00:57:39.246426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:9568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:07.764 [2024-12-03 00:57:39.246447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:24:07.764 [2024-12-03 00:57:39.246474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:9576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:07.764 [2024-12-03 00:57:39.246492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:24:07.764 [2024-12-03 00:57:39.246517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:9584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.764 [2024-12-03 00:57:39.246534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:24:07.764 [2024-12-03 00:57:39.246560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:9592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:07.764 [2024-12-03 00:57:39.246578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:24:07.764 [2024-12-03 00:57:39.246616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:07.764 [2024-12-03 00:57:39.246634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:24:07.764 [2024-12-03 00:57:39.246659] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:9608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:07.764 [2024-12-03 00:57:39.246676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:24:07.764 [2024-12-03 00:57:39.246701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:9616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:07.764 [2024-12-03 00:57:39.246723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:24:07.764 [2024-12-03 00:57:39.246760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:9624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:07.764 [2024-12-03 00:57:39.246777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:24:07.764 [2024-12-03 00:57:39.246802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:9632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:07.764 [2024-12-03 00:57:39.246820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:24:07.764 [2024-12-03 00:57:39.246846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:9640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:07.764 [2024-12-03 00:57:39.246863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:24:07.764 [2024-12-03 00:57:39.246889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:9648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:07.764 [2024-12-03 00:57:39.246906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:24:07.764 [2024-12-03 00:57:39.246931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:9656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:07.764 [2024-12-03 00:57:39.246949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:24:07.764 [2024-12-03 00:57:39.246973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:9664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:07.764 [2024-12-03 00:57:39.246991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:07.764 [2024-12-03 00:57:39.247029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:9672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.764 [2024-12-03 00:57:39.247046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:07.764 [2024-12-03 00:57:39.247071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:9680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.764 [2024-12-03 00:57:39.247089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 
00:24:07.764 [2024-12-03 00:57:39.247114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:9688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.764 [2024-12-03 00:57:39.247131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:24:07.764 [2024-12-03 00:57:39.247165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:9696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:07.764 [2024-12-03 00:57:39.247184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:24:07.764 [2024-12-03 00:57:39.247209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:9704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:07.764 [2024-12-03 00:57:39.247227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:24:07.764 [2024-12-03 00:57:39.247252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:9712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.764 [2024-12-03 00:57:39.247270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:24:07.764 [2024-12-03 00:57:39.247295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:9720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:07.764 [2024-12-03 00:57:39.247312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:24:07.764 [2024-12-03 00:57:39.247342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:9160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.764 [2024-12-03 00:57:39.247359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:24:07.764 [2024-12-03 00:57:39.247388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:9168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.764 [2024-12-03 00:57:39.247405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:24:07.764 [2024-12-03 00:57:39.247467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:9176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.764 [2024-12-03 00:57:39.247487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:24:07.764 [2024-12-03 00:57:39.247512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:9184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.764 [2024-12-03 00:57:39.247529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:24:07.764 [2024-12-03 00:57:39.247554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:9200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.764 [2024-12-03 00:57:39.247574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) 
qid:1 cid:85 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:24:07.764 [2024-12-03 00:57:39.247600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:9208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.764 [2024-12-03 00:57:39.247617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:24:07.764 [2024-12-03 00:57:39.247644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:9216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.764 [2024-12-03 00:57:39.247661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:24:07.764 [2024-12-03 00:57:39.247686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:9224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.764 [2024-12-03 00:57:39.247703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:24:07.764 [2024-12-03 00:57:39.247728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:9728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:07.764 [2024-12-03 00:57:39.247761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:24:07.764 [2024-12-03 00:57:39.247788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:9736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.764 [2024-12-03 00:57:39.247805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:24:07.764 [2024-12-03 00:57:39.247830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:9744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.764 [2024-12-03 00:57:39.247847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:24:07.764 [2024-12-03 00:57:39.247871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:9752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:07.764 [2024-12-03 00:57:39.247889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:24:07.764 [2024-12-03 00:57:39.247913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:9760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.764 [2024-12-03 00:57:39.247930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:24:07.764 [2024-12-03 00:57:39.247954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:9768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:07.764 [2024-12-03 00:57:39.247971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:24:07.764 [2024-12-03 00:57:39.247995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:9776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:07.764 [2024-12-03 00:57:39.248012] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:24:07.765 [2024-12-03 00:57:39.248037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:9784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:07.765 [2024-12-03 00:57:39.248054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:24:07.765 [2024-12-03 00:57:39.248078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:9792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.765 [2024-12-03 00:57:39.248094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:24:07.765 [2024-12-03 00:57:39.248119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:9800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:07.765 [2024-12-03 00:57:39.248136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:24:07.765 [2024-12-03 00:57:39.248161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:9808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.765 [2024-12-03 00:57:39.248177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:24:07.765 [2024-12-03 00:57:39.248204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:9816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.765 [2024-12-03 00:57:39.248221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:24:07.765 [2024-12-03 00:57:39.248246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:9824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:07.765 [2024-12-03 00:57:39.248263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:24:07.765 [2024-12-03 00:57:39.248493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:9832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:07.765 [2024-12-03 00:57:39.248520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:24:07.765 [2024-12-03 00:57:39.248553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:9840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.765 [2024-12-03 00:57:39.248573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:24:07.765 [2024-12-03 00:57:39.248601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:9848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:07.765 [2024-12-03 00:57:39.248618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:24:07.765 [2024-12-03 00:57:39.248646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:9856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.765 [2024-12-03 00:57:39.248663] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:07.765 [2024-12-03 00:57:39.248691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:9240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.765 [2024-12-03 00:57:39.248708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:24:07.765 [2024-12-03 00:57:39.248735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:9256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.765 [2024-12-03 00:57:39.248752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:24:07.765 [2024-12-03 00:57:39.248787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:9264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.765 [2024-12-03 00:57:39.248809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:24:07.765 [2024-12-03 00:57:39.248835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:9272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.765 [2024-12-03 00:57:39.248852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:24:07.765 [2024-12-03 00:57:39.248885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:9288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.765 [2024-12-03 00:57:39.248902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:24:07.765 [2024-12-03 00:57:39.248930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:9296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.765 [2024-12-03 00:57:39.248958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:24:07.765 [2024-12-03 00:57:39.248985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:9304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.765 [2024-12-03 00:57:39.249001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:24:07.765 [2024-12-03 00:57:39.249029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:9312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.765 [2024-12-03 00:57:39.249045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:24:07.765 [2024-12-03 00:57:39.249087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:9864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:07.765 [2024-12-03 00:57:39.249106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:24:07.765 [2024-12-03 00:57:39.249133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:9872 len:8 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:24:07.765 [2024-12-03 00:57:39.249150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:24:07.765 [2024-12-03 00:57:39.249177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:9880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:07.765 [2024-12-03 00:57:39.249195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:24:07.765 [2024-12-03 00:57:39.249222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:9888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:07.765 [2024-12-03 00:57:39.249239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:24:07.765 [2024-12-03 00:57:39.249266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:9896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:07.765 [2024-12-03 00:57:39.249283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:24:07.765 [2024-12-03 00:57:39.249310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:9904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:07.765 [2024-12-03 00:57:39.249336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:24:07.765 [2024-12-03 00:57:39.249363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:9912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.765 [2024-12-03 00:57:39.249390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:24:07.765 [2024-12-03 00:57:39.249437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:9920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.765 [2024-12-03 00:57:39.249458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:24:07.765 [2024-12-03 00:57:39.249486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:9928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.765 [2024-12-03 00:57:39.249503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:24:07.765 [2024-12-03 00:57:39.249532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:9936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.765 [2024-12-03 00:57:39.249548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:24:07.765 [2024-12-03 00:57:39.249576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:9944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.765 [2024-12-03 00:57:39.249593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:24:07.765 [2024-12-03 00:57:39.249620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 
lba:9952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:07.765 [2024-12-03 00:57:39.249637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:24:07.765 [2024-12-03 00:57:39.249664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:9960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:07.765 [2024-12-03 00:57:39.249690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:24:07.765 [2024-12-03 00:57:39.249719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:07.765 [2024-12-03 00:57:39.249743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:24:07.765 [2024-12-03 00:57:39.249770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:9976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:07.765 [2024-12-03 00:57:39.249788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:24:07.765 [2024-12-03 00:57:39.249814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:9984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:07.765 [2024-12-03 00:57:39.249842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:24:07.765 [2024-12-03 00:57:39.249869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:9992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.765 [2024-12-03 00:57:39.249885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:24:07.766 [2024-12-03 00:57:39.249912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:10000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:07.766 [2024-12-03 00:57:39.249929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:24:07.766 [2024-12-03 00:57:39.249956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:10008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:07.766 [2024-12-03 00:57:39.249974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:24:07.766 [2024-12-03 00:57:39.250001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:10016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.766 [2024-12-03 00:57:39.250018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:24:07.766 [2024-12-03 00:57:46.303505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:70984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:07.766 [2024-12-03 00:57:46.303631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:24:07.766 [2024-12-03 00:57:46.303958] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:70992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.766 [2024-12-03 00:57:46.303987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:24:07.766 [2024-12-03 00:57:46.304015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:71000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:07.766 [2024-12-03 00:57:46.304034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:24:07.766 [2024-12-03 00:57:46.304056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:71008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.766 [2024-12-03 00:57:46.304073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:24:07.766 [2024-12-03 00:57:46.304095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:71016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:07.766 [2024-12-03 00:57:46.304141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:24:07.766 [2024-12-03 00:57:46.304165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:71024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.766 [2024-12-03 00:57:46.304182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:24:07.766 [2024-12-03 00:57:46.304203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:71032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:07.766 [2024-12-03 00:57:46.304220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:24:07.766 [2024-12-03 00:57:46.304242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:71040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.766 [2024-12-03 00:57:46.304259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:24:07.766 [2024-12-03 00:57:46.304279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:71048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.766 [2024-12-03 00:57:46.304296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:24:07.766 [2024-12-03 00:57:46.304318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:71056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.766 [2024-12-03 00:57:46.304336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:24:07.766 [2024-12-03 00:57:46.304358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:71064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:07.766 [2024-12-03 00:57:46.304375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 
00:24:07.766 [2024-12-03 00:57:46.304397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:70368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.766 [2024-12-03 00:57:46.304430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:24:07.766 [2024-12-03 00:57:46.304455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:70384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.766 [2024-12-03 00:57:46.304472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:24:07.766 [2024-12-03 00:57:46.304495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:70408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.766 [2024-12-03 00:57:46.304513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:24:07.766 [2024-12-03 00:57:46.304535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:70424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.766 [2024-12-03 00:57:46.304552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:24:07.766 [2024-12-03 00:57:46.304574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:70432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.766 [2024-12-03 00:57:46.304590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:24:07.766 [2024-12-03 00:57:46.304611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:70440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.766 [2024-12-03 00:57:46.304628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:24:07.766 [2024-12-03 00:57:46.304667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:70448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.766 [2024-12-03 00:57:46.304686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:24:07.766 [2024-12-03 00:57:46.304712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:70456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.766 [2024-12-03 00:57:46.304736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:07.766 [2024-12-03 00:57:46.304758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:70472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.766 [2024-12-03 00:57:46.304775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:24:07.766 [2024-12-03 00:57:46.304797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:70480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.766 [2024-12-03 00:57:46.304815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:24:07.766 [2024-12-03 00:57:46.304836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:70488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.766 [2024-12-03 00:57:46.304853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:24:07.766 [2024-12-03 00:57:46.304875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:70496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.766 [2024-12-03 00:57:46.304892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:24:07.766 [2024-12-03 00:57:46.304915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:70512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.766 [2024-12-03 00:57:46.304932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:24:07.766 [2024-12-03 00:57:46.304953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:70520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.766 [2024-12-03 00:57:46.304971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:24:07.766 [2024-12-03 00:57:46.304994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:70536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.766 [2024-12-03 00:57:46.305010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:24:07.766 [2024-12-03 00:57:46.305034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:70552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.766 [2024-12-03 00:57:46.305050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:24:07.766 [2024-12-03 00:57:46.305072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:71072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.766 [2024-12-03 00:57:46.305089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:24:07.766 [2024-12-03 00:57:46.305112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:71080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:07.766 [2024-12-03 00:57:46.305128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:24:07.766 [2024-12-03 00:57:46.305161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:71088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.766 [2024-12-03 00:57:46.305179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:24:07.766 [2024-12-03 00:57:46.305202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:71096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.766 [2024-12-03 00:57:46.305221] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:24:07.766 [2024-12-03 00:57:46.305242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:71104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.766 [2024-12-03 00:57:46.305261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:24:07.766 [2024-12-03 00:57:46.305284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:71112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.766 [2024-12-03 00:57:46.305301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:24:07.766 [2024-12-03 00:57:46.305323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:71120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.766 [2024-12-03 00:57:46.305340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:24:07.766 [2024-12-03 00:57:46.305362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:71128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.766 [2024-12-03 00:57:46.305379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:24:07.766 [2024-12-03 00:57:46.305402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:70568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.766 [2024-12-03 00:57:46.305434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:24:07.767 [2024-12-03 00:57:46.305469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:70576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.767 [2024-12-03 00:57:46.305486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:24:07.767 [2024-12-03 00:57:46.305510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:70584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.767 [2024-12-03 00:57:46.305527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:24:07.767 [2024-12-03 00:57:46.305549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:70592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.767 [2024-12-03 00:57:46.305566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:24:07.767 [2024-12-03 00:57:46.305589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:70600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.767 [2024-12-03 00:57:46.305605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:24:07.767 [2024-12-03 00:57:46.305628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:70608 len:8 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:24:07.767 [2024-12-03 00:57:46.305645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:24:07.767 [2024-12-03 00:57:46.305677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:70632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.767 [2024-12-03 00:57:46.305695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:24:07.767 [2024-12-03 00:57:46.305717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:70640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.767 [2024-12-03 00:57:46.305734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:24:07.767 [2024-12-03 00:57:46.305757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:71136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:07.767 [2024-12-03 00:57:46.305774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:24:07.767 [2024-12-03 00:57:46.305796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:71144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:07.767 [2024-12-03 00:57:46.305813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:24:07.767 [2024-12-03 00:57:46.305836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:71152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.767 [2024-12-03 00:57:46.305853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:24:07.767 [2024-12-03 00:57:46.306121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:71160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:07.767 [2024-12-03 00:57:46.306145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:24:07.767 [2024-12-03 00:57:46.306172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:71168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:07.767 [2024-12-03 00:57:46.306190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:24:07.767 [2024-12-03 00:57:46.306225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:71176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:07.767 [2024-12-03 00:57:46.306247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:24:07.767 [2024-12-03 00:57:46.306270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:71184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:07.767 [2024-12-03 00:57:46.306288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:24:07.767 [2024-12-03 00:57:46.306312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:50 nsid:1 lba:71192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.767 [2024-12-03 00:57:46.306330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:07.767 [2024-12-03 00:57:46.306353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:71200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:07.767 [2024-12-03 00:57:46.306371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:07.767 [2024-12-03 00:57:46.306395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:71208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.767 [2024-12-03 00:57:46.306424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:24:07.767 [2024-12-03 00:57:46.306452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:71216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:07.767 [2024-12-03 00:57:46.306481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:24:07.767 [2024-12-03 00:57:46.306507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:71224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:07.767 [2024-12-03 00:57:46.306524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:24:07.767 [2024-12-03 00:57:46.306552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:71232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.767 [2024-12-03 00:57:46.306570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:24:07.767 [2024-12-03 00:57:46.306593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:71240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:07.767 [2024-12-03 00:57:46.306610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:24:07.767 [2024-12-03 00:57:46.306768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:71248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:07.767 [2024-12-03 00:57:46.306793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:24:07.767 [2024-12-03 00:57:46.306824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:71256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:07.767 [2024-12-03 00:57:46.306843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:24:07.767 [2024-12-03 00:57:46.306871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:71264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.767 [2024-12-03 00:57:46.306889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:24:07.767 [2024-12-03 00:57:46.306915] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:70656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.767 [2024-12-03 00:57:46.306934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:24:07.767 [2024-12-03 00:57:46.306961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:70696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.767 [2024-12-03 00:57:46.306979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:24:07.767 [2024-12-03 00:57:46.307007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:70704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.767 [2024-12-03 00:57:46.307024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:24:07.767 [2024-12-03 00:57:46.307050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:70720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.767 [2024-12-03 00:57:46.307068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:24:07.767 [2024-12-03 00:57:46.307095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:70736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.767 [2024-12-03 00:57:46.307114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:24:07.767 [2024-12-03 00:57:46.307140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:70760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.767 [2024-12-03 00:57:46.307168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:24:07.767 [2024-12-03 00:57:46.307199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:70768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.767 [2024-12-03 00:57:46.307218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:24:07.767 [2024-12-03 00:57:46.307245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:70776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.767 [2024-12-03 00:57:46.307263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:24:07.767 [2024-12-03 00:57:46.307289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:71272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:07.767 [2024-12-03 00:57:46.307307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:24:07.767 [2024-12-03 00:57:46.307334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:71280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:07.767 [2024-12-03 00:57:46.307353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0074 
p:0 m:0 dnr:0 00:24:07.767 [2024-12-03 00:57:46.307379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:71288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.767 [2024-12-03 00:57:46.307397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:24:07.767 [2024-12-03 00:57:46.307441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:70792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.767 [2024-12-03 00:57:46.307463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:24:07.767 [2024-12-03 00:57:46.307490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:70808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.767 [2024-12-03 00:57:46.307508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:24:07.767 [2024-12-03 00:57:46.307535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:70840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.767 [2024-12-03 00:57:46.307552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:24:07.767 [2024-12-03 00:57:46.307578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:70864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.767 [2024-12-03 00:57:46.307596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:24:07.768 [2024-12-03 00:57:46.307622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:70880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.768 [2024-12-03 00:57:46.307639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:24:07.768 [2024-12-03 00:57:46.307666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:70896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.768 [2024-12-03 00:57:46.307683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:24:07.768 [2024-12-03 00:57:46.307710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:70944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.768 [2024-12-03 00:57:46.307728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:24:07.768 [2024-12-03 00:57:46.307764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:70952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.768 [2024-12-03 00:57:46.307783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:24:07.768 [2024-12-03 00:57:46.307808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:71296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.768 [2024-12-03 00:57:46.307827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC 
ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:24:07.768 [2024-12-03 00:57:46.307854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:71304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:07.768 [2024-12-03 00:57:46.307873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:24:07.768 [2024-12-03 00:57:46.307899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:71312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.768 [2024-12-03 00:57:46.307917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:07.768 [2024-12-03 00:57:46.307944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:71320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:07.768 [2024-12-03 00:57:46.307962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:07.768 [2024-12-03 00:57:46.307988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:71328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.768 [2024-12-03 00:57:46.308006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:24:07.768 [2024-12-03 00:57:46.308032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:71336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:07.768 [2024-12-03 00:57:46.308049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:24:07.768 [2024-12-03 00:57:46.308077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:71344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:07.768 [2024-12-03 00:57:46.308094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:24:07.768 [2024-12-03 00:57:46.308120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:71352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:07.768 [2024-12-03 00:57:46.308138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:24:07.768 [2024-12-03 00:57:46.308164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:71360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.768 [2024-12-03 00:57:46.308182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:24:07.768 [2024-12-03 00:57:46.308208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:71368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.768 [2024-12-03 00:57:46.308225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:24:07.768 [2024-12-03 00:57:46.308252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:71376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:07.768 [2024-12-03 00:57:46.308271] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:24:07.768 [2024-12-03 00:57:46.308307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:71384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.768 [2024-12-03 00:57:46.308326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:24:07.768 [2024-12-03 00:57:46.308352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:71392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.768 [2024-12-03 00:57:46.308370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:24:07.768 [2024-12-03 00:57:46.308397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:71400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:07.768 [2024-12-03 00:57:46.308428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:24:07.768 [2024-12-03 00:57:46.308460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:71408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:07.768 [2024-12-03 00:57:46.308478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:24:07.768 [2024-12-03 00:57:46.308504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:71416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.768 [2024-12-03 00:57:46.308533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:24:07.768 [2024-12-03 00:57:46.308561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:71424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.768 [2024-12-03 00:57:46.308581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:24:07.768 [2024-12-03 00:57:46.308608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:71432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.768 [2024-12-03 00:57:46.308625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:24:07.768 [2024-12-03 00:57:46.308653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:71440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.768 [2024-12-03 00:57:46.308671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:24:07.768 [2024-12-03 00:57:46.308697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:71448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:07.768 [2024-12-03 00:57:46.308716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:24:07.768 [2024-12-03 00:57:46.308742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:71456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:24:07.768 [2024-12-03 00:57:46.308760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:24:07.768 [2024-12-03 00:57:46.308788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:71464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.768 [2024-12-03 00:57:46.308805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:24:07.768 [2024-12-03 00:57:46.308833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:71472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:07.768 [2024-12-03 00:57:46.308850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:24:07.768 [2024-12-03 00:57:46.308876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:71480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:07.768 [2024-12-03 00:57:46.308904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:24:07.768 [2024-12-03 00:57:46.308932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:71488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:07.768 [2024-12-03 00:57:46.308950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:24:07.768 [2024-12-03 00:57:46.308977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:71496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.768 [2024-12-03 00:57:46.308994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:24:07.768 [2024-12-03 00:57:46.309021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:71504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:07.768 [2024-12-03 00:57:46.309039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:24:07.768 [2024-12-03 00:57:46.309065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:71512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.768 [2024-12-03 00:57:46.309082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:24:07.768 [2024-12-03 00:57:46.309108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:71520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:07.768 [2024-12-03 00:57:46.309126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:24:07.768 [2024-12-03 00:57:46.309153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:71528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:07.768 [2024-12-03 00:57:46.309172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:24:07.768 [2024-12-03 00:57:46.309198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 
lba:71536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.768 [2024-12-03 00:57:46.309215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:24:07.768 [2024-12-03 00:57:46.309241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:71544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.768 [2024-12-03 00:57:46.309265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:24:07.768 [2024-12-03 00:57:46.309293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:71552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:07.768 [2024-12-03 00:57:46.309311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:24:07.768 [2024-12-03 00:57:59.616703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:21848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:07.768 [2024-12-03 00:57:59.616777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:24:07.768 [2024-12-03 00:57:59.616846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:21856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.768 [2024-12-03 00:57:59.616870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:24:07.769 [2024-12-03 00:57:59.616895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:21864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.769 [2024-12-03 00:57:59.616944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:24:07.769 [2024-12-03 00:57:59.616970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:21872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:07.769 [2024-12-03 00:57:59.616988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:24:07.769 [2024-12-03 00:57:59.617011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:21880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:07.769 [2024-12-03 00:57:59.617028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:24:07.769 [2024-12-03 00:57:59.617049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:21888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.769 [2024-12-03 00:57:59.617066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:24:07.769 [2024-12-03 00:57:59.617087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:21896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:07.769 [2024-12-03 00:57:59.617103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:24:07.769 [2024-12-03 00:57:59.617124] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:21904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:07.769 [2024-12-03 00:57:59.617140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:24:07.769 [2024-12-03 00:57:59.617161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:21912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:07.769 [2024-12-03 00:57:59.617177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:24:07.769 [2024-12-03 00:57:59.617199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:21920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.769 [2024-12-03 00:57:59.617215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:24:07.769 [2024-12-03 00:57:59.617236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:21928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.769 [2024-12-03 00:57:59.617252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:24:07.769 [2024-12-03 00:57:59.617273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:21936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:07.769 [2024-12-03 00:57:59.617289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:24:07.769 [2024-12-03 00:57:59.617310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:21944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.769 [2024-12-03 00:57:59.617326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:24:07.769 [2024-12-03 00:57:59.617348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:21952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:07.769 [2024-12-03 00:57:59.617365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:24:07.769 [2024-12-03 00:57:59.617386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:21960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.769 [2024-12-03 00:57:59.617405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:24:07.769 [2024-12-03 00:57:59.617477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:21200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.769 [2024-12-03 00:57:59.617497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:24:07.769 [2024-12-03 00:57:59.617521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:21208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.769 [2024-12-03 00:57:59.617548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 
00:24:07.769 [2024-12-03 00:57:59.617572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:21224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.769 [2024-12-03 00:57:59.617589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:24:07.769 [2024-12-03 00:57:59.617611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:21232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.769 [2024-12-03 00:57:59.617629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:24:07.769 [2024-12-03 00:57:59.617650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:21256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.769 [2024-12-03 00:57:59.617667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:24:07.769 [2024-12-03 00:57:59.617688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:21264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.769 [2024-12-03 00:57:59.617705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:24:07.769 [2024-12-03 00:57:59.617726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.769 [2024-12-03 00:57:59.617743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:24:07.769 [2024-12-03 00:57:59.617765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:21296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.769 [2024-12-03 00:57:59.617781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:24:07.769 [2024-12-03 00:57:59.617811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:21304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.769 [2024-12-03 00:57:59.617830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:24:07.769 [2024-12-03 00:57:59.617851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:21312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.769 [2024-12-03 00:57:59.617868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:07.769 [2024-12-03 00:57:59.617890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:21320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.769 [2024-12-03 00:57:59.617906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:07.769 [2024-12-03 00:57:59.617928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:21328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.769 [2024-12-03 00:57:59.617945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:24:07.769 [2024-12-03 00:57:59.618476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.769 [2024-12-03 00:57:59.618505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:07.769 [2024-12-03 00:57:59.618526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:21760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.769 [2024-12-03 00:57:59.618542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:07.769 [2024-12-03 00:57:59.618563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:21768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.769 [2024-12-03 00:57:59.618584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:07.769 [2024-12-03 00:57:59.618600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:21776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.769 [2024-12-03 00:57:59.618615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:07.769 [2024-12-03 00:57:59.618631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:21784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.769 [2024-12-03 00:57:59.618645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:07.769 [2024-12-03 00:57:59.618660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:21792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.769 [2024-12-03 00:57:59.618685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:07.769 [2024-12-03 00:57:59.618701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.769 [2024-12-03 00:57:59.618716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:07.769 [2024-12-03 00:57:59.618731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:21336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.769 [2024-12-03 00:57:59.618756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:07.769 [2024-12-03 00:57:59.618772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:21344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.769 [2024-12-03 00:57:59.618786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:07.769 [2024-12-03 00:57:59.618801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:21352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.769 [2024-12-03 00:57:59.618816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:24:07.769 [2024-12-03 00:57:59.618831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:21360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.769 [2024-12-03 00:57:59.618845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:07.769 [2024-12-03 00:57:59.618860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:21968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:07.769 [2024-12-03 00:57:59.618875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:07.769 [2024-12-03 00:57:59.618890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:21976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.769 [2024-12-03 00:57:59.618915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:07.769 [2024-12-03 00:57:59.618933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:21984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:07.769 [2024-12-03 00:57:59.618947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:07.770 [2024-12-03 00:57:59.618963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:21992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:07.770 [2024-12-03 00:57:59.618977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:07.770 [2024-12-03 00:57:59.618992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:22000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:07.770 [2024-12-03 00:57:59.619006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:07.770 [2024-12-03 00:57:59.619022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:22008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.770 [2024-12-03 00:57:59.619037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:07.770 [2024-12-03 00:57:59.619052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:22016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.770 [2024-12-03 00:57:59.619067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:07.770 [2024-12-03 00:57:59.619083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:22024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:07.770 [2024-12-03 00:57:59.619097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:07.770 [2024-12-03 00:57:59.619113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:22032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:07.770 [2024-12-03 00:57:59.619128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:07.770 
[2024-12-03 00:57:59.619143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:22040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.770 [2024-12-03 00:57:59.619157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:07.770 [2024-12-03 00:57:59.619172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:22048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:07.770 [2024-12-03 00:57:59.619187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:07.770 [2024-12-03 00:57:59.619204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:22056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:07.770 [2024-12-03 00:57:59.619221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:07.770 [2024-12-03 00:57:59.619236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:22064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.770 [2024-12-03 00:57:59.619251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:07.770 [2024-12-03 00:57:59.619266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:21376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.770 [2024-12-03 00:57:59.619280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:07.770 [2024-12-03 00:57:59.619296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:21384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.770 [2024-12-03 00:57:59.619318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:07.770 [2024-12-03 00:57:59.619335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:21400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.770 [2024-12-03 00:57:59.619349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:07.770 [2024-12-03 00:57:59.619365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:21424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.770 [2024-12-03 00:57:59.619379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:07.770 [2024-12-03 00:57:59.619394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:21432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.770 [2024-12-03 00:57:59.619408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:07.770 [2024-12-03 00:57:59.619447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:21440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.770 [2024-12-03 00:57:59.619462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:07.770 [2024-12-03 00:57:59.619478] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:21448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.770 [2024-12-03 00:57:59.619493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:07.770 [2024-12-03 00:57:59.619508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:21480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.770 [2024-12-03 00:57:59.619522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:07.770 [2024-12-03 00:57:59.619537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:22072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:07.770 [2024-12-03 00:57:59.619551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:07.770 [2024-12-03 00:57:59.619578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:21496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.770 [2024-12-03 00:57:59.619593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:07.770 [2024-12-03 00:57:59.619608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:21520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.770 [2024-12-03 00:57:59.619622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:07.770 [2024-12-03 00:57:59.619638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:21528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.770 [2024-12-03 00:57:59.619653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:07.770 [2024-12-03 00:57:59.619679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:21552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.770 [2024-12-03 00:57:59.619693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:07.770 [2024-12-03 00:57:59.619710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:21592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.770 [2024-12-03 00:57:59.619724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:07.770 [2024-12-03 00:57:59.619748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:21608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.770 [2024-12-03 00:57:59.619763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:07.770 [2024-12-03 00:57:59.619790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:21704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.770 [2024-12-03 00:57:59.619812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:07.770 [2024-12-03 00:57:59.619828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:79 nsid:1 lba:21712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.770 [2024-12-03 00:57:59.619842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:07.770 [2024-12-03 00:57:59.619857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:22080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:07.770 [2024-12-03 00:57:59.619872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:07.770 [2024-12-03 00:57:59.619887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:22088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:07.770 [2024-12-03 00:57:59.619902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:07.770 [2024-12-03 00:57:59.619918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:22096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.770 [2024-12-03 00:57:59.619932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:07.770 [2024-12-03 00:57:59.619948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:07.770 [2024-12-03 00:57:59.619962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:07.770 [2024-12-03 00:57:59.619977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:22112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.770 [2024-12-03 00:57:59.619991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:07.770 [2024-12-03 00:57:59.620007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:22120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.770 [2024-12-03 00:57:59.620021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:07.770 [2024-12-03 00:57:59.620036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:22128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.770 [2024-12-03 00:57:59.620050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:07.770 [2024-12-03 00:57:59.620066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:22136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.770 [2024-12-03 00:57:59.620080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:07.770 [2024-12-03 00:57:59.620096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:22144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:07.770 [2024-12-03 00:57:59.620111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:07.771 [2024-12-03 00:57:59.620127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:22152 len:8 SGL 
DATA BLOCK OFFSET 0x0 len:0x1000 00:24:07.771 [2024-12-03 00:57:59.620149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:07.771 [2024-12-03 00:57:59.620166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:22160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.771 [2024-12-03 00:57:59.620181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:07.771 [2024-12-03 00:57:59.620196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:22168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:07.771 [2024-12-03 00:57:59.620211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:07.771 [2024-12-03 00:57:59.620226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:22176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.771 [2024-12-03 00:57:59.620241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:07.771 [2024-12-03 00:57:59.620256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:22184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.771 [2024-12-03 00:57:59.620270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:07.771 [2024-12-03 00:57:59.620286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:22192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:07.771 [2024-12-03 00:57:59.620300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:07.771 [2024-12-03 00:57:59.620315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:21728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.771 [2024-12-03 00:57:59.620330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:07.771 [2024-12-03 00:57:59.620345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:21736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.771 [2024-12-03 00:57:59.620360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:07.771 [2024-12-03 00:57:59.620375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:21744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.771 [2024-12-03 00:57:59.620389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:07.771 [2024-12-03 00:57:59.620405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:21808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.771 [2024-12-03 00:57:59.620447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:07.771 [2024-12-03 00:57:59.620465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:21816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.771 
[2024-12-03 00:57:59.620480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:07.771 [2024-12-03 00:57:59.620495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:21824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.771 [2024-12-03 00:57:59.620511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:07.771 [2024-12-03 00:57:59.620527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:21832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.771 [2024-12-03 00:57:59.620541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:07.771 [2024-12-03 00:57:59.620569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:21840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.771 [2024-12-03 00:57:59.620584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:07.771 [2024-12-03 00:57:59.620600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:22200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:07.771 [2024-12-03 00:57:59.620615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:07.771 [2024-12-03 00:57:59.620630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:22208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:07.771 [2024-12-03 00:57:59.620644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:07.771 [2024-12-03 00:57:59.620659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:22216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:07.771 [2024-12-03 00:57:59.620674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:07.771 [2024-12-03 00:57:59.620689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:22224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.771 [2024-12-03 00:57:59.620703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:07.771 [2024-12-03 00:57:59.620725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:22232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.771 [2024-12-03 00:57:59.620740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:07.771 [2024-12-03 00:57:59.620756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:22240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.771 [2024-12-03 00:57:59.620770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:07.771 [2024-12-03 00:57:59.620785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:22248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.771 [2024-12-03 00:57:59.620808] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:07.771 [2024-12-03 00:57:59.620823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:22256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.771 [2024-12-03 00:57:59.620844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:07.771 [2024-12-03 00:57:59.620860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:22264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:07.771 [2024-12-03 00:57:59.620873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:07.771 [2024-12-03 00:57:59.620889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:22272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:07.771 [2024-12-03 00:57:59.620903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:07.771 [2024-12-03 00:57:59.620918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:22280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:07.771 [2024-12-03 00:57:59.620933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:07.771 [2024-12-03 00:57:59.620948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:22288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.771 [2024-12-03 00:57:59.620969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:07.771 [2024-12-03 00:57:59.620986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:22296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:07.771 [2024-12-03 00:57:59.621000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:07.771 [2024-12-03 00:57:59.621016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:22304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:07.771 [2024-12-03 00:57:59.621030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:07.771 [2024-12-03 00:57:59.621045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:22312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.771 [2024-12-03 00:57:59.621059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:07.771 [2024-12-03 00:57:59.621074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:22320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.771 [2024-12-03 00:57:59.621089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:07.771 [2024-12-03 00:57:59.621105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:22328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:07.771 [2024-12-03 00:57:59.621120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED 
- SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:07.771 [2024-12-03 00:57:59.621135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:22336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.771 [2024-12-03 00:57:59.621157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:07.771 [2024-12-03 00:57:59.621173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:22344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.771 [2024-12-03 00:57:59.621188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:07.771 [2024-12-03 00:57:59.621204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:22352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.771 [2024-12-03 00:57:59.621218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:07.771 [2024-12-03 00:57:59.621239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:22360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.771 [2024-12-03 00:57:59.621254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:07.771 [2024-12-03 00:57:59.621269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:22368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.771 [2024-12-03 00:57:59.621284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:07.771 [2024-12-03 00:57:59.621298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:22376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.771 [2024-12-03 00:57:59.621313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:07.771 [2024-12-03 00:57:59.621328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:22384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.771 [2024-12-03 00:57:59.621342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:07.771 [2024-12-03 00:57:59.621358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:22392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.771 [2024-12-03 00:57:59.621379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:07.772 [2024-12-03 00:57:59.621396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:22400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:07.772 [2024-12-03 00:57:59.621420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:07.772 [2024-12-03 00:57:59.621440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:22408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.772 [2024-12-03 00:57:59.621455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:24:07.772 [2024-12-03 00:57:59.621470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:22416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.772 [2024-12-03 00:57:59.621484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:07.772 [2024-12-03 00:57:59.621500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:22424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.772 [2024-12-03 00:57:59.621514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:07.772 [2024-12-03 00:57:59.621531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:22432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:07.772 [2024-12-03 00:57:59.621545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:07.772 [2024-12-03 00:57:59.621561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:22440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:07.772 [2024-12-03 00:57:59.621575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:07.772 [2024-12-03 00:57:59.621591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:22448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.772 [2024-12-03 00:57:59.621605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:07.772 [2024-12-03 00:57:59.621620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:22456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:07.772 [2024-12-03 00:57:59.621634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:07.772 [2024-12-03 00:57:59.621650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:22464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:07.772 [2024-12-03 00:57:59.621670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:07.772 [2024-12-03 00:57:59.621686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:22472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:07.772 [2024-12-03 00:57:59.621701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:07.772 [2024-12-03 00:57:59.621716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:22480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.772 [2024-12-03 00:57:59.621730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:07.772 [2024-12-03 00:57:59.621751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:22488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.772 [2024-12-03 00:57:59.621766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:07.772 
[2024-12-03 00:57:59.622043] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0xc6c060 was disconnected and freed. reset controller. 00:24:07.772 [2024-12-03 00:57:59.622179] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:24:07.772 [2024-12-03 00:57:59.622205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:07.772 [2024-12-03 00:57:59.622243] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:24:07.772 [2024-12-03 00:57:59.622258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:07.772 [2024-12-03 00:57:59.622273] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:24:07.772 [2024-12-03 00:57:59.622286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:07.772 [2024-12-03 00:57:59.622301] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:24:07.772 [2024-12-03 00:57:59.622314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:07.772 [2024-12-03 00:57:59.622329] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:ffffffff cdw10:0014000c cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.772 [2024-12-03 00:57:59.622343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:07.772 [2024-12-03 00:57:59.622365] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc7da00 is same with the state(5) to be set 00:24:07.772 [2024-12-03 00:57:59.623390] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:07.772 [2024-12-03 00:57:59.623456] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc7da00 (9): Bad file descriptor 00:24:07.772 [2024-12-03 00:57:59.623754] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:07.772 [2024-12-03 00:57:59.623817] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:07.772 [2024-12-03 00:57:59.623847] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc7da00 with addr=10.0.0.2, port=4421 00:24:07.772 [2024-12-03 00:57:59.623864] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc7da00 is same with the state(5) to be set 00:24:07.772 [2024-12-03 00:57:59.624172] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc7da00 (9): Bad file descriptor 00:24:07.772 [2024-12-03 00:57:59.624234] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:07.772 [2024-12-03 00:57:59.624257] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:07.772 [2024-12-03 00:57:59.624272] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in 
failed state.
00:24:07.772 [2024-12-03 00:57:59.624299] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:07.772 [2024-12-03 00:57:59.624316] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:07.772 [2024-12-03 00:58:09.672931] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
00:24:07.772 Received shutdown signal, test time was about 55.111561 seconds
00:24:07.772
00:24:07.772 Latency(us)
00:24:07.772 [2024-12-03T00:58:20.287Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:24:07.772 [2024-12-03T00:58:20.287Z] Job: Nvme0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096)
00:24:07.772 Verification LBA range: start 0x0 length 0x4000
00:24:07.772 Nvme0n1 : 55.11 12351.65 48.25 0.00 0.00 10347.63 129.40 7015926.69
00:24:07.772 [2024-12-03T00:58:20.287Z] ===================================================================================================================
00:24:07.772 [2024-12-03T00:58:20.287Z] Total : 12351.65 48.25 0.00 0.00 10347.63 129.40 7015926.69
00:24:07.772 00:58:19 -- host/multipath.sh@120 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:24:07.772 00:58:20 -- host/multipath.sh@122 -- # trap - SIGINT SIGTERM EXIT
00:24:07.772 00:58:20 -- host/multipath.sh@124 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt
00:24:07.772 00:58:20 -- host/multipath.sh@125 -- # nvmftestfini
00:24:07.772 00:58:20 -- nvmf/common.sh@476 -- # nvmfcleanup
00:24:07.772 00:58:20 -- nvmf/common.sh@116 -- # sync
00:24:08.031 00:58:20 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']'
00:24:08.031 00:58:20 -- nvmf/common.sh@119 -- # set +e
00:24:08.031 00:58:20 -- nvmf/common.sh@120 -- # for i in {1..20}
00:24:08.031 00:58:20 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp
00:24:08.031 rmmod nvme_tcp
00:24:08.031 rmmod nvme_fabrics
00:24:08.031 rmmod nvme_keyring
00:24:08.031 00:58:20 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics
00:24:08.031 00:58:20 -- nvmf/common.sh@123 -- # set -e
00:24:08.031 00:58:20 -- nvmf/common.sh@124 -- # return 0
00:24:08.031 00:58:20 -- nvmf/common.sh@477 -- # '[' -n 99041 ']'
00:24:08.031 00:58:20 -- nvmf/common.sh@478 -- # killprocess 99041
00:24:08.031 00:58:20 -- common/autotest_common.sh@936 -- # '[' -z 99041 ']'
00:24:08.031 00:58:20 -- common/autotest_common.sh@940 -- # kill -0 99041
00:24:08.031 00:58:20 -- common/autotest_common.sh@941 -- # uname
00:24:08.031 00:58:20 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']'
00:24:08.031 00:58:20 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 99041
00:24:08.031 00:58:20 -- common/autotest_common.sh@942 -- # process_name=reactor_0
00:24:08.031 killing process with pid 99041
00:24:08.031 00:58:20 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']'
00:24:08.031 00:58:20 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 99041'
00:24:08.031 00:58:20 -- common/autotest_common.sh@955 -- # kill 99041
00:24:08.031 00:58:20 -- common/autotest_common.sh@960 -- # wait 99041
00:24:08.290 00:58:20 -- nvmf/common.sh@480 -- # '[' '' == iso ']'
00:24:08.290 00:58:20 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]]
00:24:08.290 00:58:20 -- nvmf/common.sh@484 -- # nvmf_tcp_fini
00:24:08.290 00:58:20 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]]
00:24:08.290 00:58:20 -- nvmf/common.sh@277 -- #
remove_spdk_ns 00:24:08.290 00:58:20 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:08.290 00:58:20 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:08.290 00:58:20 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:08.290 00:58:20 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:24:08.290 ************************************ 00:24:08.290 END TEST nvmf_multipath 00:24:08.290 ************************************ 00:24:08.290 00:24:08.290 real 1m1.323s 00:24:08.290 user 2m51.905s 00:24:08.290 sys 0m14.102s 00:24:08.290 00:58:20 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:24:08.290 00:58:20 -- common/autotest_common.sh@10 -- # set +x 00:24:08.290 00:58:20 -- nvmf/nvmf.sh@117 -- # run_test nvmf_timeout /home/vagrant/spdk_repo/spdk/test/nvmf/host/timeout.sh --transport=tcp 00:24:08.290 00:58:20 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:24:08.290 00:58:20 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:24:08.290 00:58:20 -- common/autotest_common.sh@10 -- # set +x 00:24:08.290 ************************************ 00:24:08.290 START TEST nvmf_timeout 00:24:08.290 ************************************ 00:24:08.290 00:58:20 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/timeout.sh --transport=tcp 00:24:08.290 * Looking for test storage... 00:24:08.290 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:24:08.290 00:58:20 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:24:08.290 00:58:20 -- common/autotest_common.sh@1690 -- # lcov --version 00:24:08.290 00:58:20 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:24:08.550 00:58:20 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:24:08.550 00:58:20 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:24:08.550 00:58:20 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:24:08.550 00:58:20 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:24:08.550 00:58:20 -- scripts/common.sh@335 -- # IFS=.-: 00:24:08.550 00:58:20 -- scripts/common.sh@335 -- # read -ra ver1 00:24:08.550 00:58:20 -- scripts/common.sh@336 -- # IFS=.-: 00:24:08.550 00:58:20 -- scripts/common.sh@336 -- # read -ra ver2 00:24:08.550 00:58:20 -- scripts/common.sh@337 -- # local 'op=<' 00:24:08.550 00:58:20 -- scripts/common.sh@339 -- # ver1_l=2 00:24:08.550 00:58:20 -- scripts/common.sh@340 -- # ver2_l=1 00:24:08.550 00:58:20 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:24:08.550 00:58:20 -- scripts/common.sh@343 -- # case "$op" in 00:24:08.550 00:58:20 -- scripts/common.sh@344 -- # : 1 00:24:08.550 00:58:20 -- scripts/common.sh@363 -- # (( v = 0 )) 00:24:08.550 00:58:20 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:24:08.550 00:58:20 -- scripts/common.sh@364 -- # decimal 1 00:24:08.550 00:58:20 -- scripts/common.sh@352 -- # local d=1 00:24:08.550 00:58:20 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:08.550 00:58:20 -- scripts/common.sh@354 -- # echo 1 00:24:08.550 00:58:20 -- scripts/common.sh@364 -- # ver1[v]=1 00:24:08.550 00:58:20 -- scripts/common.sh@365 -- # decimal 2 00:24:08.550 00:58:20 -- scripts/common.sh@352 -- # local d=2 00:24:08.550 00:58:20 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:24:08.550 00:58:20 -- scripts/common.sh@354 -- # echo 2 00:24:08.550 00:58:20 -- scripts/common.sh@365 -- # ver2[v]=2 00:24:08.550 00:58:20 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:24:08.550 00:58:20 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:24:08.550 00:58:20 -- scripts/common.sh@367 -- # return 0 00:24:08.550 00:58:20 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:24:08.550 00:58:20 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:24:08.550 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:08.550 --rc genhtml_branch_coverage=1 00:24:08.550 --rc genhtml_function_coverage=1 00:24:08.550 --rc genhtml_legend=1 00:24:08.550 --rc geninfo_all_blocks=1 00:24:08.550 --rc geninfo_unexecuted_blocks=1 00:24:08.550 00:24:08.550 ' 00:24:08.550 00:58:20 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:24:08.550 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:08.550 --rc genhtml_branch_coverage=1 00:24:08.550 --rc genhtml_function_coverage=1 00:24:08.550 --rc genhtml_legend=1 00:24:08.550 --rc geninfo_all_blocks=1 00:24:08.550 --rc geninfo_unexecuted_blocks=1 00:24:08.550 00:24:08.550 ' 00:24:08.550 00:58:20 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:24:08.550 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:08.550 --rc genhtml_branch_coverage=1 00:24:08.550 --rc genhtml_function_coverage=1 00:24:08.550 --rc genhtml_legend=1 00:24:08.550 --rc geninfo_all_blocks=1 00:24:08.550 --rc geninfo_unexecuted_blocks=1 00:24:08.550 00:24:08.550 ' 00:24:08.550 00:58:20 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:24:08.550 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:08.550 --rc genhtml_branch_coverage=1 00:24:08.550 --rc genhtml_function_coverage=1 00:24:08.550 --rc genhtml_legend=1 00:24:08.550 --rc geninfo_all_blocks=1 00:24:08.550 --rc geninfo_unexecuted_blocks=1 00:24:08.550 00:24:08.550 ' 00:24:08.550 00:58:20 -- host/timeout.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:24:08.550 00:58:20 -- nvmf/common.sh@7 -- # uname -s 00:24:08.550 00:58:20 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:08.550 00:58:20 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:08.550 00:58:20 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:08.550 00:58:20 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:08.550 00:58:20 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:08.550 00:58:20 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:08.550 00:58:20 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:08.550 00:58:20 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:08.550 00:58:20 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:08.550 00:58:20 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:08.550 00:58:20 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:15939434-fa82-47b6-ae7c-b62ba203cee8 00:24:08.550 
00:58:20 -- nvmf/common.sh@18 -- # NVME_HOSTID=15939434-fa82-47b6-ae7c-b62ba203cee8 00:24:08.550 00:58:20 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:08.550 00:58:20 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:08.550 00:58:20 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:24:08.550 00:58:20 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:24:08.550 00:58:20 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:08.550 00:58:20 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:08.550 00:58:20 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:08.550 00:58:20 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:08.550 00:58:20 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:08.550 00:58:20 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:08.550 00:58:20 -- paths/export.sh@5 -- # export PATH 00:24:08.551 00:58:20 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:08.551 00:58:20 -- nvmf/common.sh@46 -- # : 0 00:24:08.551 00:58:20 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:24:08.551 00:58:20 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:24:08.551 00:58:20 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:24:08.551 00:58:20 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:08.551 00:58:20 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:08.551 00:58:20 -- nvmf/common.sh@32 -- # '[' -n '' ']' 
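As a reading aid, the values test/nvmf/common.sh has just established for this run (per the xtrace above) boil down to the snippet below; the hostnqn/hostid pair is regenerated on every run by nvme gen-hostnqn, the remaining values are fixed defaults, and the grouping into one snippet is purely illustrative:

# Defaults pulled in by the host tests, as traced in this run
NVMF_PORT=4420
NVMF_SECOND_PORT=4421
NVMF_THIRD_PORT=4422
NVMF_SERIAL=SPDKISFASTANDAWESOME
NVME_HOSTNQN=$(nvme gen-hostnqn)                   # this run: nqn.2014-08.org.nvmexpress:uuid:15939434-fa82-47b6-ae7c-b62ba203cee8
NVME_HOSTID=15939434-fa82-47b6-ae7c-b62ba203cee8   # the uuid portion of the hostnqn above
NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
NET_TYPE=virt                                      # virtual (veth/netns) topology rather than physical NICs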
00:24:08.551 00:58:20 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:24:08.551 00:58:20 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:24:08.551 00:58:20 -- host/timeout.sh@11 -- # MALLOC_BDEV_SIZE=64 00:24:08.551 00:58:20 -- host/timeout.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:24:08.551 00:58:20 -- host/timeout.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:24:08.551 00:58:20 -- host/timeout.sh@15 -- # bpf_sh=/home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 00:24:08.551 00:58:20 -- host/timeout.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:24:08.551 00:58:20 -- host/timeout.sh@19 -- # nvmftestinit 00:24:08.551 00:58:20 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:24:08.551 00:58:20 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:08.551 00:58:20 -- nvmf/common.sh@436 -- # prepare_net_devs 00:24:08.551 00:58:20 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:24:08.551 00:58:20 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:24:08.551 00:58:20 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:08.551 00:58:20 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:08.551 00:58:20 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:08.551 00:58:20 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:24:08.551 00:58:20 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:24:08.551 00:58:20 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:24:08.551 00:58:20 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:24:08.551 00:58:20 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:24:08.551 00:58:20 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:24:08.551 00:58:20 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:08.551 00:58:20 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:08.551 00:58:20 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:24:08.551 00:58:20 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:24:08.551 00:58:20 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:24:08.551 00:58:20 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:24:08.551 00:58:20 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:24:08.551 00:58:20 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:08.551 00:58:20 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:24:08.551 00:58:20 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:24:08.551 00:58:20 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:24:08.551 00:58:20 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:24:08.551 00:58:20 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:24:08.551 00:58:20 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:24:08.551 Cannot find device "nvmf_tgt_br" 00:24:08.551 00:58:20 -- nvmf/common.sh@154 -- # true 00:24:08.551 00:58:20 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:24:08.551 Cannot find device "nvmf_tgt_br2" 00:24:08.551 00:58:20 -- nvmf/common.sh@155 -- # true 00:24:08.551 00:58:20 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:24:08.551 00:58:20 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:24:08.551 Cannot find device "nvmf_tgt_br" 00:24:08.551 00:58:20 -- nvmf/common.sh@157 -- # true 00:24:08.551 00:58:20 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:24:08.551 Cannot find device "nvmf_tgt_br2" 00:24:08.551 00:58:20 -- nvmf/common.sh@158 -- # true 00:24:08.551 00:58:20 -- 
nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:24:08.551 00:58:21 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:24:08.551 00:58:21 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:24:08.551 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:24:08.551 00:58:21 -- nvmf/common.sh@161 -- # true 00:24:08.551 00:58:21 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:24:08.551 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:24:08.551 00:58:21 -- nvmf/common.sh@162 -- # true 00:24:08.551 00:58:21 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:24:08.551 00:58:21 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:24:08.551 00:58:21 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:24:08.810 00:58:21 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:24:08.810 00:58:21 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:24:08.810 00:58:21 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:24:08.810 00:58:21 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:24:08.810 00:58:21 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:24:08.810 00:58:21 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:24:08.810 00:58:21 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:24:08.810 00:58:21 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:24:08.810 00:58:21 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:24:08.810 00:58:21 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:24:08.810 00:58:21 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:24:08.810 00:58:21 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:24:08.810 00:58:21 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:24:08.810 00:58:21 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:24:08.810 00:58:21 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:24:08.810 00:58:21 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:24:08.810 00:58:21 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:24:08.810 00:58:21 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:24:08.810 00:58:21 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:24:08.810 00:58:21 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:24:08.810 00:58:21 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:24:08.810 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:08.810 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.062 ms 00:24:08.810 00:24:08.810 --- 10.0.0.2 ping statistics --- 00:24:08.811 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:08.811 rtt min/avg/max/mdev = 0.062/0.062/0.062/0.000 ms 00:24:08.811 00:58:21 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:24:08.811 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:24:08.811 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.058 ms 00:24:08.811 00:24:08.811 --- 10.0.0.3 ping statistics --- 00:24:08.811 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:08.811 rtt min/avg/max/mdev = 0.058/0.058/0.058/0.000 ms 00:24:08.811 00:58:21 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:24:08.811 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:24:08.811 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.034 ms 00:24:08.811 00:24:08.811 --- 10.0.0.1 ping statistics --- 00:24:08.811 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:08.811 rtt min/avg/max/mdev = 0.034/0.034/0.034/0.000 ms 00:24:08.811 00:58:21 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:08.811 00:58:21 -- nvmf/common.sh@421 -- # return 0 00:24:08.811 00:58:21 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:24:08.811 00:58:21 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:08.811 00:58:21 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:24:08.811 00:58:21 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:24:08.811 00:58:21 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:08.811 00:58:21 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:24:08.811 00:58:21 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:24:08.811 00:58:21 -- host/timeout.sh@21 -- # nvmfappstart -m 0x3 00:24:08.811 00:58:21 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:24:08.811 00:58:21 -- common/autotest_common.sh@722 -- # xtrace_disable 00:24:08.811 00:58:21 -- common/autotest_common.sh@10 -- # set +x 00:24:08.811 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:08.811 00:58:21 -- nvmf/common.sh@469 -- # nvmfpid=100425 00:24:08.811 00:58:21 -- nvmf/common.sh@470 -- # waitforlisten 100425 00:24:08.811 00:58:21 -- common/autotest_common.sh@829 -- # '[' -z 100425 ']' 00:24:08.811 00:58:21 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:24:08.811 00:58:21 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:08.811 00:58:21 -- common/autotest_common.sh@834 -- # local max_retries=100 00:24:08.811 00:58:21 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:08.811 00:58:21 -- common/autotest_common.sh@838 -- # xtrace_disable 00:24:08.811 00:58:21 -- common/autotest_common.sh@10 -- # set +x 00:24:08.811 [2024-12-03 00:58:21.302965] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:24:08.811 [2024-12-03 00:58:21.303054] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:09.070 [2024-12-03 00:58:21.443636] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:24:09.070 [2024-12-03 00:58:21.502748] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:24:09.070 [2024-12-03 00:58:21.502899] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:09.070 [2024-12-03 00:58:21.502911] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
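For orientation, the nvmf_veth_init plumbing traced above reduces to the standalone sketch below: one veth pair for the initiator side, two veth pairs whose target ends live in the nvmf_tgt_ns_spdk namespace, and the nvmf_br bridge tying the host ends together. Interface, bridge and namespace names and the 10.0.0.x addresses are the ones shown in the trace; the consolidated script itself is illustrative (not an excerpt of common.sh) and assumes a clean host, so the delete/cleanup steps the trace runs first are omitted:

# Target namespace and veth pairs
ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
# Addressing: initiator 10.0.0.1, target listeners 10.0.0.2 and 10.0.0.3
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
# Bring everything up and bridge the host-side ends
ip link set nvmf_init_if up
ip link set nvmf_init_br up
ip link set nvmf_tgt_br up
ip link set nvmf_tgt_br2 up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up
ip link add nvmf_br type bridge
ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br master nvmf_br
ip link set nvmf_tgt_br2 master nvmf_br
# Allow NVMe/TCP traffic and forwarding across the bridge
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
# Reachability checks, as in the trace
ping -c 1 10.0.0.2
ping -c 1 10.0.0.3
ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1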
00:24:09.070 [2024-12-03 00:58:21.502919] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:09.070 [2024-12-03 00:58:21.503435] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:24:09.070 [2024-12-03 00:58:21.503454] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:24:10.004 00:58:22 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:24:10.004 00:58:22 -- common/autotest_common.sh@862 -- # return 0 00:24:10.004 00:58:22 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:24:10.004 00:58:22 -- common/autotest_common.sh@728 -- # xtrace_disable 00:24:10.004 00:58:22 -- common/autotest_common.sh@10 -- # set +x 00:24:10.004 00:58:22 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:10.004 00:58:22 -- host/timeout.sh@23 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid || :; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:24:10.004 00:58:22 -- host/timeout.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:24:10.262 [2024-12-03 00:58:22.585321] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:10.262 00:58:22 -- host/timeout.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:24:10.520 Malloc0 00:24:10.520 00:58:22 -- host/timeout.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:24:10.779 00:58:23 -- host/timeout.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:24:11.037 00:58:23 -- host/timeout.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:11.038 [2024-12-03 00:58:23.524363] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:11.038 00:58:23 -- host/timeout.sh@31 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -f 00:24:11.038 00:58:23 -- host/timeout.sh@32 -- # bdevperf_pid=100516 00:24:11.038 00:58:23 -- host/timeout.sh@34 -- # waitforlisten 100516 /var/tmp/bdevperf.sock 00:24:11.038 00:58:23 -- common/autotest_common.sh@829 -- # '[' -z 100516 ']' 00:24:11.038 00:58:23 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:24:11.038 00:58:23 -- common/autotest_common.sh@834 -- # local max_retries=100 00:24:11.038 00:58:23 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:24:11.038 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:24:11.038 00:58:23 -- common/autotest_common.sh@838 -- # xtrace_disable 00:24:11.038 00:58:23 -- common/autotest_common.sh@10 -- # set +x 00:24:11.296 [2024-12-03 00:58:23.583138] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
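Stripped of the xtrace noise, the target-side provisioning and the bdevperf launch that host/timeout.sh performs above amount to the sketch below. Paths, names and parameters are exactly the ones shown in the trace; the consolidation into one snippet is illustrative, not an excerpt of the script:

rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
$rpc_py nvmf_create_transport -t tcp -o -u 8192
$rpc_py bdev_malloc_create 64 512 -b Malloc0        # 64 MiB malloc bdev, 512-byte blocks
$rpc_py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
$rpc_py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
$rpc_py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
# bdevperf is a second SPDK app with its own RPC socket; started with -z so the
# run is kicked off later via bdevperf.py perform_tests (queue depth 128, 4 KiB
# I/O, verify workload, 10 seconds -- matching the job summary printed above)
/home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock \
    -q 128 -o 4096 -w verify -t 10 -f &
bdevperf_pid=$!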
00:24:11.296 [2024-12-03 00:58:23.583220] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid100516 ] 00:24:11.296 [2024-12-03 00:58:23.718581] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:11.296 [2024-12-03 00:58:23.800008] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:24:12.232 00:58:24 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:24:12.232 00:58:24 -- common/autotest_common.sh@862 -- # return 0 00:24:12.232 00:58:24 -- host/timeout.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:24:12.232 00:58:24 -- host/timeout.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 --ctrlr-loss-timeout-sec 5 --reconnect-delay-sec 2 00:24:12.490 NVMe0n1 00:24:12.490 00:58:24 -- host/timeout.sh@51 -- # rpc_pid=100558 00:24:12.490 00:58:24 -- host/timeout.sh@50 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:24:12.490 00:58:24 -- host/timeout.sh@53 -- # sleep 1 00:24:12.748 Running I/O for 10 seconds... 00:24:13.680 00:58:26 -- host/timeout.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:13.939 [2024-12-03 00:58:26.247038] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2459490 is same with the state(5) to be set 00:24:13.939 [2024-12-03 00:58:26.247126] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2459490 is same with the state(5) to be set 00:24:13.939 [2024-12-03 00:58:26.247138] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2459490 is same with the state(5) to be set 00:24:13.940 [2024-12-03 00:58:26.247147] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2459490 is same with the state(5) to be set 00:24:13.940 [2024-12-03 00:58:26.247155] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2459490 is same with the state(5) to be set 00:24:13.940 [2024-12-03 00:58:26.247163] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2459490 is same with the state(5) to be set 00:24:13.940 [2024-12-03 00:58:26.247170] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2459490 is same with the state(5) to be set 00:24:13.940 [2024-12-03 00:58:26.247178] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2459490 is same with the state(5) to be set 00:24:13.940 [2024-12-03 00:58:26.247186] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2459490 is same with the state(5) to be set 00:24:13.940 [2024-12-03 00:58:26.247193] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2459490 is same with the state(5) to be set 00:24:13.940 [2024-12-03 00:58:26.247201] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2459490 is same with the state(5) to be set 00:24:13.940 [2024-12-03 00:58:26.247209] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2459490 is same with the state(5) to be set 
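The failure scenario exercised over the next ten seconds of log can be read off the traced commands: the bdevperf-side controller is attached with a 5-second ctrlr-loss timeout and a 2-second reconnect delay, I/O is started via perform_tests, and the TCP listener is then removed so in-flight commands run into the timeout/reconnect path. A condensed sketch of those steps (commands as traced; grouping them into one snippet is illustrative):

rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
$rpc_py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1
$rpc_py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp \
    -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 \
    --ctrlr-loss-timeout-sec 5 --reconnect-delay-sec 2
/home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests &
sleep 1
# Pull the listener out from under the running I/O to trigger the timeout path
$rpc_py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420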
00:24:13.940 [2024-12-03 00:58:26.247217] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2459490 is same with the state(5) to be set 00:24:13.940 [2024-12-03 00:58:26.247225] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2459490 is same with the state(5) to be set 00:24:13.940 [2024-12-03 00:58:26.247233] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2459490 is same with the state(5) to be set 00:24:13.940 [2024-12-03 00:58:26.247240] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2459490 is same with the state(5) to be set 00:24:13.940 [2024-12-03 00:58:26.247248] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2459490 is same with the state(5) to be set 00:24:13.940 [2024-12-03 00:58:26.247256] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2459490 is same with the state(5) to be set 00:24:13.940 [2024-12-03 00:58:26.247264] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2459490 is same with the state(5) to be set 00:24:13.940 [2024-12-03 00:58:26.247271] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2459490 is same with the state(5) to be set 00:24:13.940 [2024-12-03 00:58:26.247279] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2459490 is same with the state(5) to be set 00:24:13.940 [2024-12-03 00:58:26.247286] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2459490 is same with the state(5) to be set 00:24:13.940 [2024-12-03 00:58:26.247293] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2459490 is same with the state(5) to be set 00:24:13.940 [2024-12-03 00:58:26.247301] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2459490 is same with the state(5) to be set 00:24:13.940 [2024-12-03 00:58:26.247309] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2459490 is same with the state(5) to be set 00:24:13.940 [2024-12-03 00:58:26.247317] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2459490 is same with the state(5) to be set 00:24:13.940 [2024-12-03 00:58:26.247325] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2459490 is same with the state(5) to be set 00:24:13.940 [2024-12-03 00:58:26.247332] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2459490 is same with the state(5) to be set 00:24:13.940 [2024-12-03 00:58:26.247340] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2459490 is same with the state(5) to be set 00:24:13.940 [2024-12-03 00:58:26.247347] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2459490 is same with the state(5) to be set 00:24:13.940 [2024-12-03 00:58:26.247366] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2459490 is same with the state(5) to be set 00:24:13.940 [2024-12-03 00:58:26.247375] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2459490 is same with the state(5) to be set 00:24:13.940 [2024-12-03 00:58:26.247382] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2459490 is same with the state(5) to be set 00:24:13.940 [2024-12-03 00:58:26.247391] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of 
tqpair=0x2459490 is same with the state(5) to be set 00:24:13.940 [2024-12-03 00:58:26.247404] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2459490 is same with the state(5) to be set 00:24:13.940 [2024-12-03 00:58:26.247428] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2459490 is same with the state(5) to be set 00:24:13.940 [2024-12-03 00:58:26.247438] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2459490 is same with the state(5) to be set 00:24:13.940 [2024-12-03 00:58:26.247446] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2459490 is same with the state(5) to be set 00:24:13.940 [2024-12-03 00:58:26.247453] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2459490 is same with the state(5) to be set 00:24:13.940 [2024-12-03 00:58:26.247462] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2459490 is same with the state(5) to be set 00:24:13.940 [2024-12-03 00:58:26.247469] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2459490 is same with the state(5) to be set 00:24:13.940 [2024-12-03 00:58:26.247496] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2459490 is same with the state(5) to be set 00:24:13.940 [2024-12-03 00:58:26.247504] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2459490 is same with the state(5) to be set 00:24:13.940 [2024-12-03 00:58:26.247513] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2459490 is same with the state(5) to be set 00:24:13.940 [2024-12-03 00:58:26.247521] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2459490 is same with the state(5) to be set 00:24:13.940 [2024-12-03 00:58:26.247529] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2459490 is same with the state(5) to be set 00:24:13.940 [2024-12-03 00:58:26.247537] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2459490 is same with the state(5) to be set 00:24:13.940 [2024-12-03 00:58:26.247545] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2459490 is same with the state(5) to be set 00:24:13.940 [2024-12-03 00:58:26.247555] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2459490 is same with the state(5) to be set 00:24:13.940 [2024-12-03 00:58:26.247563] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2459490 is same with the state(5) to be set 00:24:13.940 [2024-12-03 00:58:26.247572] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2459490 is same with the state(5) to be set 00:24:13.940 [2024-12-03 00:58:26.247580] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2459490 is same with the state(5) to be set 00:24:13.940 [2024-12-03 00:58:26.247588] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2459490 is same with the state(5) to be set 00:24:13.940 [2024-12-03 00:58:26.247927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:130112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:13.940 [2024-12-03 00:58:26.247966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:13.940 [2024-12-03 
00:58:26.247987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:130120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:13.940 [2024-12-03 00:58:26.247997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:13.940 [2024-12-03 00:58:26.248008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:130128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:13.940 [2024-12-03 00:58:26.248015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:13.940 [2024-12-03 00:58:26.248025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:129504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:13.940 [2024-12-03 00:58:26.248033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:13.940 [2024-12-03 00:58:26.248042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:129520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:13.940 [2024-12-03 00:58:26.248050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:13.940 [2024-12-03 00:58:26.248059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:129528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:13.940 [2024-12-03 00:58:26.248066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:13.940 [2024-12-03 00:58:26.248076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:129552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:13.940 [2024-12-03 00:58:26.248083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:13.940 [2024-12-03 00:58:26.248093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:129560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:13.940 [2024-12-03 00:58:26.248100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:13.940 [2024-12-03 00:58:26.248109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:129568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:13.940 [2024-12-03 00:58:26.248116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:13.940 [2024-12-03 00:58:26.248125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:129576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:13.940 [2024-12-03 00:58:26.248132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:13.940 [2024-12-03 00:58:26.248141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:129584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:13.940 [2024-12-03 00:58:26.248148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:13.940 [2024-12-03 00:58:26.248157] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:130168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:13.940 [2024-12-03 00:58:26.248166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:13.940 [2024-12-03 00:58:26.248175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:130176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:13.940 [2024-12-03 00:58:26.248182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:13.940 [2024-12-03 00:58:26.248192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:130200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:13.940 [2024-12-03 00:58:26.248199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:13.940 [2024-12-03 00:58:26.248211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:129592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:13.941 [2024-12-03 00:58:26.248219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:13.941 [2024-12-03 00:58:26.248228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:129608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:13.941 [2024-12-03 00:58:26.248236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:13.941 [2024-12-03 00:58:26.248245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:129616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:13.941 [2024-12-03 00:58:26.248254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:13.941 [2024-12-03 00:58:26.248263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:129624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:13.941 [2024-12-03 00:58:26.248272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:13.941 [2024-12-03 00:58:26.248281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:129632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:13.941 [2024-12-03 00:58:26.248290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:13.941 [2024-12-03 00:58:26.248300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:129664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:13.941 [2024-12-03 00:58:26.248307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:13.941 [2024-12-03 00:58:26.248316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:129672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:13.941 [2024-12-03 00:58:26.248324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:13.941 [2024-12-03 00:58:26.248333] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:129688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:13.941 [2024-12-03 00:58:26.248341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:13.941 [2024-12-03 00:58:26.248351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:130208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:13.941 [2024-12-03 00:58:26.248359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:13.941 [2024-12-03 00:58:26.248368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:130224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:13.941 [2024-12-03 00:58:26.248376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:13.941 [2024-12-03 00:58:26.248385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:130248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:13.941 [2024-12-03 00:58:26.248393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:13.941 [2024-12-03 00:58:26.248402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:130256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:13.941 [2024-12-03 00:58:26.248422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:13.941 [2024-12-03 00:58:26.248433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:130264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:13.941 [2024-12-03 00:58:26.248441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:13.941 [2024-12-03 00:58:26.248452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:130280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:13.941 [2024-12-03 00:58:26.248460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:13.941 [2024-12-03 00:58:26.248469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:129696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:13.941 [2024-12-03 00:58:26.248477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:13.941 [2024-12-03 00:58:26.248487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:129720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:13.941 [2024-12-03 00:58:26.248495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:13.941 [2024-12-03 00:58:26.248505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:129728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:13.941 [2024-12-03 00:58:26.248513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:13.941 [2024-12-03 00:58:26.248523] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:129744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:13.941 [2024-12-03 00:58:26.248537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:13.941 [2024-12-03 00:58:26.248546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:129768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:13.941 [2024-12-03 00:58:26.248554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:13.941 [2024-12-03 00:58:26.248564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:129776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:13.941 [2024-12-03 00:58:26.248571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:13.941 [2024-12-03 00:58:26.248581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:129832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:13.941 [2024-12-03 00:58:26.248588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:13.941 [2024-12-03 00:58:26.248598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:129856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:13.941 [2024-12-03 00:58:26.248606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:13.941 [2024-12-03 00:58:26.248615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:130296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:13.941 [2024-12-03 00:58:26.248623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:13.941 [2024-12-03 00:58:26.248634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:130304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:13.941 [2024-12-03 00:58:26.248641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:13.941 [2024-12-03 00:58:26.248651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:130320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:13.941 [2024-12-03 00:58:26.248659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:13.941 [2024-12-03 00:58:26.248669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:130328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:13.941 [2024-12-03 00:58:26.248677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:13.941 [2024-12-03 00:58:26.248687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:130352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:13.941 [2024-12-03 00:58:26.248695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:13.941 [2024-12-03 00:58:26.248704] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:130360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:13.941 [2024-12-03 00:58:26.248712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:13.941 [2024-12-03 00:58:26.248721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:130368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:13.941 [2024-12-03 00:58:26.248730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:13.941 [2024-12-03 00:58:26.248739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:130376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:13.941 [2024-12-03 00:58:26.248747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:13.941 [2024-12-03 00:58:26.248756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:130384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:13.941 [2024-12-03 00:58:26.248763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:13.941 [2024-12-03 00:58:26.248772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:130392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:13.941 [2024-12-03 00:58:26.248780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:13.941 [2024-12-03 00:58:26.248789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:129896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:13.941 [2024-12-03 00:58:26.248797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:13.941 [2024-12-03 00:58:26.248807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:129904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:13.941 [2024-12-03 00:58:26.248815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:13.941 [2024-12-03 00:58:26.248824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:129912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:13.941 [2024-12-03 00:58:26.248833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:13.941 [2024-12-03 00:58:26.248842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:129920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:13.941 [2024-12-03 00:58:26.248850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:13.941 [2024-12-03 00:58:26.248860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:129928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:13.941 [2024-12-03 00:58:26.248868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:13.941 [2024-12-03 00:58:26.248877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 
lba:129960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:13.941 [2024-12-03 00:58:26.248885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:13.941 [2024-12-03 00:58:26.248895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:129968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:13.941 [2024-12-03 00:58:26.248903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:13.941 [2024-12-03 00:58:26.248913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:129976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:13.941 [2024-12-03 00:58:26.248920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:13.941 [2024-12-03 00:58:26.248930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:130400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:13.942 [2024-12-03 00:58:26.248938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:13.942 [2024-12-03 00:58:26.248956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:130408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:13.942 [2024-12-03 00:58:26.248965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:13.942 [2024-12-03 00:58:26.248974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:130416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:13.942 [2024-12-03 00:58:26.248982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:13.942 [2024-12-03 00:58:26.248991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:130424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:13.942 [2024-12-03 00:58:26.248998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:13.942 [2024-12-03 00:58:26.249008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:130432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:13.942 [2024-12-03 00:58:26.249015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:13.942 [2024-12-03 00:58:26.249024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:130440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:13.942 [2024-12-03 00:58:26.249032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:13.942 [2024-12-03 00:58:26.249041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:130448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:13.942 [2024-12-03 00:58:26.249049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:13.942 [2024-12-03 00:58:26.249058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:130456 len:8 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:24:13.942 [2024-12-03 00:58:26.249066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:13.942 [2024-12-03 00:58:26.249076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:130464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:13.942 [2024-12-03 00:58:26.249084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:13.942 [2024-12-03 00:58:26.249094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:130472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:13.942 [2024-12-03 00:58:26.249102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:13.942 [2024-12-03 00:58:26.249112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:130480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:13.942 [2024-12-03 00:58:26.249119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:13.942 [2024-12-03 00:58:26.249129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:130488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:13.942 [2024-12-03 00:58:26.249138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:13.942 [2024-12-03 00:58:26.249148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:130496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:13.942 [2024-12-03 00:58:26.249156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:13.942 [2024-12-03 00:58:26.249165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:130504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:13.942 [2024-12-03 00:58:26.249174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:13.942 [2024-12-03 00:58:26.249184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:130512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:13.942 [2024-12-03 00:58:26.249192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:13.942 [2024-12-03 00:58:26.249201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:130520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:13.942 [2024-12-03 00:58:26.249208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:13.942 [2024-12-03 00:58:26.249217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:130528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:13.942 [2024-12-03 00:58:26.249225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:13.942 [2024-12-03 00:58:26.249240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:130536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:13.942 
[2024-12-03 00:58:26.249247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:13.942 [2024-12-03 00:58:26.249257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:130544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:13.942 [2024-12-03 00:58:26.249264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:13.942 [2024-12-03 00:58:26.249274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:130552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:13.942 [2024-12-03 00:58:26.249281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:13.942 [2024-12-03 00:58:26.249291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:130560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:13.942 [2024-12-03 00:58:26.249299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:13.942 [2024-12-03 00:58:26.249309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:130568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:13.942 [2024-12-03 00:58:26.249317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:13.942 [2024-12-03 00:58:26.249327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:130576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:13.942 [2024-12-03 00:58:26.249336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:13.942 [2024-12-03 00:58:26.249346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:130584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:13.942 [2024-12-03 00:58:26.249354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:13.942 [2024-12-03 00:58:26.249364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:130592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:13.942 [2024-12-03 00:58:26.249372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:13.942 [2024-12-03 00:58:26.249383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:130600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:13.942 [2024-12-03 00:58:26.249391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:13.942 [2024-12-03 00:58:26.249400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:130608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:13.942 [2024-12-03 00:58:26.249419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:13.942 [2024-12-03 00:58:26.249436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:130616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:13.942 [2024-12-03 00:58:26.249445] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:13.942 [2024-12-03 00:58:26.249455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:130624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:13.942 [2024-12-03 00:58:26.249463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:13.942 [2024-12-03 00:58:26.249472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:130632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:13.942 [2024-12-03 00:58:26.249480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:13.942 [2024-12-03 00:58:26.249489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:130640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:13.942 [2024-12-03 00:58:26.249497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:13.942 [2024-12-03 00:58:26.249506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:130648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:13.942 [2024-12-03 00:58:26.249514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:13.942 [2024-12-03 00:58:26.249524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:130656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:13.942 [2024-12-03 00:58:26.249532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:13.942 [2024-12-03 00:58:26.249548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:129984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:13.942 [2024-12-03 00:58:26.249556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:13.942 [2024-12-03 00:58:26.249565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:130008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:13.942 [2024-12-03 00:58:26.249573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:13.942 [2024-12-03 00:58:26.249582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:130016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:13.942 [2024-12-03 00:58:26.249590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:13.942 [2024-12-03 00:58:26.249600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:130024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:13.942 [2024-12-03 00:58:26.249607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:13.942 [2024-12-03 00:58:26.249617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:130032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:13.942 [2024-12-03 00:58:26.249625] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:13.942 [2024-12-03 00:58:26.249635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:130048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:13.942 [2024-12-03 00:58:26.249643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:13.942 [2024-12-03 00:58:26.249653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:130064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:13.942 [2024-12-03 00:58:26.249661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:13.943 [2024-12-03 00:58:26.249670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:130080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:13.943 [2024-12-03 00:58:26.249678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:13.943 [2024-12-03 00:58:26.249687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:130664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:13.943 [2024-12-03 00:58:26.249695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:13.943 [2024-12-03 00:58:26.249704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:130672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:13.943 [2024-12-03 00:58:26.249712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:13.943 [2024-12-03 00:58:26.249721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:130680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:13.943 [2024-12-03 00:58:26.249728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:13.943 [2024-12-03 00:58:26.249738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:130688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:13.943 [2024-12-03 00:58:26.249745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:13.943 [2024-12-03 00:58:26.249754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:130696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:13.943 [2024-12-03 00:58:26.249761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:13.943 [2024-12-03 00:58:26.249770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:130704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:13.943 [2024-12-03 00:58:26.249778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:13.943 [2024-12-03 00:58:26.249786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:130712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:13.943 [2024-12-03 00:58:26.249794] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:13.943 [2024-12-03 00:58:26.249804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:130720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:13.943 [2024-12-03 00:58:26.249811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:13.943 [2024-12-03 00:58:26.249838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:130728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:13.943 [2024-12-03 00:58:26.249846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:13.943 [2024-12-03 00:58:26.249855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:130736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:13.943 [2024-12-03 00:58:26.249863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:13.943 [2024-12-03 00:58:26.249873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:130744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:13.943 [2024-12-03 00:58:26.249881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:13.943 [2024-12-03 00:58:26.249891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:130096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:13.943 [2024-12-03 00:58:26.249904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:13.943 [2024-12-03 00:58:26.249914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:130104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:13.943 [2024-12-03 00:58:26.249922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:13.943 [2024-12-03 00:58:26.249931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:130136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:13.943 [2024-12-03 00:58:26.249939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:13.943 [2024-12-03 00:58:26.249948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:130144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:13.943 [2024-12-03 00:58:26.249957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:13.943 [2024-12-03 00:58:26.249966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:130152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:13.943 [2024-12-03 00:58:26.249974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:13.943 [2024-12-03 00:58:26.249983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:130160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:13.943 [2024-12-03 00:58:26.249991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:13.943 [2024-12-03 00:58:26.250000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:130184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:13.943 [2024-12-03 00:58:26.250008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:13.943 [2024-12-03 00:58:26.250019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:130192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:13.943 [2024-12-03 00:58:26.250027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:13.943 [2024-12-03 00:58:26.250036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:130752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:13.943 [2024-12-03 00:58:26.250044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:13.943 [2024-12-03 00:58:26.250053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:130760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:13.943 [2024-12-03 00:58:26.250061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:13.943 [2024-12-03 00:58:26.250071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:130768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:13.943 [2024-12-03 00:58:26.250078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:13.943 [2024-12-03 00:58:26.250087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:130776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:13.943 [2024-12-03 00:58:26.250095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:13.943 [2024-12-03 00:58:26.250104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:130784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:13.943 [2024-12-03 00:58:26.250113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:13.943 [2024-12-03 00:58:26.250129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:130792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:13.943 [2024-12-03 00:58:26.250137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:13.943 [2024-12-03 00:58:26.250147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:130216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:13.943 [2024-12-03 00:58:26.250154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:13.943 [2024-12-03 00:58:26.250164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:130232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:13.943 [2024-12-03 00:58:26.250171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:24:13.943 [2024-12-03 00:58:26.250180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:130240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:13.943 [2024-12-03 00:58:26.250193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:13.943 [2024-12-03 00:58:26.250203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:130272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:13.943 [2024-12-03 00:58:26.250210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:13.943 [2024-12-03 00:58:26.250220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:130288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:13.943 [2024-12-03 00:58:26.250227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:13.943 [2024-12-03 00:58:26.250246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:130312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:13.943 [2024-12-03 00:58:26.250254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:13.943 [2024-12-03 00:58:26.250263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:130336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:13.943 [2024-12-03 00:58:26.250271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:13.943 [2024-12-03 00:58:26.250281] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8ff780 is same with the state(5) to be set 00:24:13.943 [2024-12-03 00:58:26.250292] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:13.943 [2024-12-03 00:58:26.250298] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:13.943 [2024-12-03 00:58:26.250306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:130344 len:8 PRP1 0x0 PRP2 0x0 00:24:13.943 [2024-12-03 00:58:26.250314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:13.943 [2024-12-03 00:58:26.250374] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x8ff780 was disconnected and freed. reset controller. 
00:24:13.943 [2024-12-03 00:58:26.250602] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:13.943 [2024-12-03 00:58:26.250692] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x87a8c0 (9): Bad file descriptor 00:24:13.943 [2024-12-03 00:58:26.250803] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:13.943 [2024-12-03 00:58:26.250854] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:13.943 [2024-12-03 00:58:26.250870] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x87a8c0 with addr=10.0.0.2, port=4420 00:24:13.943 [2024-12-03 00:58:26.250879] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x87a8c0 is same with the state(5) to be set 00:24:13.943 [2024-12-03 00:58:26.250896] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x87a8c0 (9): Bad file descriptor 00:24:13.943 [2024-12-03 00:58:26.250909] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:13.943 [2024-12-03 00:58:26.250918] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:13.943 [2024-12-03 00:58:26.250927] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:13.944 [2024-12-03 00:58:26.250952] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:13.944 [2024-12-03 00:58:26.250962] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:13.944 00:58:26 -- host/timeout.sh@56 -- # sleep 2 00:24:15.845 [2024-12-03 00:58:28.251021] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:15.845 [2024-12-03 00:58:28.251083] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:15.845 [2024-12-03 00:58:28.251100] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x87a8c0 with addr=10.0.0.2, port=4420 00:24:15.845 [2024-12-03 00:58:28.251110] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x87a8c0 is same with the state(5) to be set 00:24:15.845 [2024-12-03 00:58:28.251126] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x87a8c0 (9): Bad file descriptor 00:24:15.845 [2024-12-03 00:58:28.251149] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:15.845 [2024-12-03 00:58:28.251160] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:15.845 [2024-12-03 00:58:28.251167] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:15.845 [2024-12-03 00:58:28.251184] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:15.845 [2024-12-03 00:58:28.251193] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:15.845 00:58:28 -- host/timeout.sh@57 -- # get_controller 00:24:15.845 00:58:28 -- host/timeout.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:24:15.845 00:58:28 -- host/timeout.sh@41 -- # jq -r '.[].name' 00:24:16.104 00:58:28 -- host/timeout.sh@57 -- # [[ NVMe0 == \N\V\M\e\0 ]] 00:24:16.104 00:58:28 -- host/timeout.sh@58 -- # get_bdev 00:24:16.104 00:58:28 -- host/timeout.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs 00:24:16.104 00:58:28 -- host/timeout.sh@37 -- # jq -r '.[].name' 00:24:16.362 00:58:28 -- host/timeout.sh@58 -- # [[ NVMe0n1 == \N\V\M\e\0\n\1 ]] 00:24:16.362 00:58:28 -- host/timeout.sh@61 -- # sleep 5 00:24:18.319 [2024-12-03 00:58:30.251285] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.319 [2024-12-03 00:58:30.251384] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.319 [2024-12-03 00:58:30.251401] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x87a8c0 with addr=10.0.0.2, port=4420 00:24:18.319 [2024-12-03 00:58:30.251429] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x87a8c0 is same with the state(5) to be set 00:24:18.319 [2024-12-03 00:58:30.251450] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x87a8c0 (9): Bad file descriptor 00:24:18.319 [2024-12-03 00:58:30.251467] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:18.319 [2024-12-03 00:58:30.251475] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:18.319 [2024-12-03 00:58:30.251484] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:18.319 [2024-12-03 00:58:30.251509] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:18.319 [2024-12-03 00:58:30.251519] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:20.217 [2024-12-03 00:58:32.251536] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:20.217 [2024-12-03 00:58:32.251569] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:20.217 [2024-12-03 00:58:32.251589] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:20.217 [2024-12-03 00:58:32.251597] nvme_ctrlr.c:1017:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] already in failed state 00:24:20.217 [2024-12-03 00:58:32.251614] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
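Between the failed reconnect attempts above, host/timeout.sh@57 and @58 check that bdevperf still reports the controller (NVMe0) and its bdev (NVMe0n1) even though the TCP connection is down. A minimal sketch of what the traced get_controller / get_bdev helpers appear to do, reconstructed only from the xtrace output above (the actual function bodies in host/timeout.sh may differ):

# Sketch reconstructed from the xtrace; not copied from host/timeout.sh.
get_controller() {
    # Ask the bdevperf app over its private RPC socket which NVMe controllers are attached.
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock \
        bdev_nvme_get_controllers | jq -r '.[].name'
}

get_bdev() {
    # List the bdevs the app currently exposes.
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock \
        bdev_get_bdevs | jq -r '.[].name'
}

# While reconnects are still failing, the controller and bdev must remain registered:
[[ $(get_controller) == "NVMe0" ]]
[[ $(get_bdev) == "NVMe0n1" ]]

Later in the trace, after the controller-loss timeout has expired, the same checks return empty strings ([[ '' == '' ]]), i.e. the controller has been removed.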
00:24:20.782 00:24:20.782 Latency(us) 00:24:20.782 [2024-12-03T00:58:33.297Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:20.782 [2024-12-03T00:58:33.297Z] Job: NVMe0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:24:20.782 Verification LBA range: start 0x0 length 0x4000 00:24:20.782 NVMe0n1 : 8.13 1995.07 7.79 15.74 0.00 63574.03 2666.12 7015926.69 00:24:20.782 [2024-12-03T00:58:33.297Z] =================================================================================================================== 00:24:20.782 [2024-12-03T00:58:33.297Z] Total : 1995.07 7.79 15.74 0.00 63574.03 2666.12 7015926.69 00:24:20.782 0 00:24:21.348 00:58:33 -- host/timeout.sh@62 -- # get_controller 00:24:21.348 00:58:33 -- host/timeout.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:24:21.348 00:58:33 -- host/timeout.sh@41 -- # jq -r '.[].name' 00:24:21.606 00:58:34 -- host/timeout.sh@62 -- # [[ '' == '' ]] 00:24:21.606 00:58:34 -- host/timeout.sh@63 -- # get_bdev 00:24:21.606 00:58:34 -- host/timeout.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs 00:24:21.606 00:58:34 -- host/timeout.sh@37 -- # jq -r '.[].name' 00:24:21.863 00:58:34 -- host/timeout.sh@63 -- # [[ '' == '' ]] 00:24:21.863 00:58:34 -- host/timeout.sh@65 -- # wait 100558 00:24:21.863 00:58:34 -- host/timeout.sh@67 -- # killprocess 100516 00:24:21.863 00:58:34 -- common/autotest_common.sh@936 -- # '[' -z 100516 ']' 00:24:21.863 00:58:34 -- common/autotest_common.sh@940 -- # kill -0 100516 00:24:21.863 00:58:34 -- common/autotest_common.sh@941 -- # uname 00:24:21.863 00:58:34 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:24:21.863 00:58:34 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 100516 00:24:21.863 killing process with pid 100516 00:24:21.863 Received shutdown signal, test time was about 9.227242 seconds 00:24:21.863 00:24:21.863 Latency(us) 00:24:21.863 [2024-12-03T00:58:34.378Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:21.863 [2024-12-03T00:58:34.378Z] =================================================================================================================== 00:24:21.863 [2024-12-03T00:58:34.378Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:24:21.863 00:58:34 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:24:21.863 00:58:34 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:24:21.863 00:58:34 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 100516' 00:24:21.863 00:58:34 -- common/autotest_common.sh@955 -- # kill 100516 00:24:21.863 00:58:34 -- common/autotest_common.sh@960 -- # wait 100516 00:24:22.122 00:58:34 -- host/timeout.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:22.380 [2024-12-03 00:58:34.780898] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:22.380 00:58:34 -- host/timeout.sh@73 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -f 00:24:22.380 00:58:34 -- host/timeout.sh@74 -- # bdevperf_pid=100717 00:24:22.380 00:58:34 -- host/timeout.sh@76 -- # waitforlisten 100717 /var/tmp/bdevperf.sock 00:24:22.380 00:58:34 -- common/autotest_common.sh@829 -- # '[' -z 100717 ']' 00:24:22.380 00:58:34 -- 
common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:24:22.380 00:58:34 -- common/autotest_common.sh@834 -- # local max_retries=100 00:24:22.380 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:24:22.380 00:58:34 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:24:22.380 00:58:34 -- common/autotest_common.sh@838 -- # xtrace_disable 00:24:22.380 00:58:34 -- common/autotest_common.sh@10 -- # set +x 00:24:22.380 [2024-12-03 00:58:34.835455] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:24:22.380 [2024-12-03 00:58:34.835541] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid100717 ] 00:24:22.638 [2024-12-03 00:58:34.959793] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:22.638 [2024-12-03 00:58:35.027820] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:24:23.570 00:58:35 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:24:23.570 00:58:35 -- common/autotest_common.sh@862 -- # return 0 00:24:23.570 00:58:35 -- host/timeout.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:24:23.570 00:58:35 -- host/timeout.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 --ctrlr-loss-timeout-sec 5 --fast-io-fail-timeout-sec 2 --reconnect-delay-sec 1 00:24:23.832 NVMe0n1 00:24:23.832 00:58:36 -- host/timeout.sh@83 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:24:23.832 00:58:36 -- host/timeout.sh@84 -- # rpc_pid=100759 00:24:23.832 00:58:36 -- host/timeout.sh@86 -- # sleep 1 00:24:23.832 Running I/O for 10 seconds... 
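The second bdevperf instance is attached with explicit reconnect behaviour. Below is the same bdev_nvme_attach_controller call as traced above, with the three timing options annotated; the comments are an interpretation of what these SPDK options control, not output from this run:

# Attach call from the trace above; comments added for clarity, not part of the log.
#   --reconnect-delay-sec 1       wait about 1 s between reconnect attempts after a disconnect
#   --fast-io-fail-timeout-sec 2  after about 2 s disconnected, start failing queued I/O back to the upper layer
#   --ctrlr-loss-timeout-sec 5    after about 5 s without a successful reconnect, stop retrying and fail the controller
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock \
    bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
    -n nqn.2016-06.io.spdk:cnode1 \
    --ctrlr-loss-timeout-sec 5 --fast-io-fail-timeout-sec 2 --reconnect-delay-sec 1

With these settings, removing the target listener (as the next step does) should leave roughly five seconds of reconnect attempts before the controller is declared lost.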
00:24:24.826 00:58:37 -- host/timeout.sh@87 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:25.086 [2024-12-03 00:58:37.459344] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25feca0 is same with the state(5) to be set 00:24:25.087 [2024-12-03 00:58:37.459979] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25feca0 is same with the
state(5) to be set 00:24:25.087 [2024-12-03 00:58:37.459987] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25feca0 is same with the state(5) to be set 00:24:25.087 [2024-12-03 00:58:37.459994] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25feca0 is same with the state(5) to be set 00:24:25.087 [2024-12-03 00:58:37.460002] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25feca0 is same with the state(5) to be set 00:24:25.087 [2024-12-03 00:58:37.460009] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25feca0 is same with the state(5) to be set 00:24:25.087 [2024-12-03 00:58:37.460016] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25feca0 is same with the state(5) to be set 00:24:25.087 [2024-12-03 00:58:37.460024] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25feca0 is same with the state(5) to be set 00:24:25.087 [2024-12-03 00:58:37.460032] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25feca0 is same with the state(5) to be set 00:24:25.087 [2024-12-03 00:58:37.460040] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25feca0 is same with the state(5) to be set 00:24:25.087 [2024-12-03 00:58:37.460048] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25feca0 is same with the state(5) to be set 00:24:25.087 [2024-12-03 00:58:37.460055] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25feca0 is same with the state(5) to be set 00:24:25.087 [2024-12-03 00:58:37.460062] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25feca0 is same with the state(5) to be set 00:24:25.087 [2024-12-03 00:58:37.460070] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25feca0 is same with the state(5) to be set 00:24:25.087 [2024-12-03 00:58:37.460321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:126408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:25.087 [2024-12-03 00:58:37.460368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:25.087 [2024-12-03 00:58:37.460386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:126416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:25.087 [2024-12-03 00:58:37.460396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:25.087 [2024-12-03 00:58:37.460406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:126432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:25.087 [2024-12-03 00:58:37.460429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:25.087 [2024-12-03 00:58:37.460439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:126480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:25.087 [2024-12-03 00:58:37.460449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:25.087 [2024-12-03 00:58:37.460459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:126488 len:8 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:24:25.087 [2024-12-03 00:58:37.460467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:25.087 [2024-12-03 00:58:37.460476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:125728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:25.087 [2024-12-03 00:58:37.460484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:25.087 [2024-12-03 00:58:37.460493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:125744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:25.087 [2024-12-03 00:58:37.460500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:25.087 [2024-12-03 00:58:37.460509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:125768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:25.087 [2024-12-03 00:58:37.460517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:25.087 [2024-12-03 00:58:37.460526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:125784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:25.087 [2024-12-03 00:58:37.460534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:25.087 [2024-12-03 00:58:37.460543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:125792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:25.087 [2024-12-03 00:58:37.460551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:25.087 [2024-12-03 00:58:37.460560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:125800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:25.087 [2024-12-03 00:58:37.460567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:25.087 [2024-12-03 00:58:37.460576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:125808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:25.087 [2024-12-03 00:58:37.460585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:25.087 [2024-12-03 00:58:37.460594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:125816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:25.087 [2024-12-03 00:58:37.460601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:25.087 [2024-12-03 00:58:37.460610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:125832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:25.087 [2024-12-03 00:58:37.460617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:25.088 [2024-12-03 00:58:37.460626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:125840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:24:25.088 [2024-12-03 00:58:37.460634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:25.088 [2024-12-03 00:58:37.460643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:125848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:25.088 [2024-12-03 00:58:37.460651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:25.088 [2024-12-03 00:58:37.460660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:125904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:25.088 [2024-12-03 00:58:37.460672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:25.088 [2024-12-03 00:58:37.460683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:125920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:25.088 [2024-12-03 00:58:37.460691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:25.088 [2024-12-03 00:58:37.460700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:125928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:25.088 [2024-12-03 00:58:37.460708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:25.088 [2024-12-03 00:58:37.460717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:125936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:25.088 [2024-12-03 00:58:37.460725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:25.088 [2024-12-03 00:58:37.460734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:125944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:25.088 [2024-12-03 00:58:37.460741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:25.088 [2024-12-03 00:58:37.460750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:126504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:25.088 [2024-12-03 00:58:37.460758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:25.088 [2024-12-03 00:58:37.460766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:126512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:25.088 [2024-12-03 00:58:37.460774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:25.088 [2024-12-03 00:58:37.460783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:126520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:25.088 [2024-12-03 00:58:37.460790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:25.088 [2024-12-03 00:58:37.460799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:126528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:25.088 [2024-12-03 
00:58:37.460807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:25.088 [2024-12-03 00:58:37.460816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:126536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:25.088 [2024-12-03 00:58:37.460823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:25.088 [2024-12-03 00:58:37.460832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:126544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:25.088 [2024-12-03 00:58:37.460839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:25.088 [2024-12-03 00:58:37.460848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:126560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:25.088 [2024-12-03 00:58:37.460855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:25.088 [2024-12-03 00:58:37.460863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:126568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:25.088 [2024-12-03 00:58:37.460871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:25.088 [2024-12-03 00:58:37.460880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:126584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:25.088 [2024-12-03 00:58:37.460888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:25.088 [2024-12-03 00:58:37.460898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:126592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:25.088 [2024-12-03 00:58:37.460905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:25.088 [2024-12-03 00:58:37.460914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:126600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:25.088 [2024-12-03 00:58:37.460922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:25.088 [2024-12-03 00:58:37.460931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:126616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:25.088 [2024-12-03 00:58:37.460940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:25.088 [2024-12-03 00:58:37.460950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:126640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:25.088 [2024-12-03 00:58:37.460958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:25.088 [2024-12-03 00:58:37.460967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:126648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:25.088 [2024-12-03 00:58:37.460975] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:25.088 [2024-12-03 00:58:37.460984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:125952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:25.088 [2024-12-03 00:58:37.460992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:25.088 [2024-12-03 00:58:37.461001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:125968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:25.088 [2024-12-03 00:58:37.461009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:25.088 [2024-12-03 00:58:37.461018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:125992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:25.088 [2024-12-03 00:58:37.461026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:25.088 [2024-12-03 00:58:37.461035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:126000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:25.088 [2024-12-03 00:58:37.461042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:25.088 [2024-12-03 00:58:37.461059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:126016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:25.088 [2024-12-03 00:58:37.461067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:25.088 [2024-12-03 00:58:37.461076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:126024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:25.088 [2024-12-03 00:58:37.461084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:25.088 [2024-12-03 00:58:37.461093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:126032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:25.088 [2024-12-03 00:58:37.461101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:25.088 [2024-12-03 00:58:37.461110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:126056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:25.088 [2024-12-03 00:58:37.461117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:25.088 [2024-12-03 00:58:37.461126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:126064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:25.088 [2024-12-03 00:58:37.461133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:25.088 [2024-12-03 00:58:37.461142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:126080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:25.088 [2024-12-03 00:58:37.461150] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:25.088 [2024-12-03 00:58:37.461159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:126088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:25.088 [2024-12-03 00:58:37.461166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:25.088 [2024-12-03 00:58:37.461174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:126104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:25.088 [2024-12-03 00:58:37.461182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:25.088 [2024-12-03 00:58:37.461191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:126120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:25.088 [2024-12-03 00:58:37.461199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:25.088 [2024-12-03 00:58:37.461208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:126128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:25.088 [2024-12-03 00:58:37.461215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:25.088 [2024-12-03 00:58:37.461224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:126144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:25.088 [2024-12-03 00:58:37.461231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:25.088 [2024-12-03 00:58:37.461240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:126176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:25.088 [2024-12-03 00:58:37.461248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:25.088 [2024-12-03 00:58:37.461257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:126672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:25.088 [2024-12-03 00:58:37.461270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:25.088 [2024-12-03 00:58:37.461279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:126680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:25.088 [2024-12-03 00:58:37.461286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:25.089 [2024-12-03 00:58:37.461295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:126688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:25.089 [2024-12-03 00:58:37.461303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:25.089 [2024-12-03 00:58:37.461312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:126696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:25.089 [2024-12-03 00:58:37.461319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:25.089 [2024-12-03 00:58:37.461333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:126184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:25.089 [2024-12-03 00:58:37.461341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:25.089 [2024-12-03 00:58:37.461350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:126192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:25.089 [2024-12-03 00:58:37.461358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:25.089 [2024-12-03 00:58:37.461366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:126216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:25.089 [2024-12-03 00:58:37.461375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:25.089 [2024-12-03 00:58:37.461384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:126224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:25.089 [2024-12-03 00:58:37.461391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:25.089 [2024-12-03 00:58:37.461400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:126240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:25.089 [2024-12-03 00:58:37.461408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:25.089 [2024-12-03 00:58:37.461427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:126304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:25.089 [2024-12-03 00:58:37.461436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:25.089 [2024-12-03 00:58:37.461446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:126320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:25.089 [2024-12-03 00:58:37.461454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:25.089 [2024-12-03 00:58:37.461464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:126376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:25.089 [2024-12-03 00:58:37.461472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:25.089 [2024-12-03 00:58:37.461482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:126704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:25.089 [2024-12-03 00:58:37.461489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:25.089 [2024-12-03 00:58:37.461499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:126712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:25.089 [2024-12-03 00:58:37.461507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:24:25.089 [2024-12-03 00:58:37.461516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:126720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:25.089 [2024-12-03 00:58:37.461523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:25.089 [2024-12-03 00:58:37.461532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:126728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:25.089 [2024-12-03 00:58:37.461540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:25.089 [2024-12-03 00:58:37.461549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:126736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:25.089 [2024-12-03 00:58:37.461557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:25.089 [2024-12-03 00:58:37.461565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:126744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:25.089 [2024-12-03 00:58:37.461572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:25.089 [2024-12-03 00:58:37.461581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:126752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:25.089 [2024-12-03 00:58:37.461588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:25.089 [2024-12-03 00:58:37.461597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:126760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:25.089 [2024-12-03 00:58:37.461605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:25.089 [2024-12-03 00:58:37.461619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:126768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:25.089 [2024-12-03 00:58:37.461627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:25.089 [2024-12-03 00:58:37.461636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:126776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:25.089 [2024-12-03 00:58:37.461643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:25.089 [2024-12-03 00:58:37.461653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:126784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:25.089 [2024-12-03 00:58:37.461660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:25.089 [2024-12-03 00:58:37.461669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:126792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:25.089 [2024-12-03 00:58:37.461679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:25.089 
[2024-12-03 00:58:37.461688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:126800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:25.089 [2024-12-03 00:58:37.461696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:25.089 [2024-12-03 00:58:37.461705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:126808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:25.089 [2024-12-03 00:58:37.461714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:25.089 [2024-12-03 00:58:37.461724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:126816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:25.089 [2024-12-03 00:58:37.461732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:25.089 [2024-12-03 00:58:37.461741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:126824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:25.089 [2024-12-03 00:58:37.461749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:25.089 [2024-12-03 00:58:37.461758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:126832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:25.089 [2024-12-03 00:58:37.461766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:25.089 [2024-12-03 00:58:37.461775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:126840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:25.089 [2024-12-03 00:58:37.461782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:25.089 [2024-12-03 00:58:37.461791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:126848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:25.089 [2024-12-03 00:58:37.461799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:25.089 [2024-12-03 00:58:37.461808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:126856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:25.089 [2024-12-03 00:58:37.461815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:25.089 [2024-12-03 00:58:37.461824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:126864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:25.089 [2024-12-03 00:58:37.461831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:25.089 [2024-12-03 00:58:37.461840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:126872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:25.089 [2024-12-03 00:58:37.461848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:25.089 [2024-12-03 00:58:37.461856] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:126880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:25.089 [2024-12-03 00:58:37.461864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:25.089 [2024-12-03 00:58:37.461872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:126888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:25.089 [2024-12-03 00:58:37.461880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:25.089 [2024-12-03 00:58:37.461894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:126896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:25.089 [2024-12-03 00:58:37.461902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:25.089 [2024-12-03 00:58:37.461911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:126904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:25.090 [2024-12-03 00:58:37.461919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:25.090 [2024-12-03 00:58:37.461929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:126912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:25.090 [2024-12-03 00:58:37.461936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:25.090 [2024-12-03 00:58:37.461945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:126920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:25.090 [2024-12-03 00:58:37.461957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:25.090 [2024-12-03 00:58:37.461967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:126928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:25.090 [2024-12-03 00:58:37.461975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:25.090 [2024-12-03 00:58:37.461985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:126936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:25.090 [2024-12-03 00:58:37.461993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:25.090 [2024-12-03 00:58:37.462003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:126944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:25.090 [2024-12-03 00:58:37.462010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:25.090 [2024-12-03 00:58:37.462018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:126392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:25.090 [2024-12-03 00:58:37.462026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:25.090 [2024-12-03 00:58:37.462035] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:126400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:25.090 [2024-12-03 00:58:37.462043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:25.090 [2024-12-03 00:58:37.462052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:126424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:25.090 [2024-12-03 00:58:37.462059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:25.090 [2024-12-03 00:58:37.462068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:126440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:25.090 [2024-12-03 00:58:37.462075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:25.090 [2024-12-03 00:58:37.462084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:126448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:25.090 [2024-12-03 00:58:37.462091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:25.090 [2024-12-03 00:58:37.462100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:126456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:25.090 [2024-12-03 00:58:37.462107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:25.090 [2024-12-03 00:58:37.462116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:126464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:25.090 [2024-12-03 00:58:37.462123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:25.090 [2024-12-03 00:58:37.462132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:126472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:25.090 [2024-12-03 00:58:37.462139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:25.090 [2024-12-03 00:58:37.462147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:126952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:25.090 [2024-12-03 00:58:37.462154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:25.090 [2024-12-03 00:58:37.462170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:126960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:25.090 [2024-12-03 00:58:37.462178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:25.090 [2024-12-03 00:58:37.462187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:126968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:25.090 [2024-12-03 00:58:37.462194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:25.090 [2024-12-03 00:58:37.462206] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:126976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:25.090 [2024-12-03 00:58:37.462214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:25.090 [2024-12-03 00:58:37.462223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:126984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:25.090 [2024-12-03 00:58:37.462230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:25.090 [2024-12-03 00:58:37.462247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:126992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:25.090 [2024-12-03 00:58:37.462256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:25.090 [2024-12-03 00:58:37.462272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:127000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:25.090 [2024-12-03 00:58:37.462280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:25.090 [2024-12-03 00:58:37.462289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:127008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:25.090 [2024-12-03 00:58:37.462297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:25.090 [2024-12-03 00:58:37.462306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:127016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:25.090 [2024-12-03 00:58:37.462313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:25.090 [2024-12-03 00:58:37.462322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:127024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:25.090 [2024-12-03 00:58:37.462330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:25.090 [2024-12-03 00:58:37.462339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:127032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:25.090 [2024-12-03 00:58:37.462346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:25.090 [2024-12-03 00:58:37.462354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:127040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:25.090 [2024-12-03 00:58:37.462362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:25.090 [2024-12-03 00:58:37.462371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:127048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:25.090 [2024-12-03 00:58:37.462378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:25.090 [2024-12-03 00:58:37.462387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 
nsid:1 lba:127056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:25.090 [2024-12-03 00:58:37.462395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:25.090 [2024-12-03 00:58:37.462404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:127064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:25.090 [2024-12-03 00:58:37.462418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:25.090 [2024-12-03 00:58:37.462429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:127072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:25.090 [2024-12-03 00:58:37.462437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:25.090 [2024-12-03 00:58:37.462445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:127080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:25.090 [2024-12-03 00:58:37.462453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:25.090 [2024-12-03 00:58:37.462468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:127088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:25.090 [2024-12-03 00:58:37.462476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:25.090 [2024-12-03 00:58:37.462485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:126496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:25.090 [2024-12-03 00:58:37.462492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:25.090 [2024-12-03 00:58:37.462507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:126552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:25.090 [2024-12-03 00:58:37.462515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:25.090 [2024-12-03 00:58:37.462524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:126576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:25.090 [2024-12-03 00:58:37.462532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:25.090 [2024-12-03 00:58:37.462542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:126608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:25.090 [2024-12-03 00:58:37.462549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:25.090 [2024-12-03 00:58:37.462559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:126624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:25.090 [2024-12-03 00:58:37.462567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:25.090 [2024-12-03 00:58:37.462576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:126632 len:8 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:24:25.090 [2024-12-03 00:58:37.462583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:25.090 [2024-12-03 00:58:37.462595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:126656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:25.090 [2024-12-03 00:58:37.462603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:25.090 [2024-12-03 00:58:37.462611] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc67660 is same with the state(5) to be set 00:24:25.090 [2024-12-03 00:58:37.462620] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:25.091 [2024-12-03 00:58:37.462627] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:25.091 [2024-12-03 00:58:37.462633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:126664 len:8 PRP1 0x0 PRP2 0x0 00:24:25.091 [2024-12-03 00:58:37.462641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:25.091 [2024-12-03 00:58:37.462697] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0xc67660 was disconnected and freed. reset controller. 00:24:25.091 [2024-12-03 00:58:37.462892] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:25.091 [2024-12-03 00:58:37.462958] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbe28c0 (9): Bad file descriptor 00:24:25.091 [2024-12-03 00:58:37.463032] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:25.091 [2024-12-03 00:58:37.463074] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:25.091 [2024-12-03 00:58:37.463088] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe28c0 with addr=10.0.0.2, port=4420 00:24:25.091 [2024-12-03 00:58:37.463098] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbe28c0 is same with the state(5) to be set 00:24:25.091 [2024-12-03 00:58:37.463112] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbe28c0 (9): Bad file descriptor 00:24:25.091 [2024-12-03 00:58:37.463133] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:25.091 [2024-12-03 00:58:37.463141] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:25.091 [2024-12-03 00:58:37.463149] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:25.091 [2024-12-03 00:58:37.463171] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:25.091 [2024-12-03 00:58:37.463180] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:25.091 00:58:37 -- host/timeout.sh@90 -- # sleep 1
00:24:26.025 [2024-12-03 00:58:38.463242] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:26.025 [2024-12-03 00:58:38.463306] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:26.025 [2024-12-03 00:58:38.463323] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe28c0 with addr=10.0.0.2, port=4420
00:24:26.025 [2024-12-03 00:58:38.463332] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbe28c0 is same with the state(5) to be set
00:24:26.025 [2024-12-03 00:58:38.463349] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbe28c0 (9): Bad file descriptor
00:24:26.025 [2024-12-03 00:58:38.463363] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:26.025 [2024-12-03 00:58:38.463371] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:26.025 [2024-12-03 00:58:38.463378] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:26.025 [2024-12-03 00:58:38.463394] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:26.025 [2024-12-03 00:58:38.463404] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:26.025 00:58:38 -- host/timeout.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:24:26.282 [2024-12-03 00:58:38.717182] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:24:26.282 00:58:38 -- host/timeout.sh@92 -- # wait 100759
00:24:27.215 [2024-12-03 00:58:39.477687] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
00:24:35.331
00:24:35.331 Latency(us)
00:24:35.331 [2024-12-03T00:58:47.846Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:24:35.331 [2024-12-03T00:58:47.846Z] Job: NVMe0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096)
00:24:35.331 Verification LBA range: start 0x0 length 0x4000
00:24:35.331 NVMe0n1 : 10.00 10411.77 40.67 0.00 0.00 12274.57 621.85 3019898.88
00:24:35.331 [2024-12-03T00:58:47.846Z] ===================================================================================================================
00:24:35.331 [2024-12-03T00:58:47.846Z] Total : 10411.77 40.67 0.00 0.00 12274.57 621.85 3019898.88
00:24:35.331 0
00:24:35.331 00:58:46 -- host/timeout.sh@97 -- # rpc_pid=100881
00:58:46 -- host/timeout.sh@96 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests
00:58:46 -- host/timeout.sh@98 -- # sleep 1
00:24:35.331 Running I/O for 10 seconds...
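The excerpt above is the listener-toggle cycle that host/timeout.sh drives: the target's TCP listener is removed while bdevperf runs I/O against nqn.2016-06.io.spdk:cnode1, the initiator's reconnect attempts fail with connect() errno 111 and the controller reset keeps failing, and once the listener is re-added the reset succeeds and the 10-second run completes. The summary table is self-consistent: 10411.77 IOPS at the 4096-byte I/O size works out to 10411.77 × 4 KiB ≈ 40.67 MiB/s, matching the throughput column. A minimal shell sketch of that cycle, using the rpc.py path printed in the trace and $rpc/$subsys only as illustrative shorthand (this is not the verbatim test script), looks like:

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py   # path as printed in the trace above
  subsys=nqn.2016-06.io.spdk:cnode1

  # Drop the TCP listener: the host's reconnects now fail with connect() errno = 111.
  $rpc nvmf_subsystem_remove_listener "$subsys" -t tcp -a 10.0.0.2 -s 4420
  sleep 1
  # Restore the listener: the next controller reset succeeds and I/O resumes.
  $rpc nvmf_subsystem_add_listener "$subsys" -t tcp -a 10.0.0.2 -s 4420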
00:24:35.331 00:58:47 -- host/timeout.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:35.331 [2024-12-03 00:58:47.583858] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x245a110 is same with the state(5) to be set [same tcp.c:1576 record for tqpair=0x245a110 repeated roughly sixty more times, 00:58:47.583934 through 00:58:47.584443, identical apart from the timestamp] 00:24:35.331 [2024-12-03 00:58:47.584451] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x245a110 is same with the
state(5) to be set 00:24:35.331 [2024-12-03 00:58:47.584459] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x245a110 is same with the state(5) to be set 00:24:35.331 [2024-12-03 00:58:47.584467] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x245a110 is same with the state(5) to be set 00:24:35.331 [2024-12-03 00:58:47.584775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:127552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.332 [2024-12-03 00:58:47.584813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.332 [2024-12-03 00:58:47.584833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:127560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.332 [2024-12-03 00:58:47.584849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.332 [2024-12-03 00:58:47.584858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:127568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.332 [2024-12-03 00:58:47.584866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.332 [2024-12-03 00:58:47.584875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:127576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.332 [2024-12-03 00:58:47.584883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.332 [2024-12-03 00:58:47.584892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:127584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.332 [2024-12-03 00:58:47.584900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.332 [2024-12-03 00:58:47.584909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:127592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.332 [2024-12-03 00:58:47.584916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.332 [2024-12-03 00:58:47.584925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:127600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.332 [2024-12-03 00:58:47.584933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.332 [2024-12-03 00:58:47.584941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:127608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.332 [2024-12-03 00:58:47.584950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.332 [2024-12-03 00:58:47.584961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:127616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.332 [2024-12-03 00:58:47.584968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.332 [2024-12-03 
00:58:47.584978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:127624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.332 [2024-12-03 00:58:47.584985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.332 [2024-12-03 00:58:47.584994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:127632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.332 [2024-12-03 00:58:47.585002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.332 [2024-12-03 00:58:47.585011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:127648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.332 [2024-12-03 00:58:47.585018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.332 [2024-12-03 00:58:47.585027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:127656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.332 [2024-12-03 00:58:47.585034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.332 [2024-12-03 00:58:47.585043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:126976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.332 [2024-12-03 00:58:47.585051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.332 [2024-12-03 00:58:47.585060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:127000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.332 [2024-12-03 00:58:47.585068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.332 [2024-12-03 00:58:47.585076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:127008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.332 [2024-12-03 00:58:47.585084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.332 [2024-12-03 00:58:47.585093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:127016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.332 [2024-12-03 00:58:47.585103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.332 [2024-12-03 00:58:47.585113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:127040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.332 [2024-12-03 00:58:47.585121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.332 [2024-12-03 00:58:47.585130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:127048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.332 [2024-12-03 00:58:47.585138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.332 [2024-12-03 00:58:47.585147] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:127056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.332 [2024-12-03 00:58:47.585155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.332 [2024-12-03 00:58:47.585166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:127064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.332 [2024-12-03 00:58:47.585174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.332 [2024-12-03 00:58:47.585183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:127672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.332 [2024-12-03 00:58:47.585191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.332 [2024-12-03 00:58:47.585200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:127688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.332 [2024-12-03 00:58:47.585208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.332 [2024-12-03 00:58:47.585217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:127720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.332 [2024-12-03 00:58:47.585225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.332 [2024-12-03 00:58:47.585234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:127728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.332 [2024-12-03 00:58:47.585242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.332 [2024-12-03 00:58:47.585252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:127752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.332 [2024-12-03 00:58:47.585260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.332 [2024-12-03 00:58:47.585269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:127760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.332 [2024-12-03 00:58:47.585277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.332 [2024-12-03 00:58:47.585286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:127768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.332 [2024-12-03 00:58:47.585293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.332 [2024-12-03 00:58:47.585303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:127792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.332 [2024-12-03 00:58:47.585310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.332 [2024-12-03 00:58:47.585320] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:99 nsid:1 lba:127800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.332 [2024-12-03 00:58:47.585327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.332 [2024-12-03 00:58:47.585336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:127080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.332 [2024-12-03 00:58:47.585344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.332 [2024-12-03 00:58:47.585353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:127088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.332 [2024-12-03 00:58:47.585362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.332 [2024-12-03 00:58:47.585371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:127128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.332 [2024-12-03 00:58:47.585380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.332 [2024-12-03 00:58:47.585390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:127160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.332 [2024-12-03 00:58:47.585399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.332 [2024-12-03 00:58:47.585408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:127184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.332 [2024-12-03 00:58:47.585433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.332 [2024-12-03 00:58:47.585443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:127200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.332 [2024-12-03 00:58:47.585451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.332 [2024-12-03 00:58:47.585460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:127208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.332 [2024-12-03 00:58:47.585468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.332 [2024-12-03 00:58:47.585478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:127216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.332 [2024-12-03 00:58:47.585499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.332 [2024-12-03 00:58:47.585508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:127808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.332 [2024-12-03 00:58:47.585516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.332 [2024-12-03 00:58:47.585525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 
nsid:1 lba:127816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.332 [2024-12-03 00:58:47.585533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.333 [2024-12-03 00:58:47.585542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:127832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.333 [2024-12-03 00:58:47.585550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.333 [2024-12-03 00:58:47.585559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:127848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:35.333 [2024-12-03 00:58:47.585567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.333 [2024-12-03 00:58:47.585576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:127856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.333 [2024-12-03 00:58:47.585583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.333 [2024-12-03 00:58:47.585593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:127864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.333 [2024-12-03 00:58:47.585601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.333 [2024-12-03 00:58:47.585610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:127872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.333 [2024-12-03 00:58:47.585617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.333 [2024-12-03 00:58:47.585627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:127880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:35.333 [2024-12-03 00:58:47.585635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.333 [2024-12-03 00:58:47.585644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:127888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.333 [2024-12-03 00:58:47.585651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.333 [2024-12-03 00:58:47.585661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:127896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.333 [2024-12-03 00:58:47.585670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.333 [2024-12-03 00:58:47.585679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:127904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:35.333 [2024-12-03 00:58:47.585687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.333 [2024-12-03 00:58:47.585696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:127232 len:8 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.333 [2024-12-03 00:58:47.585704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.333 [2024-12-03 00:58:47.585714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:127240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.333 [2024-12-03 00:58:47.585722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.333 [2024-12-03 00:58:47.585731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:127272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.333 [2024-12-03 00:58:47.585739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.333 [2024-12-03 00:58:47.585748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:127288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.333 [2024-12-03 00:58:47.585755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.333 [2024-12-03 00:58:47.585764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:127312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.333 [2024-12-03 00:58:47.585772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.333 [2024-12-03 00:58:47.585782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:127320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.333 [2024-12-03 00:58:47.585790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.333 [2024-12-03 00:58:47.585799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:127328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.333 [2024-12-03 00:58:47.585807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.333 [2024-12-03 00:58:47.585816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:127368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.333 [2024-12-03 00:58:47.585824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.333 [2024-12-03 00:58:47.585833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:127912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:35.333 [2024-12-03 00:58:47.585842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.333 [2024-12-03 00:58:47.585851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:127920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:35.333 [2024-12-03 00:58:47.585859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.333 [2024-12-03 00:58:47.585868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:127928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:24:35.333 [2024-12-03 00:58:47.585876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.333 [2024-12-03 00:58:47.585885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:127936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:35.333 [2024-12-03 00:58:47.585901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.333 [2024-12-03 00:58:47.585910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:127944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.333 [2024-12-03 00:58:47.585917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.333 [2024-12-03 00:58:47.585932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:127952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:35.333 [2024-12-03 00:58:47.585940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.333 [2024-12-03 00:58:47.585949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:127960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.333 [2024-12-03 00:58:47.585957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.333 [2024-12-03 00:58:47.585966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:127968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.333 [2024-12-03 00:58:47.585974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.333 [2024-12-03 00:58:47.585983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:127976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.333 [2024-12-03 00:58:47.585990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.333 [2024-12-03 00:58:47.585998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:127984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:35.333 [2024-12-03 00:58:47.586006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.333 [2024-12-03 00:58:47.586015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:127992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:35.333 [2024-12-03 00:58:47.586023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.333 [2024-12-03 00:58:47.586032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:127376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.333 [2024-12-03 00:58:47.586039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.333 [2024-12-03 00:58:47.586048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:127384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.333 [2024-12-03 
00:58:47.586055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.333 [2024-12-03 00:58:47.586066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:127392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.333 [2024-12-03 00:58:47.586074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.333 [2024-12-03 00:58:47.586084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:127408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.333 [2024-12-03 00:58:47.586091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.333 [2024-12-03 00:58:47.586101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:127448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.333 [2024-12-03 00:58:47.586108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.333 [2024-12-03 00:58:47.586118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:127464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.333 [2024-12-03 00:58:47.586126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.333 [2024-12-03 00:58:47.586135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:127472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.333 [2024-12-03 00:58:47.586143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.333 [2024-12-03 00:58:47.586152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:127512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.333 [2024-12-03 00:58:47.586160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.333 [2024-12-03 00:58:47.586169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:128000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.333 [2024-12-03 00:58:47.586183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.333 [2024-12-03 00:58:47.586192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:128008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.333 [2024-12-03 00:58:47.586199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.333 [2024-12-03 00:58:47.586208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:128016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:35.333 [2024-12-03 00:58:47.586216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.333 [2024-12-03 00:58:47.586225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:128024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:35.333 [2024-12-03 00:58:47.586233] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.334 [2024-12-03 00:58:47.586242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:128032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.334 [2024-12-03 00:58:47.586258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.334 [2024-12-03 00:58:47.586276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:128040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.334 [2024-12-03 00:58:47.586283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.334 [2024-12-03 00:58:47.586293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:128048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:35.334 [2024-12-03 00:58:47.586301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.334 [2024-12-03 00:58:47.586311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:128056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.334 [2024-12-03 00:58:47.586318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.334 [2024-12-03 00:58:47.586328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:128064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.334 [2024-12-03 00:58:47.586335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.334 [2024-12-03 00:58:47.586344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:128072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.334 [2024-12-03 00:58:47.586352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.334 [2024-12-03 00:58:47.586362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:128080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:35.334 [2024-12-03 00:58:47.586369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.334 [2024-12-03 00:58:47.586378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:128088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:35.334 [2024-12-03 00:58:47.586386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.334 [2024-12-03 00:58:47.586395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:128096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:35.334 [2024-12-03 00:58:47.586404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.334 [2024-12-03 00:58:47.586423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:128104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.334 [2024-12-03 00:58:47.586444] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.334 [2024-12-03 00:58:47.586453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:128112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.334 [2024-12-03 00:58:47.586461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.334 [2024-12-03 00:58:47.586470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:128120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:35.334 [2024-12-03 00:58:47.586478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.334 [2024-12-03 00:58:47.586487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:128128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:35.334 [2024-12-03 00:58:47.586501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.334 [2024-12-03 00:58:47.586511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:128136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:35.334 [2024-12-03 00:58:47.586519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.334 [2024-12-03 00:58:47.586528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:127520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.334 [2024-12-03 00:58:47.586535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.334 [2024-12-03 00:58:47.586545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:127528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.334 [2024-12-03 00:58:47.586553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.334 [2024-12-03 00:58:47.586562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:127536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.334 [2024-12-03 00:58:47.586570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.334 [2024-12-03 00:58:47.586579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:127544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.334 [2024-12-03 00:58:47.586595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.334 [2024-12-03 00:58:47.586604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:127640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.334 [2024-12-03 00:58:47.586612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.334 [2024-12-03 00:58:47.586621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:127664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.334 [2024-12-03 00:58:47.586628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.334 [2024-12-03 00:58:47.586637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:127680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.334 [2024-12-03 00:58:47.586645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.334 [2024-12-03 00:58:47.586653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:127696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.334 [2024-12-03 00:58:47.586661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.334 [2024-12-03 00:58:47.586676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:128144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.334 [2024-12-03 00:58:47.586684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.334 [2024-12-03 00:58:47.586693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:128152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:35.334 [2024-12-03 00:58:47.586701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.334 [2024-12-03 00:58:47.586710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:128160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.334 [2024-12-03 00:58:47.586718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.334 [2024-12-03 00:58:47.586727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:128168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:35.334 [2024-12-03 00:58:47.586734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.334 [2024-12-03 00:58:47.586743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:128176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:35.334 [2024-12-03 00:58:47.586761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.334 [2024-12-03 00:58:47.586771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:128184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.334 [2024-12-03 00:58:47.586779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.334 [2024-12-03 00:58:47.586788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:128192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.334 [2024-12-03 00:58:47.586800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.334 [2024-12-03 00:58:47.586809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:128200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.334 [2024-12-03 00:58:47.586817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:24:35.334 [2024-12-03 00:58:47.586826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:128208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:35.334 [2024-12-03 00:58:47.586834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.334 [2024-12-03 00:58:47.586843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:128216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.334 [2024-12-03 00:58:47.586851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.334 [2024-12-03 00:58:47.586860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:128224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.334 [2024-12-03 00:58:47.586867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.334 [2024-12-03 00:58:47.586876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:128232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.334 [2024-12-03 00:58:47.586884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.334 [2024-12-03 00:58:47.586893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:128240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:35.334 [2024-12-03 00:58:47.586901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.334 [2024-12-03 00:58:47.586910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:128248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:35.334 [2024-12-03 00:58:47.586917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.334 [2024-12-03 00:58:47.586925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:128256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.334 [2024-12-03 00:58:47.586933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.334 [2024-12-03 00:58:47.586941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:128264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.334 [2024-12-03 00:58:47.586949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.334 [2024-12-03 00:58:47.586963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:128272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.334 [2024-12-03 00:58:47.586971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.334 [2024-12-03 00:58:47.586980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:128280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.335 [2024-12-03 00:58:47.586987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.335 
[2024-12-03 00:58:47.586997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:127704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.335 [2024-12-03 00:58:47.587005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.335 [2024-12-03 00:58:47.587014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:127712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.335 [2024-12-03 00:58:47.587021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.335 [2024-12-03 00:58:47.587031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:127736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.335 [2024-12-03 00:58:47.587038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.335 [2024-12-03 00:58:47.587047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:127744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.335 [2024-12-03 00:58:47.587055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.335 [2024-12-03 00:58:47.587063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:127776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.335 [2024-12-03 00:58:47.587072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.335 [2024-12-03 00:58:47.587082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:127784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.335 [2024-12-03 00:58:47.587090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.335 [2024-12-03 00:58:47.587099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:127824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.335 [2024-12-03 00:58:47.587107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.335 [2024-12-03 00:58:47.587115] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc331d0 is same with the state(5) to be set 00:24:35.335 [2024-12-03 00:58:47.587126] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:35.335 [2024-12-03 00:58:47.587132] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:35.335 [2024-12-03 00:58:47.587139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:127840 len:8 PRP1 0x0 PRP2 0x0 00:24:35.335 [2024-12-03 00:58:47.587147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.335 [2024-12-03 00:58:47.587203] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0xc331d0 was disconnected and freed. reset controller. 
00:24:35.335 [2024-12-03 00:58:47.587384] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:35.335 [2024-12-03 00:58:47.587461] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbe28c0 (9): Bad file descriptor 00:24:35.335 [2024-12-03 00:58:47.587577] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:35.335 [2024-12-03 00:58:47.587621] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:35.335 [2024-12-03 00:58:47.587637] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe28c0 with addr=10.0.0.2, port=4420 00:24:35.335 [2024-12-03 00:58:47.587646] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbe28c0 is same with the state(5) to be set 00:24:35.335 [2024-12-03 00:58:47.587661] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbe28c0 (9): Bad file descriptor 00:24:35.335 [2024-12-03 00:58:47.587675] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:35.335 [2024-12-03 00:58:47.587690] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:35.335 [2024-12-03 00:58:47.587700] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:35.335 [2024-12-03 00:58:47.587717] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:35.335 [2024-12-03 00:58:47.587727] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:35.335 00:58:47 -- host/timeout.sh@101 -- # sleep 3 00:24:36.270 [2024-12-03 00:58:48.587788] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.270 [2024-12-03 00:58:48.587859] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.270 [2024-12-03 00:58:48.587875] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe28c0 with addr=10.0.0.2, port=4420 00:24:36.270 [2024-12-03 00:58:48.587885] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbe28c0 is same with the state(5) to be set 00:24:36.270 [2024-12-03 00:58:48.587901] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbe28c0 (9): Bad file descriptor 00:24:36.270 [2024-12-03 00:58:48.587916] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:36.270 [2024-12-03 00:58:48.587924] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:36.270 [2024-12-03 00:58:48.587932] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:36.270 [2024-12-03 00:58:48.587949] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:36.271 [2024-12-03 00:58:48.587959] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:37.204 [2024-12-03 00:58:49.588022] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:37.204 [2024-12-03 00:58:49.588083] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:37.204 [2024-12-03 00:58:49.588100] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe28c0 with addr=10.0.0.2, port=4420
00:24:37.204 [2024-12-03 00:58:49.588111] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbe28c0 is same with the state(5) to be set
00:24:37.204 [2024-12-03 00:58:49.588128] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbe28c0 (9): Bad file descriptor
00:24:37.204 [2024-12-03 00:58:49.588143] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:37.204 [2024-12-03 00:58:49.588151] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:37.204 [2024-12-03 00:58:49.588159] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:37.204 [2024-12-03 00:58:49.588176] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:37.204 [2024-12-03 00:58:49.588186] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:38.157 [2024-12-03 00:58:50.589906] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:38.157 [2024-12-03 00:58:50.589973] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:38.157 [2024-12-03 00:58:50.589989] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe28c0 with addr=10.0.0.2, port=4420
00:24:38.157 [2024-12-03 00:58:50.589999] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbe28c0 is same with the state(5) to be set
00:24:38.157 [2024-12-03 00:58:50.590148] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbe28c0 (9): Bad file descriptor
00:24:38.157 [2024-12-03 00:58:50.590271] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:38.157 [2024-12-03 00:58:50.590284] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:38.157 [2024-12-03 00:58:50.590292] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:38.157 [2024-12-03 00:58:50.592217] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:38.157 [2024-12-03 00:58:50.592239] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:38.157 00:58:50 -- host/timeout.sh@102 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:24:38.415 [2024-12-03 00:58:50.850639] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:24:38.415 00:58:50 -- host/timeout.sh@103 -- # wait 100881
00:24:39.350 [2024-12-03 00:58:51.610943] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
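The reconnect loop above is the behavior host/timeout.sh is exercising: with the target's TCP listener gone, every host-side connect to 10.0.0.2:4420 fails with errno 111 (ECONNREFUSED), the controller reset fails, and bdev_nvme retries roughly once per second until the script restores the listener, at which point the reset completes. A minimal sketch of that listener toggle, using the same rpc.py path, subsystem NQN, and address seen in this run (the sleep length is illustrative, not taken from the script):

  # drop the TCP listener so host reconnect attempts fail with ECONNREFUSED (errno 111)
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  # let the host-side bdev_nvme reset/reconnect loop run for a few seconds
  sleep 3
  # restore the listener; the next reconnect attempt succeeds and the controller reset completes
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420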
00:24:44.615
00:24:44.615 Latency(us)
00:24:44.615 [2024-12-03T00:58:57.130Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:24:44.615 [2024-12-03T00:58:57.130Z] Job: NVMe0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096)
00:24:44.615 Verification LBA range: start 0x0 length 0x4000
00:24:44.615 NVMe0n1 : 10.01 8873.06 34.66 7426.29 0.00 7841.36 700.04 3019898.88
00:24:44.615 [2024-12-03T00:58:57.130Z] ===================================================================================================================
00:24:44.615 [2024-12-03T00:58:57.130Z] Total : 8873.06 34.66 7426.29 0.00 7841.36 0.00 3019898.88
00:24:44.615 0
00:24:44.615 00:58:56 -- host/timeout.sh@105 -- # killprocess 100717
00:24:44.615 00:58:56 -- common/autotest_common.sh@936 -- # '[' -z 100717 ']'
00:24:44.615 00:58:56 -- common/autotest_common.sh@940 -- # kill -0 100717
00:24:44.615 00:58:56 -- common/autotest_common.sh@941 -- # uname
00:24:44.615 00:58:56 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']'
00:24:44.615 00:58:56 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 100717
00:24:44.615 killing process with pid 100717
Received shutdown signal, test time was about 10.000000 seconds
00:24:44.615
00:24:44.615 Latency(us)
00:24:44.615 [2024-12-03T00:58:57.130Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:24:44.615 [2024-12-03T00:58:57.130Z] ===================================================================================================================
00:24:44.615 [2024-12-03T00:58:57.130Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:58:56 -- common/autotest_common.sh@942 -- # process_name=reactor_2
00:58:56 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']'
00:58:56 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 100717'
00:58:56 -- common/autotest_common.sh@955 -- # kill 100717
00:58:56 -- common/autotest_common.sh@960 -- # wait 100717
00:24:44.615 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...
00:24:44.615 00:58:56 -- host/timeout.sh@110 -- # bdevperf_pid=101008
00:24:44.615 00:58:56 -- host/timeout.sh@112 -- # waitforlisten 101008 /var/tmp/bdevperf.sock
00:24:44.615 00:58:56 -- common/autotest_common.sh@829 -- # '[' -z 101008 ']'
00:24:44.615 00:58:56 -- host/timeout.sh@109 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w randread -t 10 -f
00:24:44.615 00:58:56 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock
00:24:44.615 00:58:56 -- common/autotest_common.sh@834 -- # local max_retries=100
00:24:44.615 00:58:56 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...'
00:24:44.615 00:58:56 -- common/autotest_common.sh@838 -- # xtrace_disable
00:24:44.615 00:58:56 -- common/autotest_common.sh@10 -- # set +x
00:24:44.615 [2024-12-03 00:58:56.855884] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
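This bdevperf instance is launched with -z, so it initializes and then waits on its private RPC socket (-r /var/tmp/bdevperf.sock) instead of starting I/O; the harness only proceeds once waitforlisten sees that socket. The socket-driven sequence this run uses, collected in order from the log entries that follow (commands copied from the log; the backgrounding and wait are a paraphrase of the waitforlisten step):

  # start bdevperf paused (-z) with its own RPC socket; 128-deep 4096-byte random reads for 10 s
  /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w randread -t 10 -f &
  # the harness waits for the socket (waitforlisten $bdevperf_pid /var/tmp/bdevperf.sock) before issuing RPCs
  # apply the bdev_nvme options used by this test, then attach NVMe0 with a 5 s controller-loss timeout and 2 s reconnect delay
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 -e 9
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 --ctrlr-loss-timeout-sec 5 --reconnect-delay-sec 2
  # kick off the workload; bdevperf stays idle until this RPC arrives
  /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests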
00:24:44.615 [2024-12-03 00:58:56.855972] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid101008 ] 00:24:44.615 [2024-12-03 00:58:56.996950] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:44.615 [2024-12-03 00:58:57.056002] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:24:45.550 00:58:57 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:24:45.550 00:58:57 -- common/autotest_common.sh@862 -- # return 0 00:24:45.550 00:58:57 -- host/timeout.sh@115 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 101008 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_timeout.bt 00:24:45.550 00:58:57 -- host/timeout.sh@116 -- # dtrace_pid=101033 00:24:45.550 00:58:57 -- host/timeout.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 -e 9 00:24:45.550 00:58:58 -- host/timeout.sh@120 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 --ctrlr-loss-timeout-sec 5 --reconnect-delay-sec 2 00:24:45.808 NVMe0n1 00:24:46.066 00:58:58 -- host/timeout.sh@124 -- # rpc_pid=101089 00:24:46.066 00:58:58 -- host/timeout.sh@123 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:24:46.066 00:58:58 -- host/timeout.sh@125 -- # sleep 1 00:24:46.066 Running I/O for 10 seconds... 00:24:47.015 00:58:59 -- host/timeout.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:47.275 [2024-12-03 00:58:59.533972] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x245dba0 is same with the state(5) to be set 00:24:47.275 [2024-12-03 00:58:59.534018] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x245dba0 is same with the state(5) to be set 00:24:47.275 [2024-12-03 00:58:59.534043] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x245dba0 is same with the state(5) to be set 00:24:47.275 [2024-12-03 00:58:59.534051] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x245dba0 is same with the state(5) to be set 00:24:47.275 [2024-12-03 00:58:59.534059] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x245dba0 is same with the state(5) to be set 00:24:47.275 [2024-12-03 00:58:59.534066] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x245dba0 is same with the state(5) to be set 00:24:47.275 [2024-12-03 00:58:59.534073] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x245dba0 is same with the state(5) to be set 00:24:47.275 [2024-12-03 00:58:59.534080] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x245dba0 is same with the state(5) to be set 00:24:47.275 [2024-12-03 00:58:59.534088] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x245dba0 is same with the state(5) to be set 00:24:47.275 [2024-12-03 00:58:59.534096] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x245dba0 is same with the state(5) to be set 00:24:47.275 [2024-12-03 00:58:59.534103] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The 
recv state of tqpair=0x245dba0 is same with the state(5) to be set 00:24:47.275 [2024-12-03 00:58:59.534111] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x245dba0 is same with the state(5) to be set 00:24:47.275 [2024-12-03 00:58:59.534118] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x245dba0 is same with the state(5) to be set 00:24:47.275 [2024-12-03 00:58:59.534127] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x245dba0 is same with the state(5) to be set 00:24:47.275 [2024-12-03 00:58:59.534134] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x245dba0 is same with the state(5) to be set 00:24:47.275 [2024-12-03 00:58:59.534140] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x245dba0 is same with the state(5) to be set 00:24:47.275 [2024-12-03 00:58:59.534149] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x245dba0 is same with the state(5) to be set 00:24:47.275 [2024-12-03 00:58:59.534156] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x245dba0 is same with the state(5) to be set 00:24:47.276 [2024-12-03 00:58:59.534162] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x245dba0 is same with the state(5) to be set 00:24:47.276 [2024-12-03 00:58:59.534169] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x245dba0 is same with the state(5) to be set 00:24:47.276 [2024-12-03 00:58:59.534176] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x245dba0 is same with the state(5) to be set 00:24:47.276 [2024-12-03 00:58:59.534184] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x245dba0 is same with the state(5) to be set 00:24:47.276 [2024-12-03 00:58:59.534190] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x245dba0 is same with the state(5) to be set 00:24:47.276 [2024-12-03 00:58:59.534197] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x245dba0 is same with the state(5) to be set 00:24:47.276 [2024-12-03 00:58:59.534204] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x245dba0 is same with the state(5) to be set 00:24:47.276 [2024-12-03 00:58:59.534210] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x245dba0 is same with the state(5) to be set 00:24:47.276 [2024-12-03 00:58:59.534217] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x245dba0 is same with the state(5) to be set 00:24:47.276 [2024-12-03 00:58:59.534223] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x245dba0 is same with the state(5) to be set 00:24:47.276 [2024-12-03 00:58:59.534230] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x245dba0 is same with the state(5) to be set 00:24:47.276 [2024-12-03 00:58:59.534237] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x245dba0 is same with the state(5) to be set 00:24:47.276 [2024-12-03 00:58:59.534243] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x245dba0 is same with the state(5) to be set 00:24:47.276 [2024-12-03 00:58:59.534250] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x245dba0 is same with the state(5) to be set 00:24:47.276 [2024-12-03 00:58:59.534286] 
tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x245dba0 is same with the state(5) to be set 00:24:47.276 [2024-12-03 00:58:59.534295] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x245dba0 is same with the state(5) to be set 00:24:47.276 [2024-12-03 00:58:59.534304] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x245dba0 is same with the state(5) to be set 00:24:47.276 [2024-12-03 00:58:59.534312] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x245dba0 is same with the state(5) to be set 00:24:47.276 [2024-12-03 00:58:59.534336] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x245dba0 is same with the state(5) to be set 00:24:47.276 [2024-12-03 00:58:59.534361] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x245dba0 is same with the state(5) to be set 00:24:47.276 [2024-12-03 00:58:59.534369] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x245dba0 is same with the state(5) to be set 00:24:47.276 [2024-12-03 00:58:59.534377] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x245dba0 is same with the state(5) to be set 00:24:47.276 [2024-12-03 00:58:59.534384] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x245dba0 is same with the state(5) to be set 00:24:47.276 [2024-12-03 00:58:59.534392] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x245dba0 is same with the state(5) to be set 00:24:47.276 [2024-12-03 00:58:59.534400] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x245dba0 is same with the state(5) to be set 00:24:47.276 [2024-12-03 00:58:59.534408] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x245dba0 is same with the state(5) to be set 00:24:47.276 [2024-12-03 00:58:59.534416] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x245dba0 is same with the state(5) to be set 00:24:47.276 [2024-12-03 00:58:59.534424] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x245dba0 is same with the state(5) to be set 00:24:47.276 [2024-12-03 00:58:59.534431] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x245dba0 is same with the state(5) to be set 00:24:47.276 [2024-12-03 00:58:59.534439] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x245dba0 is same with the state(5) to be set 00:24:47.276 [2024-12-03 00:58:59.534449] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x245dba0 is same with the state(5) to be set 00:24:47.276 [2024-12-03 00:58:59.534470] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x245dba0 is same with the state(5) to be set 00:24:47.276 [2024-12-03 00:58:59.534480] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x245dba0 is same with the state(5) to be set 00:24:47.276 [2024-12-03 00:58:59.534489] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x245dba0 is same with the state(5) to be set 00:24:47.276 [2024-12-03 00:58:59.534497] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x245dba0 is same with the state(5) to be set 00:24:47.276 [2024-12-03 00:58:59.534505] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x245dba0 is same with the 
state(5) to be set 00:24:47.276 [2024-12-03 00:58:59.534513] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x245dba0 is same with the state(5) to be set 00:24:47.276 [2024-12-03 00:58:59.534521] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x245dba0 is same with the state(5) to be set 00:24:47.276 [2024-12-03 00:58:59.534529] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x245dba0 is same with the state(5) to be set 00:24:47.276 [2024-12-03 00:58:59.534537] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x245dba0 is same with the state(5) to be set 00:24:47.276 [2024-12-03 00:58:59.534545] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x245dba0 is same with the state(5) to be set 00:24:47.276 [2024-12-03 00:58:59.534552] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x245dba0 is same with the state(5) to be set 00:24:47.276 [2024-12-03 00:58:59.534560] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x245dba0 is same with the state(5) to be set 00:24:47.276 [2024-12-03 00:58:59.534567] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x245dba0 is same with the state(5) to be set 00:24:47.276 [2024-12-03 00:58:59.534575] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x245dba0 is same with the state(5) to be set 00:24:47.276 [2024-12-03 00:58:59.534584] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x245dba0 is same with the state(5) to be set 00:24:47.276 [2024-12-03 00:58:59.534591] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x245dba0 is same with the state(5) to be set 00:24:47.276 [2024-12-03 00:58:59.534599] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x245dba0 is same with the state(5) to be set 00:24:47.276 [2024-12-03 00:58:59.534607] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x245dba0 is same with the state(5) to be set 00:24:47.276 [2024-12-03 00:58:59.534615] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x245dba0 is same with the state(5) to be set 00:24:47.276 [2024-12-03 00:58:59.534623] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x245dba0 is same with the state(5) to be set 00:24:47.276 [2024-12-03 00:58:59.534647] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x245dba0 is same with the state(5) to be set 00:24:47.276 [2024-12-03 00:58:59.534655] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x245dba0 is same with the state(5) to be set 00:24:47.276 [2024-12-03 00:58:59.534662] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x245dba0 is same with the state(5) to be set 00:24:47.276 [2024-12-03 00:58:59.534669] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x245dba0 is same with the state(5) to be set 00:24:47.276 [2024-12-03 00:58:59.534677] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x245dba0 is same with the state(5) to be set 00:24:47.276 [2024-12-03 00:58:59.534684] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x245dba0 is same with the state(5) to be set 00:24:47.276 [2024-12-03 00:58:59.534692] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: 
*ERROR*: The recv state of tqpair=0x245dba0 is same with the state(5) to be set 00:24:47.276 [2024-12-03 00:58:59.534700] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x245dba0 is same with the state(5) to be set 00:24:47.276 [2024-12-03 00:58:59.534708] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x245dba0 is same with the state(5) to be set 00:24:47.276 [2024-12-03 00:58:59.534716] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x245dba0 is same with the state(5) to be set 00:24:47.276 [2024-12-03 00:58:59.534723] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x245dba0 is same with the state(5) to be set 00:24:47.276 [2024-12-03 00:58:59.534731] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x245dba0 is same with the state(5) to be set 00:24:47.276 [2024-12-03 00:58:59.534754] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x245dba0 is same with the state(5) to be set 00:24:47.276 [2024-12-03 00:58:59.534776] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x245dba0 is same with the state(5) to be set 00:24:47.276 [2024-12-03 00:58:59.534783] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x245dba0 is same with the state(5) to be set 00:24:47.276 [2024-12-03 00:58:59.534790] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x245dba0 is same with the state(5) to be set 00:24:47.276 [2024-12-03 00:58:59.534797] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x245dba0 is same with the state(5) to be set 00:24:47.276 [2024-12-03 00:58:59.534804] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x245dba0 is same with the state(5) to be set 00:24:47.276 [2024-12-03 00:58:59.534811] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x245dba0 is same with the state(5) to be set 00:24:47.276 [2024-12-03 00:58:59.534818] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x245dba0 is same with the state(5) to be set 00:24:47.276 [2024-12-03 00:58:59.534825] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x245dba0 is same with the state(5) to be set 00:24:47.276 [2024-12-03 00:58:59.534831] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x245dba0 is same with the state(5) to be set 00:24:47.276 [2024-12-03 00:58:59.534838] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x245dba0 is same with the state(5) to be set 00:24:47.276 [2024-12-03 00:58:59.534845] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x245dba0 is same with the state(5) to be set 00:24:47.276 [2024-12-03 00:58:59.534852] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x245dba0 is same with the state(5) to be set 00:24:47.276 [2024-12-03 00:58:59.534859] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x245dba0 is same with the state(5) to be set 00:24:47.276 [2024-12-03 00:58:59.534866] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x245dba0 is same with the state(5) to be set 00:24:47.276 [2024-12-03 00:58:59.534874] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x245dba0 is same with the state(5) to be set 00:24:47.276 [2024-12-03 
00:58:59.534881] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x245dba0 is same with the state(5) to be set 00:24:47.276 [2024-12-03 00:58:59.534888] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x245dba0 is same with the state(5) to be set 00:24:47.276 [2024-12-03 00:58:59.534894] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x245dba0 is same with the state(5) to be set 00:24:47.276 [2024-12-03 00:58:59.534902] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x245dba0 is same with the state(5) to be set 00:24:47.276 [2024-12-03 00:58:59.534909] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x245dba0 is same with the state(5) to be set 00:24:47.277 [2024-12-03 00:58:59.534922] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x245dba0 is same with the state(5) to be set 00:24:47.277 [2024-12-03 00:58:59.534930] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x245dba0 is same with the state(5) to be set 00:24:47.277 [2024-12-03 00:58:59.534937] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x245dba0 is same with the state(5) to be set 00:24:47.277 [2024-12-03 00:58:59.534944] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x245dba0 is same with the state(5) to be set 00:24:47.277 [2024-12-03 00:58:59.534952] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x245dba0 is same with the state(5) to be set 00:24:47.277 [2024-12-03 00:58:59.534959] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x245dba0 is same with the state(5) to be set 00:24:47.277 [2024-12-03 00:58:59.534966] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x245dba0 is same with the state(5) to be set 00:24:47.277 [2024-12-03 00:58:59.534974] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x245dba0 is same with the state(5) to be set 00:24:47.277 [2024-12-03 00:58:59.534981] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x245dba0 is same with the state(5) to be set 00:24:47.277 [2024-12-03 00:58:59.534988] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x245dba0 is same with the state(5) to be set 00:24:47.277 [2024-12-03 00:58:59.534995] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x245dba0 is same with the state(5) to be set 00:24:47.277 [2024-12-03 00:58:59.535003] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x245dba0 is same with the state(5) to be set 00:24:47.277 [2024-12-03 00:58:59.535011] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x245dba0 is same with the state(5) to be set 00:24:47.277 [2024-12-03 00:58:59.535018] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x245dba0 is same with the state(5) to be set 00:24:47.277 [2024-12-03 00:58:59.535026] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x245dba0 is same with the state(5) to be set 00:24:47.277 [2024-12-03 00:58:59.535034] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x245dba0 is same with the state(5) to be set 00:24:47.277 [2024-12-03 00:58:59.535041] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x245dba0 is same 
with the state(5) to be set 00:24:47.277 [2024-12-03 00:58:59.535049] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x245dba0 is same with the state(5) to be set 00:24:47.277 [2024-12-03 00:58:59.535056] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x245dba0 is same with the state(5) to be set 00:24:47.277 [2024-12-03 00:58:59.535063] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x245dba0 is same with the state(5) to be set 00:24:47.277 [2024-12-03 00:58:59.535071] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x245dba0 is same with the state(5) to be set 00:24:47.277 [2024-12-03 00:58:59.535078] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x245dba0 is same with the state(5) to be set 00:24:47.277 [2024-12-03 00:58:59.535085] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x245dba0 is same with the state(5) to be set 00:24:47.277 [2024-12-03 00:58:59.535092] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x245dba0 is same with the state(5) to be set 00:24:47.277 [2024-12-03 00:58:59.535359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:56296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.277 [2024-12-03 00:58:59.535396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.277 [2024-12-03 00:58:59.535438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:14968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.277 [2024-12-03 00:58:59.535449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.277 [2024-12-03 00:58:59.535460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:88960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.277 [2024-12-03 00:58:59.535468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.277 [2024-12-03 00:58:59.535478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:74728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.277 [2024-12-03 00:58:59.535486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.277 [2024-12-03 00:58:59.535495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:35976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.277 [2024-12-03 00:58:59.535503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.277 [2024-12-03 00:58:59.535514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:108336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.277 [2024-12-03 00:58:59.535522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.277 [2024-12-03 00:58:59.535531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:39664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.277 [2024-12-03 00:58:59.535539] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.277 [2024-12-03 00:58:59.535548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:7488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.277 [2024-12-03 00:58:59.535556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.277 [2024-12-03 00:58:59.535565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:9192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.277 [2024-12-03 00:58:59.535572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.277 [2024-12-03 00:58:59.535581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:42424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.277 [2024-12-03 00:58:59.535589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.277 [2024-12-03 00:58:59.535598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:71288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.277 [2024-12-03 00:58:59.535605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.277 [2024-12-03 00:58:59.535615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:66360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.277 [2024-12-03 00:58:59.535622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.277 [2024-12-03 00:58:59.535632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:125960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.277 [2024-12-03 00:58:59.535640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.277 [2024-12-03 00:58:59.535649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:61440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.277 [2024-12-03 00:58:59.535657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.277 [2024-12-03 00:58:59.535666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:129232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.277 [2024-12-03 00:58:59.535674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.277 [2024-12-03 00:58:59.535684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:10864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.277 [2024-12-03 00:58:59.535691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.277 [2024-12-03 00:58:59.535700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:5224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.277 [2024-12-03 00:58:59.535709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) 
qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.277 [2024-12-03 00:58:59.535718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:31088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.277 [2024-12-03 00:58:59.535727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.277 [2024-12-03 00:58:59.535736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:47680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.277 [2024-12-03 00:58:59.535743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.277 [2024-12-03 00:58:59.535752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:59656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.277 [2024-12-03 00:58:59.535759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.277 [2024-12-03 00:58:59.535768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:33304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.277 [2024-12-03 00:58:59.535775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.277 [2024-12-03 00:58:59.535798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:48856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.277 [2024-12-03 00:58:59.535807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.277 [2024-12-03 00:58:59.535825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:45184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.277 [2024-12-03 00:58:59.535832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.277 [2024-12-03 00:58:59.535841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:100008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.277 [2024-12-03 00:58:59.535849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.277 [2024-12-03 00:58:59.535857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:32648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.277 [2024-12-03 00:58:59.535865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.277 [2024-12-03 00:58:59.535874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:5816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.277 [2024-12-03 00:58:59.535881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.277 [2024-12-03 00:58:59.535891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:65264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.277 [2024-12-03 00:58:59.535898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:24:47.277 [2024-12-03 00:58:59.535907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:76912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.277 [2024-12-03 00:58:59.535914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.277 [2024-12-03 00:58:59.535923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:109776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.278 [2024-12-03 00:58:59.535931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.278 [2024-12-03 00:58:59.535939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:123096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.278 [2024-12-03 00:58:59.535947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.278 [2024-12-03 00:58:59.535956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:67120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.278 [2024-12-03 00:58:59.535966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.278 [2024-12-03 00:58:59.535977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:38152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.278 [2024-12-03 00:58:59.535985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.278 [2024-12-03 00:58:59.535994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:28520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.278 [2024-12-03 00:58:59.536001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.278 [2024-12-03 00:58:59.536011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:14008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.278 [2024-12-03 00:58:59.536018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.278 [2024-12-03 00:58:59.536027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:13472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.278 [2024-12-03 00:58:59.536035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.278 [2024-12-03 00:58:59.536044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:40096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.278 [2024-12-03 00:58:59.536051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.278 [2024-12-03 00:58:59.536060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:82528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.278 [2024-12-03 00:58:59.536067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.278 [2024-12-03 
00:58:59.536077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:39720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.278 [2024-12-03 00:58:59.536085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.278 [2024-12-03 00:58:59.536094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:103928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.278 [2024-12-03 00:58:59.536102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.278 [2024-12-03 00:58:59.536111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:127168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.278 [2024-12-03 00:58:59.536119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.278 [2024-12-03 00:58:59.536128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.278 [2024-12-03 00:58:59.536136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.278 [2024-12-03 00:58:59.536144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:76424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.278 [2024-12-03 00:58:59.536152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.278 [2024-12-03 00:58:59.536161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:50048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.278 [2024-12-03 00:58:59.536168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.278 [2024-12-03 00:58:59.536177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:106664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.278 [2024-12-03 00:58:59.536185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.278 [2024-12-03 00:58:59.536194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:119008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.278 [2024-12-03 00:58:59.536202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.278 [2024-12-03 00:58:59.536212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:92880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.278 [2024-12-03 00:58:59.536220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.278 [2024-12-03 00:58:59.536229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:62464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.278 [2024-12-03 00:58:59.536236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.278 [2024-12-03 00:58:59.536245] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:11496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.278 [2024-12-03 00:58:59.536253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.278 [2024-12-03 00:58:59.536262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:32240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.278 [2024-12-03 00:58:59.536269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.278 [2024-12-03 00:58:59.536278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:97104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.278 [2024-12-03 00:58:59.536286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.278 [2024-12-03 00:58:59.536294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:93104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.278 [2024-12-03 00:58:59.536302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.278 [2024-12-03 00:58:59.536312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:55208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.278 [2024-12-03 00:58:59.536319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.278 [2024-12-03 00:58:59.536328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:7792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.278 [2024-12-03 00:58:59.536335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.278 [2024-12-03 00:58:59.536345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:75000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.278 [2024-12-03 00:58:59.536353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.278 [2024-12-03 00:58:59.536362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:101464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.278 [2024-12-03 00:58:59.536370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.278 [2024-12-03 00:58:59.536379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:111768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.278 [2024-12-03 00:58:59.536387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.278 [2024-12-03 00:58:59.536396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:100568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.278 [2024-12-03 00:58:59.536403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.278 [2024-12-03 00:58:59.536421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:17 nsid:1 lba:2616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.278 [2024-12-03 00:58:59.536432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.278 [2024-12-03 00:58:59.536442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:42304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.278 [2024-12-03 00:58:59.536450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.278 [2024-12-03 00:58:59.536460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:14536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.278 [2024-12-03 00:58:59.536467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.278 [2024-12-03 00:58:59.536476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:43320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.278 [2024-12-03 00:58:59.536483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.278 [2024-12-03 00:58:59.536494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:128728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.278 [2024-12-03 00:58:59.536502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.278 [2024-12-03 00:58:59.536512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:123696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.278 [2024-12-03 00:58:59.536519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.278 [2024-12-03 00:58:59.536529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:68336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.278 [2024-12-03 00:58:59.536536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.278 [2024-12-03 00:58:59.536545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:127232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.278 [2024-12-03 00:58:59.536552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.278 [2024-12-03 00:58:59.536561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:36928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.278 [2024-12-03 00:58:59.536569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.278 [2024-12-03 00:58:59.536578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:57440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.278 [2024-12-03 00:58:59.536585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.278 [2024-12-03 00:58:59.536594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:62248 len:8 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.278 [2024-12-03 00:58:59.536601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.279 [2024-12-03 00:58:59.536610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:116552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.279 [2024-12-03 00:58:59.536619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.279 [2024-12-03 00:58:59.536629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:17776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.279 [2024-12-03 00:58:59.536636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.279 [2024-12-03 00:58:59.536645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:38048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.279 [2024-12-03 00:58:59.536653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.279 [2024-12-03 00:58:59.536662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:51992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.279 [2024-12-03 00:58:59.536669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.279 [2024-12-03 00:58:59.536679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:14472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.279 [2024-12-03 00:58:59.536686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.279 [2024-12-03 00:58:59.536695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:24240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.279 [2024-12-03 00:58:59.536702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.279 [2024-12-03 00:58:59.536712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:34088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.279 [2024-12-03 00:58:59.536719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.279 [2024-12-03 00:58:59.536728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:77280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.279 [2024-12-03 00:58:59.536736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.279 [2024-12-03 00:58:59.536746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:124624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.279 [2024-12-03 00:58:59.536753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.279 [2024-12-03 00:58:59.536763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:52720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:24:47.279 [2024-12-03 00:58:59.536770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.279 [2024-12-03 00:58:59.536780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:120008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.279 [2024-12-03 00:58:59.536787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.279 [2024-12-03 00:58:59.536797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:75824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.279 [2024-12-03 00:58:59.536804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.279 [2024-12-03 00:58:59.536815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:63896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.279 [2024-12-03 00:58:59.536823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.279 [2024-12-03 00:58:59.536832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:97936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.279 [2024-12-03 00:58:59.536840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.279 [2024-12-03 00:58:59.536849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:5048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.279 [2024-12-03 00:58:59.536857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.279 [2024-12-03 00:58:59.536873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:100384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.279 [2024-12-03 00:58:59.536882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.279 [2024-12-03 00:58:59.536891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:110056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.279 [2024-12-03 00:58:59.536899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.279 [2024-12-03 00:58:59.536908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:9592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.279 [2024-12-03 00:58:59.536916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.279 [2024-12-03 00:58:59.536925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:28616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.279 [2024-12-03 00:58:59.536933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.279 [2024-12-03 00:58:59.536942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:60760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.279 [2024-12-03 00:58:59.536949] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.279 [2024-12-03 00:58:59.536958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:26184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.279 [2024-12-03 00:58:59.536966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.279 [2024-12-03 00:58:59.536975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:45320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.279 [2024-12-03 00:58:59.536982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.279 [2024-12-03 00:58:59.536991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:9272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.279 [2024-12-03 00:58:59.537003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.279 [2024-12-03 00:58:59.537012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.279 [2024-12-03 00:58:59.537019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.279 [2024-12-03 00:58:59.537028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:96952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.279 [2024-12-03 00:58:59.537038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.279 [2024-12-03 00:58:59.537047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:55592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.279 [2024-12-03 00:58:59.537054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.279 [2024-12-03 00:58:59.537063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:51648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.279 [2024-12-03 00:58:59.537071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.279 [2024-12-03 00:58:59.537080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:23976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.279 [2024-12-03 00:58:59.537088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.279 [2024-12-03 00:58:59.537097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:84152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.279 [2024-12-03 00:58:59.537104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.279 [2024-12-03 00:58:59.537113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:3888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.279 [2024-12-03 00:58:59.537121] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.279 [2024-12-03 00:58:59.537129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:83640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.279 [2024-12-03 00:58:59.537136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.279 [2024-12-03 00:58:59.537151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:20360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.279 [2024-12-03 00:58:59.537158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.279 [2024-12-03 00:58:59.537167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.279 [2024-12-03 00:58:59.537175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.279 [2024-12-03 00:58:59.537184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:60944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.279 [2024-12-03 00:58:59.537192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.279 [2024-12-03 00:58:59.537201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:81624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.279 [2024-12-03 00:58:59.537208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.279 [2024-12-03 00:58:59.537217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:21576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.279 [2024-12-03 00:58:59.537224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.279 [2024-12-03 00:58:59.537233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.279 [2024-12-03 00:58:59.537240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.279 [2024-12-03 00:58:59.537249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:43576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.279 [2024-12-03 00:58:59.537255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.279 [2024-12-03 00:58:59.537265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:123744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.279 [2024-12-03 00:58:59.537278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.279 [2024-12-03 00:58:59.537288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:103168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.279 [2024-12-03 00:58:59.537295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) 
qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.280 [2024-12-03 00:58:59.537305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:103896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.280 [2024-12-03 00:58:59.537312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.280 [2024-12-03 00:58:59.537321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:82288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.280 [2024-12-03 00:58:59.537329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.280 [2024-12-03 00:58:59.537338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:41760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.280 [2024-12-03 00:58:59.537345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.280 [2024-12-03 00:58:59.537354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:66640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.280 [2024-12-03 00:58:59.537361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.280 [2024-12-03 00:58:59.537370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:77648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.280 [2024-12-03 00:58:59.537377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.280 [2024-12-03 00:58:59.537386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:35624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.280 [2024-12-03 00:58:59.537393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.280 [2024-12-03 00:58:59.537402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:31320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.280 [2024-12-03 00:58:59.537430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.280 [2024-12-03 00:58:59.537447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:74352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.280 [2024-12-03 00:58:59.537455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.280 [2024-12-03 00:58:59.537465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:55416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.280 [2024-12-03 00:58:59.537473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.280 [2024-12-03 00:58:59.537482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.280 [2024-12-03 00:58:59.537490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 
dnr:0 00:24:47.280 [2024-12-03 00:58:59.537499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:74448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.280 [2024-12-03 00:58:59.537506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.280 [2024-12-03 00:58:59.537515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:63728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.280 [2024-12-03 00:58:59.537522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.280 [2024-12-03 00:58:59.537531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:81864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.280 [2024-12-03 00:58:59.537539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.280 [2024-12-03 00:58:59.537547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:21296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.280 [2024-12-03 00:58:59.537555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.280 [2024-12-03 00:58:59.537565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:107456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.280 [2024-12-03 00:58:59.537577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.280 [2024-12-03 00:58:59.537587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:70904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.280 [2024-12-03 00:58:59.537594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.280 [2024-12-03 00:58:59.537607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:10568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.280 [2024-12-03 00:58:59.537614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.280 [2024-12-03 00:58:59.537624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:36808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.280 [2024-12-03 00:58:59.537632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.280 [2024-12-03 00:58:59.537641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:90608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.280 [2024-12-03 00:58:59.537648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.280 [2024-12-03 00:58:59.537656] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cfa780 is same with the state(5) to be set 00:24:47.280 [2024-12-03 00:58:59.537666] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:47.280 [2024-12-03 00:58:59.537672] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: 
*NOTICE*: Command completed manually: 00:24:47.280 [2024-12-03 00:58:59.537679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:46392 len:8 PRP1 0x0 PRP2 0x0 00:24:47.280 [2024-12-03 00:58:59.537687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.280 [2024-12-03 00:58:59.537733] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1cfa780 was disconnected and freed. reset controller. 00:24:47.280 [2024-12-03 00:58:59.537957] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:47.280 [2024-12-03 00:58:59.538030] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c758c0 (9): Bad file descriptor 00:24:47.280 [2024-12-03 00:58:59.538111] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.280 [2024-12-03 00:58:59.538155] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.280 [2024-12-03 00:58:59.538170] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c758c0 with addr=10.0.0.2, port=4420 00:24:47.280 [2024-12-03 00:58:59.538179] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c758c0 is same with the state(5) to be set 00:24:47.280 [2024-12-03 00:58:59.538194] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c758c0 (9): Bad file descriptor 00:24:47.280 [2024-12-03 00:58:59.538207] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:47.280 [2024-12-03 00:58:59.538216] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:47.280 [2024-12-03 00:58:59.538224] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:47.280 [2024-12-03 00:58:59.538241] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:47.280 [2024-12-03 00:58:59.538251] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:47.280 00:58:59 -- host/timeout.sh@128 -- # wait 101089 00:24:49.179 [2024-12-03 00:59:01.538365] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.179 [2024-12-03 00:59:01.538436] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.179 [2024-12-03 00:59:01.538454] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c758c0 with addr=10.0.0.2, port=4420 00:24:49.179 [2024-12-03 00:59:01.538465] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c758c0 is same with the state(5) to be set 00:24:49.179 [2024-12-03 00:59:01.538490] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c758c0 (9): Bad file descriptor 00:24:49.179 [2024-12-03 00:59:01.538506] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:49.179 [2024-12-03 00:59:01.538515] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:49.179 [2024-12-03 00:59:01.538523] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:24:49.179 [2024-12-03 00:59:01.538540] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:49.179 [2024-12-03 00:59:01.538557] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:51.080 [2024-12-03 00:59:03.538640] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:51.080 [2024-12-03 00:59:03.538710] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:51.080 [2024-12-03 00:59:03.538727] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c758c0 with addr=10.0.0.2, port=4420 00:24:51.080 [2024-12-03 00:59:03.538737] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c758c0 is same with the state(5) to be set 00:24:51.080 [2024-12-03 00:59:03.538753] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c758c0 (9): Bad file descriptor 00:24:51.080 [2024-12-03 00:59:03.538767] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:51.080 [2024-12-03 00:59:03.538776] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:51.080 [2024-12-03 00:59:03.538784] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:51.080 [2024-12-03 00:59:03.538800] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:51.080 [2024-12-03 00:59:03.538810] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:53.609 [2024-12-03 00:59:05.538840] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:53.609 [2024-12-03 00:59:05.538884] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:53.609 [2024-12-03 00:59:05.538903] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:53.609 [2024-12-03 00:59:05.538911] nvme_ctrlr.c:1017:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] already in failed state 00:24:53.609 [2024-12-03 00:59:05.538927] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:54.177 00:24:54.177 Latency(us) 00:24:54.177 [2024-12-03T00:59:06.692Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:54.177 [2024-12-03T00:59:06.692Z] Job: NVMe0n1 (Core Mask 0x4, workload: randread, depth: 128, IO size: 4096) 00:24:54.177 NVMe0n1 : 8.13 2972.50 11.61 15.75 0.00 42806.76 1861.82 7015926.69 00:24:54.177 [2024-12-03T00:59:06.692Z] =================================================================================================================== 00:24:54.177 [2024-12-03T00:59:06.692Z] Total : 2972.50 11.61 15.75 0.00 42806.76 1861.82 7015926.69 00:24:54.177 0 00:24:54.177 00:59:06 -- host/timeout.sh@129 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:24:54.177 Attaching 5 probes... 
00:24:54.177 1198.234082: reset bdev controller NVMe0 00:24:54.177 1198.350353: reconnect bdev controller NVMe0 00:24:54.177 3198.589513: reconnect delay bdev controller NVMe0 00:24:54.177 3198.602690: reconnect bdev controller NVMe0 00:24:54.177 5198.858760: reconnect delay bdev controller NVMe0 00:24:54.177 5198.871181: reconnect bdev controller NVMe0 00:24:54.177 7199.112632: reconnect delay bdev controller NVMe0 00:24:54.177 7199.125334: reconnect bdev controller NVMe0 00:24:54.177 00:59:06 -- host/timeout.sh@132 -- # grep -c 'reconnect delay bdev controller NVMe0' 00:24:54.177 00:59:06 -- host/timeout.sh@132 -- # (( 3 <= 2 )) 00:24:54.177 00:59:06 -- host/timeout.sh@136 -- # kill 101033 00:24:54.177 00:59:06 -- host/timeout.sh@137 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:24:54.177 00:59:06 -- host/timeout.sh@139 -- # killprocess 101008 00:24:54.177 00:59:06 -- common/autotest_common.sh@936 -- # '[' -z 101008 ']' 00:24:54.177 00:59:06 -- common/autotest_common.sh@940 -- # kill -0 101008 00:24:54.177 00:59:06 -- common/autotest_common.sh@941 -- # uname 00:24:54.177 00:59:06 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:24:54.177 00:59:06 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 101008 00:24:54.177 killing process with pid 101008 00:24:54.177 Received shutdown signal, test time was about 8.189748 seconds 00:24:54.177 00:24:54.177 Latency(us) 00:24:54.177 [2024-12-03T00:59:06.692Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:54.177 [2024-12-03T00:59:06.692Z] =================================================================================================================== 00:24:54.177 [2024-12-03T00:59:06.692Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:24:54.177 00:59:06 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:24:54.177 00:59:06 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:24:54.177 00:59:06 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 101008' 00:24:54.177 00:59:06 -- common/autotest_common.sh@955 -- # kill 101008 00:24:54.177 00:59:06 -- common/autotest_common.sh@960 -- # wait 101008 00:24:54.436 00:59:06 -- host/timeout.sh@141 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:24:54.694 00:59:07 -- host/timeout.sh@143 -- # trap - SIGINT SIGTERM EXIT 00:24:54.694 00:59:07 -- host/timeout.sh@145 -- # nvmftestfini 00:24:54.694 00:59:07 -- nvmf/common.sh@476 -- # nvmfcleanup 00:24:54.694 00:59:07 -- nvmf/common.sh@116 -- # sync 00:24:54.694 00:59:07 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:24:54.694 00:59:07 -- nvmf/common.sh@119 -- # set +e 00:24:54.694 00:59:07 -- nvmf/common.sh@120 -- # for i in {1..20} 00:24:54.694 00:59:07 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:24:54.694 rmmod nvme_tcp 00:24:54.694 rmmod nvme_fabrics 00:24:54.694 rmmod nvme_keyring 00:24:54.694 00:59:07 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:24:54.694 00:59:07 -- nvmf/common.sh@123 -- # set -e 00:24:54.694 00:59:07 -- nvmf/common.sh@124 -- # return 0 00:24:54.694 00:59:07 -- nvmf/common.sh@477 -- # '[' -n 100425 ']' 00:24:54.694 00:59:07 -- nvmf/common.sh@478 -- # killprocess 100425 00:24:54.694 00:59:07 -- common/autotest_common.sh@936 -- # '[' -z 100425 ']' 00:24:54.694 00:59:07 -- common/autotest_common.sh@940 -- # kill -0 100425 00:24:54.694 00:59:07 -- common/autotest_common.sh@941 -- # uname 00:24:54.694 00:59:07 -- common/autotest_common.sh@941 -- # '[' 
Linux = Linux ']' 00:24:54.694 00:59:07 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 100425 00:24:54.694 killing process with pid 100425 00:24:54.694 00:59:07 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:24:54.694 00:59:07 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:24:54.694 00:59:07 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 100425' 00:24:54.694 00:59:07 -- common/autotest_common.sh@955 -- # kill 100425 00:24:54.694 00:59:07 -- common/autotest_common.sh@960 -- # wait 100425 00:24:54.952 00:59:07 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:24:54.952 00:59:07 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:24:54.952 00:59:07 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:24:54.952 00:59:07 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:24:54.952 00:59:07 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:24:54.952 00:59:07 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:54.952 00:59:07 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:54.952 00:59:07 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:54.952 00:59:07 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:24:54.952 00:24:54.952 real 0m46.724s 00:24:54.952 user 2m16.660s 00:24:54.952 sys 0m5.225s 00:24:54.952 00:59:07 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:24:54.952 ************************************ 00:24:54.952 END TEST nvmf_timeout 00:24:54.952 ************************************ 00:24:54.952 00:59:07 -- common/autotest_common.sh@10 -- # set +x 00:24:54.952 00:59:07 -- nvmf/nvmf.sh@120 -- # [[ virt == phy ]] 00:24:54.952 00:59:07 -- nvmf/nvmf.sh@127 -- # timing_exit host 00:24:54.952 00:59:07 -- common/autotest_common.sh@728 -- # xtrace_disable 00:24:54.952 00:59:07 -- common/autotest_common.sh@10 -- # set +x 00:24:55.212 00:59:07 -- nvmf/nvmf.sh@129 -- # trap - SIGINT SIGTERM EXIT 00:24:55.212 00:24:55.212 real 17m29.378s 00:24:55.212 user 55m43.207s 00:24:55.212 sys 3m41.926s 00:24:55.212 00:59:07 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:24:55.212 ************************************ 00:24:55.212 00:59:07 -- common/autotest_common.sh@10 -- # set +x 00:24:55.212 END TEST nvmf_tcp 00:24:55.212 ************************************ 00:24:55.212 00:59:07 -- spdk/autotest.sh@283 -- # [[ 0 -eq 0 ]] 00:24:55.212 00:59:07 -- spdk/autotest.sh@284 -- # run_test spdkcli_nvmf_tcp /home/vagrant/spdk_repo/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:24:55.212 00:59:07 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:24:55.212 00:59:07 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:24:55.212 00:59:07 -- common/autotest_common.sh@10 -- # set +x 00:24:55.212 ************************************ 00:24:55.212 START TEST spdkcli_nvmf_tcp 00:24:55.212 ************************************ 00:24:55.212 00:59:07 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:24:55.212 * Looking for test storage... 
00:24:55.212 * Found test storage at /home/vagrant/spdk_repo/spdk/test/spdkcli 00:24:55.212 00:59:07 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:24:55.212 00:59:07 -- common/autotest_common.sh@1690 -- # lcov --version 00:24:55.212 00:59:07 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:24:55.472 00:59:07 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:24:55.472 00:59:07 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:24:55.472 00:59:07 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:24:55.472 00:59:07 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:24:55.472 00:59:07 -- scripts/common.sh@335 -- # IFS=.-: 00:24:55.472 00:59:07 -- scripts/common.sh@335 -- # read -ra ver1 00:24:55.472 00:59:07 -- scripts/common.sh@336 -- # IFS=.-: 00:24:55.472 00:59:07 -- scripts/common.sh@336 -- # read -ra ver2 00:24:55.472 00:59:07 -- scripts/common.sh@337 -- # local 'op=<' 00:24:55.472 00:59:07 -- scripts/common.sh@339 -- # ver1_l=2 00:24:55.472 00:59:07 -- scripts/common.sh@340 -- # ver2_l=1 00:24:55.472 00:59:07 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:24:55.472 00:59:07 -- scripts/common.sh@343 -- # case "$op" in 00:24:55.472 00:59:07 -- scripts/common.sh@344 -- # : 1 00:24:55.472 00:59:07 -- scripts/common.sh@363 -- # (( v = 0 )) 00:24:55.472 00:59:07 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:24:55.472 00:59:07 -- scripts/common.sh@364 -- # decimal 1 00:24:55.472 00:59:07 -- scripts/common.sh@352 -- # local d=1 00:24:55.472 00:59:07 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:55.472 00:59:07 -- scripts/common.sh@354 -- # echo 1 00:24:55.472 00:59:07 -- scripts/common.sh@364 -- # ver1[v]=1 00:24:55.472 00:59:07 -- scripts/common.sh@365 -- # decimal 2 00:24:55.472 00:59:07 -- scripts/common.sh@352 -- # local d=2 00:24:55.472 00:59:07 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:24:55.472 00:59:07 -- scripts/common.sh@354 -- # echo 2 00:24:55.472 00:59:07 -- scripts/common.sh@365 -- # ver2[v]=2 00:24:55.472 00:59:07 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:24:55.472 00:59:07 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:24:55.472 00:59:07 -- scripts/common.sh@367 -- # return 0 00:24:55.472 00:59:07 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:24:55.472 00:59:07 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:24:55.472 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:55.472 --rc genhtml_branch_coverage=1 00:24:55.472 --rc genhtml_function_coverage=1 00:24:55.472 --rc genhtml_legend=1 00:24:55.472 --rc geninfo_all_blocks=1 00:24:55.472 --rc geninfo_unexecuted_blocks=1 00:24:55.472 00:24:55.472 ' 00:24:55.472 00:59:07 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:24:55.472 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:55.472 --rc genhtml_branch_coverage=1 00:24:55.472 --rc genhtml_function_coverage=1 00:24:55.472 --rc genhtml_legend=1 00:24:55.472 --rc geninfo_all_blocks=1 00:24:55.472 --rc geninfo_unexecuted_blocks=1 00:24:55.472 00:24:55.472 ' 00:24:55.472 00:59:07 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:24:55.472 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:55.472 --rc genhtml_branch_coverage=1 00:24:55.472 --rc genhtml_function_coverage=1 00:24:55.472 --rc genhtml_legend=1 00:24:55.472 --rc geninfo_all_blocks=1 00:24:55.472 --rc geninfo_unexecuted_blocks=1 00:24:55.472 00:24:55.472 ' 00:24:55.472 00:59:07 
-- common/autotest_common.sh@1704 -- # LCOV='lcov 00:24:55.472 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:55.472 --rc genhtml_branch_coverage=1 00:24:55.472 --rc genhtml_function_coverage=1 00:24:55.472 --rc genhtml_legend=1 00:24:55.472 --rc geninfo_all_blocks=1 00:24:55.472 --rc geninfo_unexecuted_blocks=1 00:24:55.472 00:24:55.472 ' 00:24:55.472 00:59:07 -- spdkcli/nvmf.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:24:55.472 00:59:07 -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:24:55.472 00:59:07 -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:24:55.472 00:59:07 -- spdkcli/nvmf.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:24:55.472 00:59:07 -- nvmf/common.sh@7 -- # uname -s 00:24:55.472 00:59:07 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:55.472 00:59:07 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:55.472 00:59:07 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:55.472 00:59:07 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:55.472 00:59:07 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:55.472 00:59:07 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:55.472 00:59:07 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:55.472 00:59:07 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:55.472 00:59:07 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:55.472 00:59:07 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:55.472 00:59:07 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:15939434-fa82-47b6-ae7c-b62ba203cee8 00:24:55.472 00:59:07 -- nvmf/common.sh@18 -- # NVME_HOSTID=15939434-fa82-47b6-ae7c-b62ba203cee8 00:24:55.472 00:59:07 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:55.472 00:59:07 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:55.472 00:59:07 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:24:55.472 00:59:07 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:24:55.472 00:59:07 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:55.472 00:59:07 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:55.472 00:59:07 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:55.472 00:59:07 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:55.472 00:59:07 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:55.472 00:59:07 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:55.472 00:59:07 -- paths/export.sh@5 -- # export PATH 00:24:55.472 00:59:07 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:55.472 00:59:07 -- nvmf/common.sh@46 -- # : 0 00:24:55.472 00:59:07 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:24:55.472 00:59:07 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:24:55.472 00:59:07 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:24:55.472 00:59:07 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:55.472 00:59:07 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:55.472 00:59:07 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:24:55.472 00:59:07 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:24:55.472 00:59:07 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:24:55.472 00:59:07 -- spdkcli/nvmf.sh@12 -- # MATCH_FILE=spdkcli_nvmf.test 00:24:55.472 00:59:07 -- spdkcli/nvmf.sh@13 -- # SPDKCLI_BRANCH=/nvmf 00:24:55.472 00:59:07 -- spdkcli/nvmf.sh@15 -- # trap cleanup EXIT 00:24:55.472 00:59:07 -- spdkcli/nvmf.sh@17 -- # timing_enter run_nvmf_tgt 00:24:55.472 00:59:07 -- common/autotest_common.sh@722 -- # xtrace_disable 00:24:55.472 00:59:07 -- common/autotest_common.sh@10 -- # set +x 00:24:55.472 00:59:07 -- spdkcli/nvmf.sh@18 -- # run_nvmf_tgt 00:24:55.472 00:59:07 -- spdkcli/common.sh@33 -- # nvmf_tgt_pid=101317 00:24:55.472 00:59:07 -- spdkcli/common.sh@34 -- # waitforlisten 101317 00:24:55.472 00:59:07 -- spdkcli/common.sh@32 -- # /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -m 0x3 -p 0 00:24:55.472 00:59:07 -- common/autotest_common.sh@829 -- # '[' -z 101317 ']' 00:24:55.472 00:59:07 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:55.472 00:59:07 -- common/autotest_common.sh@834 -- # local max_retries=100 00:24:55.472 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:55.472 00:59:07 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:55.472 00:59:07 -- common/autotest_common.sh@838 -- # xtrace_disable 00:24:55.472 00:59:07 -- common/autotest_common.sh@10 -- # set +x 00:24:55.472 [2024-12-03 00:59:07.831374] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:24:55.472 [2024-12-03 00:59:07.831487] [ DPDK EAL parameters: nvmf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid101317 ] 00:24:55.472 [2024-12-03 00:59:07.970380] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:24:55.732 [2024-12-03 00:59:08.057758] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:24:55.732 [2024-12-03 00:59:08.058075] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:24:55.732 [2024-12-03 00:59:08.058086] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:24:56.667 00:59:08 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:24:56.667 00:59:08 -- common/autotest_common.sh@862 -- # return 0 00:24:56.667 00:59:08 -- spdkcli/nvmf.sh@19 -- # timing_exit run_nvmf_tgt 00:24:56.667 00:59:08 -- common/autotest_common.sh@728 -- # xtrace_disable 00:24:56.667 00:59:08 -- common/autotest_common.sh@10 -- # set +x 00:24:56.667 00:59:08 -- spdkcli/nvmf.sh@21 -- # NVMF_TARGET_IP=127.0.0.1 00:24:56.667 00:59:08 -- spdkcli/nvmf.sh@22 -- # [[ tcp == \r\d\m\a ]] 00:24:56.667 00:59:08 -- spdkcli/nvmf.sh@27 -- # timing_enter spdkcli_create_nvmf_config 00:24:56.667 00:59:08 -- common/autotest_common.sh@722 -- # xtrace_disable 00:24:56.667 00:59:08 -- common/autotest_common.sh@10 -- # set +x 00:24:56.667 00:59:08 -- spdkcli/nvmf.sh@65 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc create 32 512 Malloc1'\'' '\''Malloc1'\'' True 00:24:56.667 '\''/bdevs/malloc create 32 512 Malloc2'\'' '\''Malloc2'\'' True 00:24:56.667 '\''/bdevs/malloc create 32 512 Malloc3'\'' '\''Malloc3'\'' True 00:24:56.667 '\''/bdevs/malloc create 32 512 Malloc4'\'' '\''Malloc4'\'' True 00:24:56.667 '\''/bdevs/malloc create 32 512 Malloc5'\'' '\''Malloc5'\'' True 00:24:56.667 '\''/bdevs/malloc create 32 512 Malloc6'\'' '\''Malloc6'\'' True 00:24:56.667 '\''nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192'\'' '\'''\'' True 00:24:56.667 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:24:56.667 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1'\'' '\''Malloc3'\'' True 00:24:56.667 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2'\'' '\''Malloc4'\'' True 00:24:56.667 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:24:56.667 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:24:56.667 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2'\'' '\''Malloc2'\'' True 00:24:56.667 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:24:56.667 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:24:56.668 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1'\'' '\''Malloc1'\'' True 00:24:56.668 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:24:56.668 
'\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:24:56.668 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:24:56.668 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:24:56.668 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True'\'' '\''Allow any host'\'' 00:24:56.668 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False'\'' '\''Allow any host'\'' True 00:24:56.668 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:24:56.668 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4'\'' '\''127.0.0.1:4262'\'' True 00:24:56.668 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:24:56.668 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5'\'' '\''Malloc5'\'' True 00:24:56.668 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6'\'' '\''Malloc6'\'' True 00:24:56.668 '\''/nvmf/referral create tcp 127.0.0.2 4030 IPv4'\'' 00:24:56.668 ' 00:24:56.926 [2024-12-03 00:59:09.363390] nvmf_rpc.c: 275:rpc_nvmf_get_subsystems: *WARNING*: rpc_nvmf_get_subsystems: deprecated feature listener.transport is deprecated in favor of trtype to be removed in v24.05 00:24:59.455 [2024-12-03 00:59:11.629061] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:00.440 [2024-12-03 00:59:12.914949] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4260 *** 00:25:02.976 [2024-12-03 00:59:15.314076] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4261 *** 00:25:04.881 [2024-12-03 00:59:17.373012] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4262 *** 00:25:06.786 Executing command: ['/bdevs/malloc create 32 512 Malloc1', 'Malloc1', True] 00:25:06.786 Executing command: ['/bdevs/malloc create 32 512 Malloc2', 'Malloc2', True] 00:25:06.786 Executing command: ['/bdevs/malloc create 32 512 Malloc3', 'Malloc3', True] 00:25:06.786 Executing command: ['/bdevs/malloc create 32 512 Malloc4', 'Malloc4', True] 00:25:06.786 Executing command: ['/bdevs/malloc create 32 512 Malloc5', 'Malloc5', True] 00:25:06.786 Executing command: ['/bdevs/malloc create 32 512 Malloc6', 'Malloc6', True] 00:25:06.786 Executing command: ['nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192', '', True] 00:25:06.786 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode1', True] 00:25:06.786 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1', 'Malloc3', True] 00:25:06.786 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2', 'Malloc4', True] 00:25:06.786 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:25:06.786 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:25:06.786 Executing command: 
['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2', 'Malloc2', True] 00:25:06.786 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:25:06.786 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:25:06.786 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1', 'Malloc1', True] 00:25:06.786 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:25:06.786 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:25:06.786 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1', 'nqn.2014-08.org.spdk:cnode1', True] 00:25:06.786 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:25:06.786 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True', 'Allow any host', False] 00:25:06.786 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False', 'Allow any host', True] 00:25:06.786 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:25:06.786 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4', '127.0.0.1:4262', True] 00:25:06.787 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:25:06.787 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5', 'Malloc5', True] 00:25:06.787 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6', 'Malloc6', True] 00:25:06.787 Executing command: ['/nvmf/referral create tcp 127.0.0.2 4030 IPv4', False] 00:25:06.787 00:59:19 -- spdkcli/nvmf.sh@66 -- # timing_exit spdkcli_create_nvmf_config 00:25:06.787 00:59:19 -- common/autotest_common.sh@728 -- # xtrace_disable 00:25:06.787 00:59:19 -- common/autotest_common.sh@10 -- # set +x 00:25:06.787 00:59:19 -- spdkcli/nvmf.sh@68 -- # timing_enter spdkcli_check_match 00:25:06.787 00:59:19 -- common/autotest_common.sh@722 -- # xtrace_disable 00:25:06.787 00:59:19 -- common/autotest_common.sh@10 -- # set +x 00:25:06.787 00:59:19 -- spdkcli/nvmf.sh@69 -- # check_match 00:25:06.787 00:59:19 -- spdkcli/common.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/spdkcli.py ll /nvmf 00:25:07.353 00:59:19 -- spdkcli/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/test/app/match/match /home/vagrant/spdk_repo/spdk/test/spdkcli/match_files/spdkcli_nvmf.test.match 00:25:07.353 00:59:19 -- spdkcli/common.sh@46 -- # rm -f /home/vagrant/spdk_repo/spdk/test/spdkcli/match_files/spdkcli_nvmf.test 00:25:07.353 00:59:19 -- spdkcli/nvmf.sh@70 -- # timing_exit spdkcli_check_match 00:25:07.353 00:59:19 -- common/autotest_common.sh@728 -- # xtrace_disable 00:25:07.353 00:59:19 -- common/autotest_common.sh@10 -- # set +x 00:25:07.353 00:59:19 -- spdkcli/nvmf.sh@72 -- # timing_enter spdkcli_clear_nvmf_config 00:25:07.353 00:59:19 -- common/autotest_common.sh@722 -- # xtrace_disable 00:25:07.353 00:59:19 -- 
common/autotest_common.sh@10 -- # set +x 00:25:07.353 00:59:19 -- spdkcli/nvmf.sh@87 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py ''\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1'\'' '\''Malloc3'\'' 00:25:07.353 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all'\'' '\''Malloc4'\'' 00:25:07.353 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:25:07.353 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' 00:25:07.353 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262'\'' '\''127.0.0.1:4262'\'' 00:25:07.353 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all'\'' '\''127.0.0.1:4261'\'' 00:25:07.353 '\''/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3'\'' '\''nqn.2014-08.org.spdk:cnode3'\'' 00:25:07.353 '\''/nvmf/subsystem delete_all'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:25:07.353 '\''/bdevs/malloc delete Malloc6'\'' '\''Malloc6'\'' 00:25:07.353 '\''/bdevs/malloc delete Malloc5'\'' '\''Malloc5'\'' 00:25:07.353 '\''/bdevs/malloc delete Malloc4'\'' '\''Malloc4'\'' 00:25:07.353 '\''/bdevs/malloc delete Malloc3'\'' '\''Malloc3'\'' 00:25:07.353 '\''/bdevs/malloc delete Malloc2'\'' '\''Malloc2'\'' 00:25:07.353 '\''/bdevs/malloc delete Malloc1'\'' '\''Malloc1'\'' 00:25:07.353 ' 00:25:12.623 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1', 'Malloc3', False] 00:25:12.623 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all', 'Malloc4', False] 00:25:12.623 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', False] 00:25:12.623 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all', 'nqn.2014-08.org.spdk:cnode1', False] 00:25:12.623 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262', '127.0.0.1:4262', False] 00:25:12.623 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all', '127.0.0.1:4261', False] 00:25:12.623 Executing command: ['/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3', 'nqn.2014-08.org.spdk:cnode3', False] 00:25:12.623 Executing command: ['/nvmf/subsystem delete_all', 'nqn.2014-08.org.spdk:cnode2', False] 00:25:12.623 Executing command: ['/bdevs/malloc delete Malloc6', 'Malloc6', False] 00:25:12.623 Executing command: ['/bdevs/malloc delete Malloc5', 'Malloc5', False] 00:25:12.623 Executing command: ['/bdevs/malloc delete Malloc4', 'Malloc4', False] 00:25:12.623 Executing command: ['/bdevs/malloc delete Malloc3', 'Malloc3', False] 00:25:12.623 Executing command: ['/bdevs/malloc delete Malloc2', 'Malloc2', False] 00:25:12.623 Executing command: ['/bdevs/malloc delete Malloc1', 'Malloc1', False] 00:25:12.881 00:59:25 -- spdkcli/nvmf.sh@88 -- # timing_exit spdkcli_clear_nvmf_config 00:25:12.881 00:59:25 -- common/autotest_common.sh@728 -- # xtrace_disable 00:25:12.881 00:59:25 -- common/autotest_common.sh@10 -- # set +x 00:25:12.881 00:59:25 -- spdkcli/nvmf.sh@90 -- # killprocess 101317 00:25:12.881 00:59:25 -- common/autotest_common.sh@936 -- # '[' -z 101317 ']' 00:25:12.881 00:59:25 -- common/autotest_common.sh@940 -- # kill -0 101317 00:25:12.881 00:59:25 -- common/autotest_common.sh@941 -- # uname 00:25:12.881 00:59:25 -- 
common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:25:12.882 00:59:25 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 101317 00:25:12.882 00:59:25 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:25:12.882 killing process with pid 101317 00:25:12.882 00:59:25 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:25:12.882 00:59:25 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 101317' 00:25:12.882 00:59:25 -- common/autotest_common.sh@955 -- # kill 101317 00:25:12.882 [2024-12-03 00:59:25.267858] app.c: 883:log_deprecation_hits: *WARNING*: rpc_nvmf_get_subsystems: deprecation 'listener.transport is deprecated in favor of trtype' scheduled for removal in v24.05 hit 1 times 00:25:12.882 00:59:25 -- common/autotest_common.sh@960 -- # wait 101317 00:25:13.139 00:59:25 -- spdkcli/nvmf.sh@1 -- # cleanup 00:25:13.139 00:59:25 -- spdkcli/common.sh@10 -- # '[' -n '' ']' 00:25:13.139 00:59:25 -- spdkcli/common.sh@13 -- # '[' -n 101317 ']' 00:25:13.139 00:59:25 -- spdkcli/common.sh@14 -- # killprocess 101317 00:25:13.139 00:59:25 -- common/autotest_common.sh@936 -- # '[' -z 101317 ']' 00:25:13.139 00:59:25 -- common/autotest_common.sh@940 -- # kill -0 101317 00:25:13.139 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 940: kill: (101317) - No such process 00:25:13.139 Process with pid 101317 is not found 00:25:13.139 00:59:25 -- common/autotest_common.sh@963 -- # echo 'Process with pid 101317 is not found' 00:25:13.139 00:59:25 -- spdkcli/common.sh@16 -- # '[' -n '' ']' 00:25:13.139 00:59:25 -- spdkcli/common.sh@19 -- # '[' -n '' ']' 00:25:13.139 00:59:25 -- spdkcli/common.sh@22 -- # rm -f /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_nvmf.test /home/vagrant/spdk_repo/spdk/test/spdkcli/match_files/spdkcli_details_vhost.test /tmp/sample_aio 00:25:13.139 00:25:13.140 real 0m17.902s 00:25:13.140 user 0m38.842s 00:25:13.140 sys 0m0.956s 00:25:13.140 00:59:25 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:25:13.140 00:59:25 -- common/autotest_common.sh@10 -- # set +x 00:25:13.140 ************************************ 00:25:13.140 END TEST spdkcli_nvmf_tcp 00:25:13.140 ************************************ 00:25:13.140 00:59:25 -- spdk/autotest.sh@285 -- # run_test nvmf_identify_passthru /home/vagrant/spdk_repo/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:25:13.140 00:59:25 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:25:13.140 00:59:25 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:25:13.140 00:59:25 -- common/autotest_common.sh@10 -- # set +x 00:25:13.140 ************************************ 00:25:13.140 START TEST nvmf_identify_passthru 00:25:13.140 ************************************ 00:25:13.140 00:59:25 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:25:13.140 * Looking for test storage... 
00:25:13.140 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:25:13.140 00:59:25 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:25:13.140 00:59:25 -- common/autotest_common.sh@1690 -- # lcov --version 00:25:13.140 00:59:25 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:25:13.399 00:59:25 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:25:13.399 00:59:25 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:25:13.399 00:59:25 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:25:13.399 00:59:25 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:25:13.399 00:59:25 -- scripts/common.sh@335 -- # IFS=.-: 00:25:13.399 00:59:25 -- scripts/common.sh@335 -- # read -ra ver1 00:25:13.399 00:59:25 -- scripts/common.sh@336 -- # IFS=.-: 00:25:13.399 00:59:25 -- scripts/common.sh@336 -- # read -ra ver2 00:25:13.399 00:59:25 -- scripts/common.sh@337 -- # local 'op=<' 00:25:13.399 00:59:25 -- scripts/common.sh@339 -- # ver1_l=2 00:25:13.399 00:59:25 -- scripts/common.sh@340 -- # ver2_l=1 00:25:13.399 00:59:25 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:25:13.399 00:59:25 -- scripts/common.sh@343 -- # case "$op" in 00:25:13.399 00:59:25 -- scripts/common.sh@344 -- # : 1 00:25:13.399 00:59:25 -- scripts/common.sh@363 -- # (( v = 0 )) 00:25:13.399 00:59:25 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:25:13.399 00:59:25 -- scripts/common.sh@364 -- # decimal 1 00:25:13.399 00:59:25 -- scripts/common.sh@352 -- # local d=1 00:25:13.399 00:59:25 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:25:13.399 00:59:25 -- scripts/common.sh@354 -- # echo 1 00:25:13.399 00:59:25 -- scripts/common.sh@364 -- # ver1[v]=1 00:25:13.399 00:59:25 -- scripts/common.sh@365 -- # decimal 2 00:25:13.399 00:59:25 -- scripts/common.sh@352 -- # local d=2 00:25:13.399 00:59:25 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:25:13.399 00:59:25 -- scripts/common.sh@354 -- # echo 2 00:25:13.399 00:59:25 -- scripts/common.sh@365 -- # ver2[v]=2 00:25:13.399 00:59:25 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:25:13.399 00:59:25 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:25:13.399 00:59:25 -- scripts/common.sh@367 -- # return 0 00:25:13.399 00:59:25 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:25:13.399 00:59:25 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:25:13.399 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:13.399 --rc genhtml_branch_coverage=1 00:25:13.399 --rc genhtml_function_coverage=1 00:25:13.399 --rc genhtml_legend=1 00:25:13.399 --rc geninfo_all_blocks=1 00:25:13.399 --rc geninfo_unexecuted_blocks=1 00:25:13.399 00:25:13.399 ' 00:25:13.399 00:59:25 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:25:13.399 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:13.399 --rc genhtml_branch_coverage=1 00:25:13.399 --rc genhtml_function_coverage=1 00:25:13.399 --rc genhtml_legend=1 00:25:13.399 --rc geninfo_all_blocks=1 00:25:13.399 --rc geninfo_unexecuted_blocks=1 00:25:13.399 00:25:13.399 ' 00:25:13.399 00:59:25 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:25:13.399 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:13.399 --rc genhtml_branch_coverage=1 00:25:13.399 --rc genhtml_function_coverage=1 00:25:13.399 --rc genhtml_legend=1 00:25:13.399 --rc geninfo_all_blocks=1 00:25:13.399 --rc geninfo_unexecuted_blocks=1 00:25:13.399 00:25:13.399 ' 00:25:13.399 
00:59:25 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:25:13.399 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:13.399 --rc genhtml_branch_coverage=1 00:25:13.399 --rc genhtml_function_coverage=1 00:25:13.399 --rc genhtml_legend=1 00:25:13.399 --rc geninfo_all_blocks=1 00:25:13.399 --rc geninfo_unexecuted_blocks=1 00:25:13.399 00:25:13.399 ' 00:25:13.399 00:59:25 -- target/identify_passthru.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:25:13.399 00:59:25 -- nvmf/common.sh@7 -- # uname -s 00:25:13.399 00:59:25 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:13.399 00:59:25 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:13.399 00:59:25 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:13.399 00:59:25 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:13.399 00:59:25 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:13.399 00:59:25 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:13.399 00:59:25 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:13.399 00:59:25 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:13.399 00:59:25 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:13.399 00:59:25 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:13.399 00:59:25 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:15939434-fa82-47b6-ae7c-b62ba203cee8 00:25:13.399 00:59:25 -- nvmf/common.sh@18 -- # NVME_HOSTID=15939434-fa82-47b6-ae7c-b62ba203cee8 00:25:13.399 00:59:25 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:13.399 00:59:25 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:13.399 00:59:25 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:25:13.399 00:59:25 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:25:13.399 00:59:25 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:13.399 00:59:25 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:13.399 00:59:25 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:13.399 00:59:25 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:13.399 00:59:25 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:13.399 00:59:25 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:13.399 00:59:25 -- paths/export.sh@5 -- # export PATH 00:25:13.400 00:59:25 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:13.400 00:59:25 -- nvmf/common.sh@46 -- # : 0 00:25:13.400 00:59:25 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:25:13.400 00:59:25 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:25:13.400 00:59:25 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:25:13.400 00:59:25 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:13.400 00:59:25 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:13.400 00:59:25 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:25:13.400 00:59:25 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:25:13.400 00:59:25 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:25:13.400 00:59:25 -- target/identify_passthru.sh@10 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:25:13.400 00:59:25 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:13.400 00:59:25 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:13.400 00:59:25 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:13.400 00:59:25 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:13.400 00:59:25 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:13.400 00:59:25 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:13.400 00:59:25 -- paths/export.sh@5 -- # export PATH 00:25:13.400 00:59:25 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:13.400 00:59:25 -- 
target/identify_passthru.sh@12 -- # nvmftestinit 00:25:13.400 00:59:25 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:25:13.400 00:59:25 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:13.400 00:59:25 -- nvmf/common.sh@436 -- # prepare_net_devs 00:25:13.400 00:59:25 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:25:13.400 00:59:25 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:25:13.400 00:59:25 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:13.400 00:59:25 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:25:13.400 00:59:25 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:13.400 00:59:25 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:25:13.400 00:59:25 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:25:13.400 00:59:25 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:25:13.400 00:59:25 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:25:13.400 00:59:25 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:25:13.400 00:59:25 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:25:13.400 00:59:25 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:13.400 00:59:25 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:13.400 00:59:25 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:25:13.400 00:59:25 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:25:13.400 00:59:25 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:25:13.400 00:59:25 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:25:13.400 00:59:25 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:25:13.400 00:59:25 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:13.400 00:59:25 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:25:13.400 00:59:25 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:25:13.400 00:59:25 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:25:13.400 00:59:25 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:25:13.400 00:59:25 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:25:13.400 00:59:25 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:25:13.400 Cannot find device "nvmf_tgt_br" 00:25:13.400 00:59:25 -- nvmf/common.sh@154 -- # true 00:25:13.400 00:59:25 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:25:13.400 Cannot find device "nvmf_tgt_br2" 00:25:13.400 00:59:25 -- nvmf/common.sh@155 -- # true 00:25:13.400 00:59:25 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:25:13.400 00:59:25 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:25:13.400 Cannot find device "nvmf_tgt_br" 00:25:13.400 00:59:25 -- nvmf/common.sh@157 -- # true 00:25:13.400 00:59:25 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:25:13.400 Cannot find device "nvmf_tgt_br2" 00:25:13.400 00:59:25 -- nvmf/common.sh@158 -- # true 00:25:13.400 00:59:25 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:25:13.400 00:59:25 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:25:13.400 00:59:25 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:25:13.400 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:25:13.400 00:59:25 -- nvmf/common.sh@161 -- # true 00:25:13.400 00:59:25 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:25:13.400 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or 
directory 00:25:13.400 00:59:25 -- nvmf/common.sh@162 -- # true 00:25:13.400 00:59:25 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:25:13.400 00:59:25 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:25:13.400 00:59:25 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:25:13.400 00:59:25 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:25:13.400 00:59:25 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:25:13.660 00:59:25 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:25:13.660 00:59:25 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:25:13.660 00:59:25 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:25:13.660 00:59:25 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:25:13.660 00:59:25 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:25:13.660 00:59:25 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:25:13.660 00:59:25 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:25:13.660 00:59:25 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:25:13.660 00:59:25 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:25:13.660 00:59:25 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:25:13.660 00:59:25 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:25:13.660 00:59:25 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:25:13.660 00:59:25 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:25:13.660 00:59:25 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:25:13.660 00:59:26 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:25:13.660 00:59:26 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:25:13.660 00:59:26 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:25:13.660 00:59:26 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:25:13.660 00:59:26 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:25:13.660 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:13.660 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.107 ms 00:25:13.660 00:25:13.660 --- 10.0.0.2 ping statistics --- 00:25:13.660 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:13.660 rtt min/avg/max/mdev = 0.107/0.107/0.107/0.000 ms 00:25:13.660 00:59:26 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:25:13.660 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:25:13.660 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.062 ms 00:25:13.660 00:25:13.660 --- 10.0.0.3 ping statistics --- 00:25:13.660 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:13.660 rtt min/avg/max/mdev = 0.062/0.062/0.062/0.000 ms 00:25:13.660 00:59:26 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:25:13.660 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:25:13.660 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.033 ms 00:25:13.660 00:25:13.660 --- 10.0.0.1 ping statistics --- 00:25:13.660 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:13.660 rtt min/avg/max/mdev = 0.033/0.033/0.033/0.000 ms 00:25:13.660 00:59:26 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:13.660 00:59:26 -- nvmf/common.sh@421 -- # return 0 00:25:13.660 00:59:26 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:25:13.660 00:59:26 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:13.660 00:59:26 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:25:13.660 00:59:26 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:25:13.660 00:59:26 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:13.660 00:59:26 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:25:13.660 00:59:26 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:25:13.660 00:59:26 -- target/identify_passthru.sh@14 -- # timing_enter nvme_identify 00:25:13.660 00:59:26 -- common/autotest_common.sh@722 -- # xtrace_disable 00:25:13.660 00:59:26 -- common/autotest_common.sh@10 -- # set +x 00:25:13.660 00:59:26 -- target/identify_passthru.sh@16 -- # get_first_nvme_bdf 00:25:13.660 00:59:26 -- common/autotest_common.sh@1519 -- # bdfs=() 00:25:13.660 00:59:26 -- common/autotest_common.sh@1519 -- # local bdfs 00:25:13.660 00:59:26 -- common/autotest_common.sh@1520 -- # bdfs=($(get_nvme_bdfs)) 00:25:13.660 00:59:26 -- common/autotest_common.sh@1520 -- # get_nvme_bdfs 00:25:13.660 00:59:26 -- common/autotest_common.sh@1508 -- # bdfs=() 00:25:13.660 00:59:26 -- common/autotest_common.sh@1508 -- # local bdfs 00:25:13.660 00:59:26 -- common/autotest_common.sh@1509 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:25:13.660 00:59:26 -- common/autotest_common.sh@1509 -- # jq -r '.config[].params.traddr' 00:25:13.660 00:59:26 -- common/autotest_common.sh@1509 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:25:13.660 00:59:26 -- common/autotest_common.sh@1510 -- # (( 2 == 0 )) 00:25:13.660 00:59:26 -- common/autotest_common.sh@1514 -- # printf '%s\n' 0000:00:06.0 0000:00:07.0 00:25:13.660 00:59:26 -- common/autotest_common.sh@1522 -- # echo 0000:00:06.0 00:25:13.660 00:59:26 -- target/identify_passthru.sh@16 -- # bdf=0000:00:06.0 00:25:13.660 00:59:26 -- target/identify_passthru.sh@17 -- # '[' -z 0000:00:06.0 ']' 00:25:13.660 00:59:26 -- target/identify_passthru.sh@23 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:06.0' -i 0 00:25:13.660 00:59:26 -- target/identify_passthru.sh@23 -- # grep 'Serial Number:' 00:25:13.660 00:59:26 -- target/identify_passthru.sh@23 -- # awk '{print $3}' 00:25:13.920 00:59:26 -- target/identify_passthru.sh@23 -- # nvme_serial_number=12340 00:25:13.920 00:59:26 -- target/identify_passthru.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:06.0' -i 0 00:25:13.920 00:59:26 -- target/identify_passthru.sh@24 -- # grep 'Model Number:' 00:25:13.920 00:59:26 -- target/identify_passthru.sh@24 -- # awk '{print $3}' 00:25:14.180 00:59:26 -- target/identify_passthru.sh@24 -- # nvme_model_number=QEMU 00:25:14.180 00:59:26 -- target/identify_passthru.sh@26 -- # timing_exit nvme_identify 00:25:14.180 00:59:26 -- common/autotest_common.sh@728 -- # xtrace_disable 00:25:14.180 00:59:26 -- common/autotest_common.sh@10 -- # set +x 00:25:14.180 00:59:26 -- target/identify_passthru.sh@28 -- # timing_enter 
start_nvmf_tgt 00:25:14.180 00:59:26 -- common/autotest_common.sh@722 -- # xtrace_disable 00:25:14.180 00:59:26 -- common/autotest_common.sh@10 -- # set +x 00:25:14.180 00:59:26 -- target/identify_passthru.sh@31 -- # nvmfpid=101834 00:25:14.180 00:59:26 -- target/identify_passthru.sh@30 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:25:14.180 00:59:26 -- target/identify_passthru.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:25:14.180 00:59:26 -- target/identify_passthru.sh@35 -- # waitforlisten 101834 00:25:14.180 00:59:26 -- common/autotest_common.sh@829 -- # '[' -z 101834 ']' 00:25:14.180 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:14.180 00:59:26 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:14.180 00:59:26 -- common/autotest_common.sh@834 -- # local max_retries=100 00:25:14.180 00:59:26 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:14.180 00:59:26 -- common/autotest_common.sh@838 -- # xtrace_disable 00:25:14.180 00:59:26 -- common/autotest_common.sh@10 -- # set +x 00:25:14.180 [2024-12-03 00:59:26.638206] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:25:14.180 [2024-12-03 00:59:26.638321] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:14.440 [2024-12-03 00:59:26.782746] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:25:14.440 [2024-12-03 00:59:26.856856] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:25:14.440 [2024-12-03 00:59:26.857358] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:14.440 [2024-12-03 00:59:26.857566] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:14.440 [2024-12-03 00:59:26.857735] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
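The trace above shows identify_passthru.sh launching nvmf_tgt inside the test namespace with --wait-for-rpc, then (in the entries that follow) enabling the custom identify handler and finishing initialization over RPC. A minimal stand-alone sketch of that sequence, assuming an SPDK checkout at /home/vagrant/spdk_repo/spdk, an existing nvmf_tgt_ns_spdk namespace, and that polling scripts/rpc.py stands in for the harness's waitforlisten helper:

#!/usr/bin/env bash
# Sketch only: mirrors the rpc_cmd calls visible in this trace; the use of
# scripts/rpc.py as the transport for rpc_cmd is an assumption of the sketch.
set -euo pipefail
SPDK=/home/vagrant/spdk_repo/spdk

# Start the target paused before subsystem init (--wait-for-rpc), inside the netns.
ip netns exec nvmf_tgt_ns_spdk "$SPDK/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0xF --wait-for-rpc &

# Wait until the RPC socket answers (the harness does this via waitforlisten).
until "$SPDK/scripts/rpc.py" rpc_get_methods >/dev/null 2>&1; do sleep 0.2; done

# Enable passthru of identify commands to the underlying NVMe controller,
# then let the framework finish starting and create the TCP transport.
"$SPDK/scripts/rpc.py" nvmf_set_config --passthru-identify-ctrlr
"$SPDK/scripts/rpc.py" framework_start_init
"$SPDK/scripts/rpc.py" nvmf_create_transport -t tcp -o -u 8192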
00:25:14.440 [2024-12-03 00:59:26.858023] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:25:14.440 [2024-12-03 00:59:26.858168] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:25:14.440 [2024-12-03 00:59:26.858256] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:25:14.440 [2024-12-03 00:59:26.858256] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:25:15.377 00:59:27 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:25:15.377 00:59:27 -- common/autotest_common.sh@862 -- # return 0 00:25:15.377 00:59:27 -- target/identify_passthru.sh@36 -- # rpc_cmd -v nvmf_set_config --passthru-identify-ctrlr 00:25:15.377 00:59:27 -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:15.377 00:59:27 -- common/autotest_common.sh@10 -- # set +x 00:25:15.377 00:59:27 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:15.377 00:59:27 -- target/identify_passthru.sh@37 -- # rpc_cmd -v framework_start_init 00:25:15.377 00:59:27 -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:15.377 00:59:27 -- common/autotest_common.sh@10 -- # set +x 00:25:15.377 [2024-12-03 00:59:27.794145] nvmf_tgt.c: 423:nvmf_tgt_advance_state: *NOTICE*: Custom identify ctrlr handler enabled 00:25:15.377 00:59:27 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:15.377 00:59:27 -- target/identify_passthru.sh@38 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:25:15.377 00:59:27 -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:15.377 00:59:27 -- common/autotest_common.sh@10 -- # set +x 00:25:15.377 [2024-12-03 00:59:27.805867] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:15.377 00:59:27 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:15.377 00:59:27 -- target/identify_passthru.sh@39 -- # timing_exit start_nvmf_tgt 00:25:15.377 00:59:27 -- common/autotest_common.sh@728 -- # xtrace_disable 00:25:15.377 00:59:27 -- common/autotest_common.sh@10 -- # set +x 00:25:15.377 00:59:27 -- target/identify_passthru.sh@41 -- # rpc_cmd bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:00:06.0 00:25:15.377 00:59:27 -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:15.377 00:59:27 -- common/autotest_common.sh@10 -- # set +x 00:25:15.636 Nvme0n1 00:25:15.636 00:59:27 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:15.636 00:59:27 -- target/identify_passthru.sh@42 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 1 00:25:15.636 00:59:27 -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:15.636 00:59:27 -- common/autotest_common.sh@10 -- # set +x 00:25:15.636 00:59:27 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:15.636 00:59:27 -- target/identify_passthru.sh@43 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:25:15.636 00:59:27 -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:15.636 00:59:27 -- common/autotest_common.sh@10 -- # set +x 00:25:15.636 00:59:27 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:15.636 00:59:27 -- target/identify_passthru.sh@44 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:25:15.636 00:59:27 -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:15.636 00:59:27 -- common/autotest_common.sh@10 -- # set +x 00:25:15.636 [2024-12-03 00:59:27.942865] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:15.636 00:59:27 -- common/autotest_common.sh@589 -- # [[ 0 == 0 
]] 00:25:15.636 00:59:27 -- target/identify_passthru.sh@46 -- # rpc_cmd nvmf_get_subsystems 00:25:15.636 00:59:27 -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:15.636 00:59:27 -- common/autotest_common.sh@10 -- # set +x 00:25:15.636 [2024-12-03 00:59:27.950528] nvmf_rpc.c: 275:rpc_nvmf_get_subsystems: *WARNING*: rpc_nvmf_get_subsystems: deprecated feature listener.transport is deprecated in favor of trtype to be removed in v24.05 00:25:15.636 [ 00:25:15.636 { 00:25:15.636 "allow_any_host": true, 00:25:15.636 "hosts": [], 00:25:15.636 "listen_addresses": [], 00:25:15.636 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:25:15.636 "subtype": "Discovery" 00:25:15.636 }, 00:25:15.636 { 00:25:15.636 "allow_any_host": true, 00:25:15.636 "hosts": [], 00:25:15.636 "listen_addresses": [ 00:25:15.636 { 00:25:15.636 "adrfam": "IPv4", 00:25:15.636 "traddr": "10.0.0.2", 00:25:15.636 "transport": "TCP", 00:25:15.636 "trsvcid": "4420", 00:25:15.636 "trtype": "TCP" 00:25:15.636 } 00:25:15.636 ], 00:25:15.636 "max_cntlid": 65519, 00:25:15.636 "max_namespaces": 1, 00:25:15.636 "min_cntlid": 1, 00:25:15.636 "model_number": "SPDK bdev Controller", 00:25:15.636 "namespaces": [ 00:25:15.636 { 00:25:15.636 "bdev_name": "Nvme0n1", 00:25:15.636 "name": "Nvme0n1", 00:25:15.636 "nguid": "831529CCCF854133A05EAA6C0A3D5401", 00:25:15.636 "nsid": 1, 00:25:15.636 "uuid": "831529cc-cf85-4133-a05e-aa6c0a3d5401" 00:25:15.636 } 00:25:15.636 ], 00:25:15.636 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:25:15.636 "serial_number": "SPDK00000000000001", 00:25:15.636 "subtype": "NVMe" 00:25:15.636 } 00:25:15.636 ] 00:25:15.636 00:59:27 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:15.636 00:59:27 -- target/identify_passthru.sh@54 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:25:15.636 00:59:27 -- target/identify_passthru.sh@54 -- # awk '{print $3}' 00:25:15.636 00:59:27 -- target/identify_passthru.sh@54 -- # grep 'Serial Number:' 00:25:15.895 00:59:28 -- target/identify_passthru.sh@54 -- # nvmf_serial_number=12340 00:25:15.895 00:59:28 -- target/identify_passthru.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:25:15.895 00:59:28 -- target/identify_passthru.sh@61 -- # grep 'Model Number:' 00:25:15.895 00:59:28 -- target/identify_passthru.sh@61 -- # awk '{print $3}' 00:25:15.895 00:59:28 -- target/identify_passthru.sh@61 -- # nvmf_model_number=QEMU 00:25:15.895 00:59:28 -- target/identify_passthru.sh@63 -- # '[' 12340 '!=' 12340 ']' 00:25:15.895 00:59:28 -- target/identify_passthru.sh@68 -- # '[' QEMU '!=' QEMU ']' 00:25:15.895 00:59:28 -- target/identify_passthru.sh@73 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:25:15.895 00:59:28 -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:15.895 00:59:28 -- common/autotest_common.sh@10 -- # set +x 00:25:16.154 00:59:28 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:16.154 00:59:28 -- target/identify_passthru.sh@75 -- # trap - SIGINT SIGTERM EXIT 00:25:16.154 00:59:28 -- target/identify_passthru.sh@77 -- # nvmftestfini 00:25:16.154 00:59:28 -- nvmf/common.sh@476 -- # nvmfcleanup 00:25:16.154 00:59:28 -- nvmf/common.sh@116 -- # sync 00:25:16.154 00:59:28 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:25:16.154 00:59:28 -- nvmf/common.sh@119 -- # set +e 00:25:16.154 00:59:28 -- nvmf/common.sh@120 -- # for i in 
{1..20} 00:25:16.154 00:59:28 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:25:16.154 rmmod nvme_tcp 00:25:16.154 rmmod nvme_fabrics 00:25:16.154 rmmod nvme_keyring 00:25:16.154 00:59:28 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:25:16.154 00:59:28 -- nvmf/common.sh@123 -- # set -e 00:25:16.155 00:59:28 -- nvmf/common.sh@124 -- # return 0 00:25:16.155 00:59:28 -- nvmf/common.sh@477 -- # '[' -n 101834 ']' 00:25:16.155 00:59:28 -- nvmf/common.sh@478 -- # killprocess 101834 00:25:16.155 00:59:28 -- common/autotest_common.sh@936 -- # '[' -z 101834 ']' 00:25:16.155 00:59:28 -- common/autotest_common.sh@940 -- # kill -0 101834 00:25:16.155 00:59:28 -- common/autotest_common.sh@941 -- # uname 00:25:16.155 00:59:28 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:25:16.155 00:59:28 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 101834 00:25:16.155 00:59:28 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:25:16.155 00:59:28 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:25:16.155 killing process with pid 101834 00:25:16.155 00:59:28 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 101834' 00:25:16.155 00:59:28 -- common/autotest_common.sh@955 -- # kill 101834 00:25:16.155 [2024-12-03 00:59:28.576677] app.c: 883:log_deprecation_hits: *WARNING*: rpc_nvmf_get_subsystems: deprecation 'listener.transport is deprecated in favor of trtype' scheduled for removal in v24.05 hit 1 times 00:25:16.155 00:59:28 -- common/autotest_common.sh@960 -- # wait 101834 00:25:16.414 00:59:28 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:25:16.414 00:59:28 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:25:16.414 00:59:28 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:25:16.414 00:59:28 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:25:16.414 00:59:28 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:25:16.414 00:59:28 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:16.414 00:59:28 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:25:16.414 00:59:28 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:16.414 00:59:28 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:25:16.414 ************************************ 00:25:16.414 END TEST nvmf_identify_passthru 00:25:16.414 ************************************ 00:25:16.414 00:25:16.414 real 0m3.313s 00:25:16.414 user 0m8.239s 00:25:16.414 sys 0m0.854s 00:25:16.414 00:59:28 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:25:16.414 00:59:28 -- common/autotest_common.sh@10 -- # set +x 00:25:16.414 00:59:28 -- spdk/autotest.sh@287 -- # run_test nvmf_dif /home/vagrant/spdk_repo/spdk/test/nvmf/target/dif.sh 00:25:16.414 00:59:28 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:25:16.414 00:59:28 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:25:16.414 00:59:28 -- common/autotest_common.sh@10 -- # set +x 00:25:16.414 ************************************ 00:25:16.414 START TEST nvmf_dif 00:25:16.414 ************************************ 00:25:16.414 00:59:28 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/dif.sh 00:25:16.674 * Looking for test storage... 
00:25:16.674 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:25:16.674 00:59:28 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:25:16.674 00:59:28 -- common/autotest_common.sh@1690 -- # lcov --version 00:25:16.674 00:59:28 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:25:16.674 00:59:29 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:25:16.674 00:59:29 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:25:16.674 00:59:29 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:25:16.674 00:59:29 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:25:16.674 00:59:29 -- scripts/common.sh@335 -- # IFS=.-: 00:25:16.674 00:59:29 -- scripts/common.sh@335 -- # read -ra ver1 00:25:16.674 00:59:29 -- scripts/common.sh@336 -- # IFS=.-: 00:25:16.674 00:59:29 -- scripts/common.sh@336 -- # read -ra ver2 00:25:16.674 00:59:29 -- scripts/common.sh@337 -- # local 'op=<' 00:25:16.674 00:59:29 -- scripts/common.sh@339 -- # ver1_l=2 00:25:16.674 00:59:29 -- scripts/common.sh@340 -- # ver2_l=1 00:25:16.674 00:59:29 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:25:16.674 00:59:29 -- scripts/common.sh@343 -- # case "$op" in 00:25:16.674 00:59:29 -- scripts/common.sh@344 -- # : 1 00:25:16.674 00:59:29 -- scripts/common.sh@363 -- # (( v = 0 )) 00:25:16.674 00:59:29 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:25:16.674 00:59:29 -- scripts/common.sh@364 -- # decimal 1 00:25:16.674 00:59:29 -- scripts/common.sh@352 -- # local d=1 00:25:16.674 00:59:29 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:25:16.674 00:59:29 -- scripts/common.sh@354 -- # echo 1 00:25:16.674 00:59:29 -- scripts/common.sh@364 -- # ver1[v]=1 00:25:16.674 00:59:29 -- scripts/common.sh@365 -- # decimal 2 00:25:16.674 00:59:29 -- scripts/common.sh@352 -- # local d=2 00:25:16.674 00:59:29 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:25:16.674 00:59:29 -- scripts/common.sh@354 -- # echo 2 00:25:16.674 00:59:29 -- scripts/common.sh@365 -- # ver2[v]=2 00:25:16.674 00:59:29 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:25:16.674 00:59:29 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:25:16.674 00:59:29 -- scripts/common.sh@367 -- # return 0 00:25:16.674 00:59:29 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:25:16.674 00:59:29 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:25:16.674 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:16.674 --rc genhtml_branch_coverage=1 00:25:16.674 --rc genhtml_function_coverage=1 00:25:16.674 --rc genhtml_legend=1 00:25:16.674 --rc geninfo_all_blocks=1 00:25:16.674 --rc geninfo_unexecuted_blocks=1 00:25:16.674 00:25:16.674 ' 00:25:16.674 00:59:29 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:25:16.674 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:16.674 --rc genhtml_branch_coverage=1 00:25:16.674 --rc genhtml_function_coverage=1 00:25:16.674 --rc genhtml_legend=1 00:25:16.674 --rc geninfo_all_blocks=1 00:25:16.674 --rc geninfo_unexecuted_blocks=1 00:25:16.674 00:25:16.674 ' 00:25:16.674 00:59:29 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:25:16.674 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:16.674 --rc genhtml_branch_coverage=1 00:25:16.674 --rc genhtml_function_coverage=1 00:25:16.674 --rc genhtml_legend=1 00:25:16.674 --rc geninfo_all_blocks=1 00:25:16.674 --rc geninfo_unexecuted_blocks=1 00:25:16.674 00:25:16.674 ' 00:25:16.674 
00:59:29 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:25:16.674 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:16.674 --rc genhtml_branch_coverage=1 00:25:16.674 --rc genhtml_function_coverage=1 00:25:16.674 --rc genhtml_legend=1 00:25:16.674 --rc geninfo_all_blocks=1 00:25:16.674 --rc geninfo_unexecuted_blocks=1 00:25:16.674 00:25:16.674 ' 00:25:16.674 00:59:29 -- target/dif.sh@13 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:25:16.674 00:59:29 -- nvmf/common.sh@7 -- # uname -s 00:25:16.674 00:59:29 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:16.674 00:59:29 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:16.674 00:59:29 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:16.674 00:59:29 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:16.674 00:59:29 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:16.674 00:59:29 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:16.674 00:59:29 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:16.674 00:59:29 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:16.674 00:59:29 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:16.674 00:59:29 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:16.674 00:59:29 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:15939434-fa82-47b6-ae7c-b62ba203cee8 00:25:16.674 00:59:29 -- nvmf/common.sh@18 -- # NVME_HOSTID=15939434-fa82-47b6-ae7c-b62ba203cee8 00:25:16.674 00:59:29 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:16.674 00:59:29 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:16.674 00:59:29 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:25:16.674 00:59:29 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:25:16.674 00:59:29 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:16.674 00:59:29 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:16.674 00:59:29 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:16.674 00:59:29 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:16.674 00:59:29 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:16.675 00:59:29 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:16.675 00:59:29 -- paths/export.sh@5 -- # export PATH 00:25:16.675 00:59:29 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:16.675 00:59:29 -- nvmf/common.sh@46 -- # : 0 00:25:16.675 00:59:29 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:25:16.675 00:59:29 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:25:16.675 00:59:29 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:25:16.675 00:59:29 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:16.675 00:59:29 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:16.675 00:59:29 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:25:16.675 00:59:29 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:25:16.675 00:59:29 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:25:16.675 00:59:29 -- target/dif.sh@15 -- # NULL_META=16 00:25:16.675 00:59:29 -- target/dif.sh@15 -- # NULL_BLOCK_SIZE=512 00:25:16.675 00:59:29 -- target/dif.sh@15 -- # NULL_SIZE=64 00:25:16.675 00:59:29 -- target/dif.sh@15 -- # NULL_DIF=1 00:25:16.675 00:59:29 -- target/dif.sh@135 -- # nvmftestinit 00:25:16.675 00:59:29 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:25:16.675 00:59:29 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:16.675 00:59:29 -- nvmf/common.sh@436 -- # prepare_net_devs 00:25:16.675 00:59:29 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:25:16.675 00:59:29 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:25:16.675 00:59:29 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:16.675 00:59:29 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:25:16.675 00:59:29 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:16.675 00:59:29 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:25:16.675 00:59:29 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:25:16.675 00:59:29 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:25:16.675 00:59:29 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:25:16.675 00:59:29 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:25:16.675 00:59:29 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:25:16.675 00:59:29 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:16.675 00:59:29 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:16.675 00:59:29 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:25:16.675 00:59:29 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:25:16.675 00:59:29 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:25:16.675 00:59:29 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:25:16.675 00:59:29 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:25:16.675 00:59:29 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:16.675 00:59:29 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:25:16.675 00:59:29 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:25:16.675 00:59:29 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:25:16.675 00:59:29 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:25:16.675 00:59:29 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:25:16.675 00:59:29 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:25:16.675 Cannot find device "nvmf_tgt_br" 
00:25:16.675 00:59:29 -- nvmf/common.sh@154 -- # true 00:25:16.675 00:59:29 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:25:16.675 Cannot find device "nvmf_tgt_br2" 00:25:16.675 00:59:29 -- nvmf/common.sh@155 -- # true 00:25:16.675 00:59:29 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:25:16.675 00:59:29 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:25:16.675 Cannot find device "nvmf_tgt_br" 00:25:16.675 00:59:29 -- nvmf/common.sh@157 -- # true 00:25:16.675 00:59:29 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:25:16.675 Cannot find device "nvmf_tgt_br2" 00:25:16.675 00:59:29 -- nvmf/common.sh@158 -- # true 00:25:16.675 00:59:29 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:25:16.933 00:59:29 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:25:16.933 00:59:29 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:25:16.933 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:25:16.933 00:59:29 -- nvmf/common.sh@161 -- # true 00:25:16.933 00:59:29 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:25:16.933 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:25:16.933 00:59:29 -- nvmf/common.sh@162 -- # true 00:25:16.933 00:59:29 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:25:16.933 00:59:29 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:25:16.933 00:59:29 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:25:16.933 00:59:29 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:25:16.933 00:59:29 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:25:16.933 00:59:29 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:25:16.933 00:59:29 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:25:16.933 00:59:29 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:25:16.933 00:59:29 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:25:16.933 00:59:29 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:25:16.933 00:59:29 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:25:16.933 00:59:29 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:25:16.933 00:59:29 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:25:16.933 00:59:29 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:25:16.933 00:59:29 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:25:16.933 00:59:29 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:25:16.933 00:59:29 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:25:16.933 00:59:29 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:25:16.933 00:59:29 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:25:16.933 00:59:29 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:25:16.933 00:59:29 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:25:16.933 00:59:29 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:25:16.933 00:59:29 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:25:16.933 00:59:29 -- 
nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:25:16.933 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:16.933 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.063 ms 00:25:16.933 00:25:16.933 --- 10.0.0.2 ping statistics --- 00:25:16.933 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:16.933 rtt min/avg/max/mdev = 0.063/0.063/0.063/0.000 ms 00:25:16.933 00:59:29 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:25:16.933 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:25:16.933 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.038 ms 00:25:16.933 00:25:16.933 --- 10.0.0.3 ping statistics --- 00:25:16.933 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:16.933 rtt min/avg/max/mdev = 0.038/0.038/0.038/0.000 ms 00:25:16.933 00:59:29 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:25:16.933 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:25:16.933 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.037 ms 00:25:16.933 00:25:16.933 --- 10.0.0.1 ping statistics --- 00:25:16.933 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:16.933 rtt min/avg/max/mdev = 0.037/0.037/0.037/0.000 ms 00:25:16.933 00:59:29 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:16.933 00:59:29 -- nvmf/common.sh@421 -- # return 0 00:25:16.933 00:59:29 -- nvmf/common.sh@438 -- # '[' iso == iso ']' 00:25:16.933 00:59:29 -- nvmf/common.sh@439 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:25:17.500 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:25:17.500 0000:00:06.0 (1b36 0010): Already using the uio_pci_generic driver 00:25:17.500 0000:00:07.0 (1b36 0010): Already using the uio_pci_generic driver 00:25:17.500 00:59:29 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:17.500 00:59:29 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:25:17.501 00:59:29 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:25:17.501 00:59:29 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:17.501 00:59:29 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:25:17.501 00:59:29 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:25:17.501 00:59:29 -- target/dif.sh@136 -- # NVMF_TRANSPORT_OPTS+=' --dif-insert-or-strip' 00:25:17.501 00:59:29 -- target/dif.sh@137 -- # nvmfappstart 00:25:17.501 00:59:29 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:25:17.501 00:59:29 -- common/autotest_common.sh@722 -- # xtrace_disable 00:25:17.501 00:59:29 -- common/autotest_common.sh@10 -- # set +x 00:25:17.501 00:59:29 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:25:17.501 00:59:29 -- nvmf/common.sh@469 -- # nvmfpid=102190 00:25:17.501 00:59:29 -- nvmf/common.sh@470 -- # waitforlisten 102190 00:25:17.501 00:59:29 -- common/autotest_common.sh@829 -- # '[' -z 102190 ']' 00:25:17.501 00:59:29 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:17.501 00:59:29 -- common/autotest_common.sh@834 -- # local max_retries=100 00:25:17.501 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:17.501 00:59:29 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
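The dif tests rebuild the same virtual topology that nvmf_veth_init traced above: a network namespace for the target, three veth pairs, a bridge tying the host-side peers together, and an iptables rule admitting NVMe/TCP traffic on port 4420. A condensed sketch of that setup, run as root, using the interface and address names that appear in this log:

#!/usr/bin/env bash
# Sketch of the nvmf_veth_init steps seen in the trace above.
set -euo pipefail
NS=nvmf_tgt_ns_spdk

ip netns add "$NS"
# One veth pair for the initiator side, two for the target side.
ip link add nvmf_init_if type veth peer name nvmf_init_br
ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
ip link set nvmf_tgt_if  netns "$NS"
ip link set nvmf_tgt_if2 netns "$NS"

# 10.0.0.1 = initiator; 10.0.0.2 and 10.0.0.3 = target listeners inside the netns.
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec "$NS" ip addr add 10.0.0.2/24 dev nvmf_tgt_if
ip netns exec "$NS" ip addr add 10.0.0.3/24 dev nvmf_tgt_if2

for dev in nvmf_init_if nvmf_init_br nvmf_tgt_br nvmf_tgt_br2; do ip link set "$dev" up; done
ip netns exec "$NS" ip link set nvmf_tgt_if up
ip netns exec "$NS" ip link set nvmf_tgt_if2 up
ip netns exec "$NS" ip link set lo up

# Bridge the host-side peers and allow NVMe/TCP on port 4420.
ip link add nvmf_br type bridge
ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br  master nvmf_br
ip link set nvmf_tgt_br2 master nvmf_br
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT

ping -c 1 10.0.0.2                      # initiator -> target
ip netns exec "$NS" ping -c 1 10.0.0.1  # target -> initiator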
00:25:17.501 00:59:29 -- common/autotest_common.sh@838 -- # xtrace_disable 00:25:17.501 00:59:29 -- common/autotest_common.sh@10 -- # set +x 00:25:17.501 [2024-12-03 00:59:29.943075] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:25:17.501 [2024-12-03 00:59:29.943172] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:17.759 [2024-12-03 00:59:30.086771] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:17.759 [2024-12-03 00:59:30.160370] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:25:17.759 [2024-12-03 00:59:30.160556] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:17.759 [2024-12-03 00:59:30.160574] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:17.759 [2024-12-03 00:59:30.160586] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:25:17.759 [2024-12-03 00:59:30.160617] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:25:18.694 00:59:30 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:25:18.694 00:59:30 -- common/autotest_common.sh@862 -- # return 0 00:25:18.694 00:59:30 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:25:18.694 00:59:30 -- common/autotest_common.sh@728 -- # xtrace_disable 00:25:18.694 00:59:30 -- common/autotest_common.sh@10 -- # set +x 00:25:18.694 00:59:31 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:18.694 00:59:31 -- target/dif.sh@139 -- # create_transport 00:25:18.694 00:59:31 -- target/dif.sh@50 -- # rpc_cmd nvmf_create_transport -t tcp -o --dif-insert-or-strip 00:25:18.694 00:59:31 -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:18.694 00:59:31 -- common/autotest_common.sh@10 -- # set +x 00:25:18.694 [2024-12-03 00:59:31.047312] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:18.694 00:59:31 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:18.694 00:59:31 -- target/dif.sh@141 -- # run_test fio_dif_1_default fio_dif_1 00:25:18.694 00:59:31 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:25:18.694 00:59:31 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:25:18.694 00:59:31 -- common/autotest_common.sh@10 -- # set +x 00:25:18.694 ************************************ 00:25:18.694 START TEST fio_dif_1_default 00:25:18.694 ************************************ 00:25:18.694 00:59:31 -- common/autotest_common.sh@1114 -- # fio_dif_1 00:25:18.694 00:59:31 -- target/dif.sh@86 -- # create_subsystems 0 00:25:18.694 00:59:31 -- target/dif.sh@28 -- # local sub 00:25:18.694 00:59:31 -- target/dif.sh@30 -- # for sub in "$@" 00:25:18.694 00:59:31 -- target/dif.sh@31 -- # create_subsystem 0 00:25:18.694 00:59:31 -- target/dif.sh@18 -- # local sub_id=0 00:25:18.694 00:59:31 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:25:18.694 00:59:31 -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:18.694 00:59:31 -- common/autotest_common.sh@10 -- # set +x 00:25:18.694 bdev_null0 00:25:18.694 00:59:31 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:18.694 00:59:31 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem 
nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:25:18.694 00:59:31 -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:18.694 00:59:31 -- common/autotest_common.sh@10 -- # set +x 00:25:18.694 00:59:31 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:18.694 00:59:31 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:25:18.694 00:59:31 -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:18.694 00:59:31 -- common/autotest_common.sh@10 -- # set +x 00:25:18.694 00:59:31 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:18.694 00:59:31 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:25:18.694 00:59:31 -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:18.694 00:59:31 -- common/autotest_common.sh@10 -- # set +x 00:25:18.694 [2024-12-03 00:59:31.091478] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:18.694 00:59:31 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:18.694 00:59:31 -- target/dif.sh@87 -- # fio /dev/fd/62 00:25:18.694 00:59:31 -- target/dif.sh@87 -- # create_json_sub_conf 0 00:25:18.694 00:59:31 -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:25:18.694 00:59:31 -- nvmf/common.sh@520 -- # config=() 00:25:18.694 00:59:31 -- nvmf/common.sh@520 -- # local subsystem config 00:25:18.694 00:59:31 -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:25:18.694 00:59:31 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:25:18.694 00:59:31 -- common/autotest_common.sh@1345 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:25:18.694 00:59:31 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:25:18.694 { 00:25:18.694 "params": { 00:25:18.694 "name": "Nvme$subsystem", 00:25:18.694 "trtype": "$TEST_TRANSPORT", 00:25:18.694 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:18.694 "adrfam": "ipv4", 00:25:18.694 "trsvcid": "$NVMF_PORT", 00:25:18.694 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:18.694 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:18.694 "hdgst": ${hdgst:-false}, 00:25:18.694 "ddgst": ${ddgst:-false} 00:25:18.694 }, 00:25:18.694 "method": "bdev_nvme_attach_controller" 00:25:18.694 } 00:25:18.694 EOF 00:25:18.694 )") 00:25:18.694 00:59:31 -- target/dif.sh@82 -- # gen_fio_conf 00:25:18.694 00:59:31 -- common/autotest_common.sh@1326 -- # local fio_dir=/usr/src/fio 00:25:18.694 00:59:31 -- target/dif.sh@54 -- # local file 00:25:18.694 00:59:31 -- common/autotest_common.sh@1328 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:25:18.694 00:59:31 -- target/dif.sh@56 -- # cat 00:25:18.694 00:59:31 -- common/autotest_common.sh@1328 -- # local sanitizers 00:25:18.694 00:59:31 -- common/autotest_common.sh@1329 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:25:18.694 00:59:31 -- common/autotest_common.sh@1330 -- # shift 00:25:18.694 00:59:31 -- common/autotest_common.sh@1332 -- # local asan_lib= 00:25:18.694 00:59:31 -- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:25:18.694 00:59:31 -- nvmf/common.sh@542 -- # cat 00:25:18.694 00:59:31 -- target/dif.sh@72 -- # (( file = 1 )) 00:25:18.694 00:59:31 -- common/autotest_common.sh@1334 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:25:18.694 00:59:31 -- target/dif.sh@72 -- # (( file <= files )) 00:25:18.694 
00:59:31 -- common/autotest_common.sh@1334 -- # grep libasan 00:25:18.694 00:59:31 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:25:18.694 00:59:31 -- nvmf/common.sh@544 -- # jq . 00:25:18.694 00:59:31 -- nvmf/common.sh@545 -- # IFS=, 00:25:18.694 00:59:31 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:25:18.695 "params": { 00:25:18.695 "name": "Nvme0", 00:25:18.695 "trtype": "tcp", 00:25:18.695 "traddr": "10.0.0.2", 00:25:18.695 "adrfam": "ipv4", 00:25:18.695 "trsvcid": "4420", 00:25:18.695 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:25:18.695 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:25:18.695 "hdgst": false, 00:25:18.695 "ddgst": false 00:25:18.695 }, 00:25:18.695 "method": "bdev_nvme_attach_controller" 00:25:18.695 }' 00:25:18.695 00:59:31 -- common/autotest_common.sh@1334 -- # asan_lib= 00:25:18.695 00:59:31 -- common/autotest_common.sh@1335 -- # [[ -n '' ]] 00:25:18.695 00:59:31 -- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:25:18.695 00:59:31 -- common/autotest_common.sh@1334 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:25:18.695 00:59:31 -- common/autotest_common.sh@1334 -- # grep libclang_rt.asan 00:25:18.695 00:59:31 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:25:18.695 00:59:31 -- common/autotest_common.sh@1334 -- # asan_lib= 00:25:18.695 00:59:31 -- common/autotest_common.sh@1335 -- # [[ -n '' ]] 00:25:18.695 00:59:31 -- common/autotest_common.sh@1341 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:25:18.695 00:59:31 -- common/autotest_common.sh@1341 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:25:18.953 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:25:18.953 fio-3.35 00:25:18.953 Starting 1 thread 00:25:19.521 [2024-12-03 00:59:31.736392] rpc.c: 181:spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
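The fio run traced here goes through SPDK's fio bdev plugin: the harness preloads build/fio/spdk_bdev and hands fio the generated attach-controller JSON and the job file over /dev/fd. A rough equivalent using ordinary files, assuming the paths from this log; the outer "subsystems" wrapper and the job options beyond what the fio banner shows (randread, 4 KiB blocks, iodepth 4) are assumptions of the sketch, since dif.sh's exact job file is not printed in the trace:

#!/usr/bin/env bash
# Sketch of the fio invocation seen above, with files instead of /dev/fd.
set -euo pipefail
SPDK=/home/vagrant/spdk_repo/spdk

cat > /tmp/bdev.json <<'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "method": "bdev_nvme_attach_controller",
          "params": {
            "name": "Nvme0",
            "trtype": "tcp",
            "traddr": "10.0.0.2",
            "adrfam": "ipv4",
            "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode0",
            "hostnqn": "nqn.2016-06.io.spdk:host0",
            "hdgst": false,
            "ddgst": false
          }
        }
      ]
    }
  ]
}
EOF

cat > /tmp/dif.fio <<'EOF'
[filename0]
thread=1
filename=Nvme0n1
rw=randread
bs=4096
iodepth=4
time_based=1
runtime=10
EOF

LD_PRELOAD="$SPDK/build/fio/spdk_bdev" /usr/src/fio/fio \
    --ioengine=spdk_bdev --spdk_json_conf /tmp/bdev.json /tmp/dif.fio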
00:25:19.521 [2024-12-03 00:59:31.736493] rpc.c: 90:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:25:29.491 00:25:29.491 filename0: (groupid=0, jobs=1): err= 0: pid=102276: Tue Dec 3 00:59:41 2024 00:25:29.491 read: IOPS=3076, BW=12.0MiB/s (12.6MB/s)(121MiB/10041msec) 00:25:29.491 slat (usec): min=5, max=202, avg= 6.93, stdev= 3.04 00:25:29.491 clat (usec): min=340, max=42465, avg=1279.32, stdev=5853.12 00:25:29.491 lat (usec): min=346, max=42474, avg=1286.26, stdev=5853.17 00:25:29.491 clat percentiles (usec): 00:25:29.491 | 1.00th=[ 355], 5.00th=[ 375], 10.00th=[ 379], 20.00th=[ 388], 00:25:29.491 | 30.00th=[ 392], 40.00th=[ 396], 50.00th=[ 404], 60.00th=[ 412], 00:25:29.491 | 70.00th=[ 420], 80.00th=[ 437], 90.00th=[ 474], 95.00th=[ 578], 00:25:29.491 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41681], 99.95th=[42206], 00:25:29.491 | 99.99th=[42206] 00:25:29.491 bw ( KiB/s): min= 4224, max=18848, per=100.00%, avg=12353.60, stdev=3960.83, samples=20 00:25:29.491 iops : min= 1056, max= 4712, avg=3088.40, stdev=990.21, samples=20 00:25:29.491 lat (usec) : 500=91.87%, 750=5.93%, 1000=0.04% 00:25:29.491 lat (msec) : 2=0.01%, 4=0.01%, 10=0.01%, 50=2.12% 00:25:29.491 cpu : usr=89.61%, sys=8.71%, ctx=199, majf=0, minf=0 00:25:29.491 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:25:29.491 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:29.491 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:29.491 issued rwts: total=30888,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:29.491 latency : target=0, window=0, percentile=100.00%, depth=4 00:25:29.491 00:25:29.491 Run status group 0 (all jobs): 00:25:29.491 READ: bw=12.0MiB/s (12.6MB/s), 12.0MiB/s-12.0MiB/s (12.6MB/s-12.6MB/s), io=121MiB (127MB), run=10041-10041msec 00:25:29.748 00:59:42 -- target/dif.sh@88 -- # destroy_subsystems 0 00:25:29.748 00:59:42 -- target/dif.sh@43 -- # local sub 00:25:29.748 00:59:42 -- target/dif.sh@45 -- # for sub in "$@" 00:25:29.748 00:59:42 -- target/dif.sh@46 -- # destroy_subsystem 0 00:25:29.748 00:59:42 -- target/dif.sh@36 -- # local sub_id=0 00:25:29.748 00:59:42 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:25:29.748 00:59:42 -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:29.748 00:59:42 -- common/autotest_common.sh@10 -- # set +x 00:25:29.748 00:59:42 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:29.748 00:59:42 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:25:29.748 00:59:42 -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:29.748 00:59:42 -- common/autotest_common.sh@10 -- # set +x 00:25:29.748 00:59:42 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:29.748 00:25:29.748 real 0m11.124s 00:25:29.748 user 0m9.704s 00:25:29.748 sys 0m1.171s 00:25:29.748 00:59:42 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:25:29.748 00:59:42 -- common/autotest_common.sh@10 -- # set +x 00:25:29.748 ************************************ 00:25:29.748 END TEST fio_dif_1_default 00:25:29.748 ************************************ 00:25:29.748 00:59:42 -- target/dif.sh@142 -- # run_test fio_dif_1_multi_subsystems fio_dif_1_multi_subsystems 00:25:29.748 00:59:42 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:25:29.748 00:59:42 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:25:29.748 00:59:42 -- common/autotest_common.sh@10 -- # set +x 00:25:29.748 ************************************ 
00:25:29.748 START TEST fio_dif_1_multi_subsystems 00:25:29.748 ************************************ 00:25:29.748 00:59:42 -- common/autotest_common.sh@1114 -- # fio_dif_1_multi_subsystems 00:25:29.748 00:59:42 -- target/dif.sh@92 -- # local files=1 00:25:29.748 00:59:42 -- target/dif.sh@94 -- # create_subsystems 0 1 00:25:29.748 00:59:42 -- target/dif.sh@28 -- # local sub 00:25:29.748 00:59:42 -- target/dif.sh@30 -- # for sub in "$@" 00:25:29.748 00:59:42 -- target/dif.sh@31 -- # create_subsystem 0 00:25:29.748 00:59:42 -- target/dif.sh@18 -- # local sub_id=0 00:25:29.748 00:59:42 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:25:29.748 00:59:42 -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:29.748 00:59:42 -- common/autotest_common.sh@10 -- # set +x 00:25:29.748 bdev_null0 00:25:29.748 00:59:42 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:29.748 00:59:42 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:25:29.748 00:59:42 -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:29.748 00:59:42 -- common/autotest_common.sh@10 -- # set +x 00:25:30.006 00:59:42 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:30.006 00:59:42 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:25:30.006 00:59:42 -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:30.006 00:59:42 -- common/autotest_common.sh@10 -- # set +x 00:25:30.006 00:59:42 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:30.006 00:59:42 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:25:30.006 00:59:42 -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:30.006 00:59:42 -- common/autotest_common.sh@10 -- # set +x 00:25:30.006 [2024-12-03 00:59:42.274270] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:30.006 00:59:42 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:30.006 00:59:42 -- target/dif.sh@30 -- # for sub in "$@" 00:25:30.006 00:59:42 -- target/dif.sh@31 -- # create_subsystem 1 00:25:30.006 00:59:42 -- target/dif.sh@18 -- # local sub_id=1 00:25:30.006 00:59:42 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:25:30.006 00:59:42 -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:30.006 00:59:42 -- common/autotest_common.sh@10 -- # set +x 00:25:30.006 bdev_null1 00:25:30.006 00:59:42 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:30.006 00:59:42 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:25:30.006 00:59:42 -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:30.006 00:59:42 -- common/autotest_common.sh@10 -- # set +x 00:25:30.006 00:59:42 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:30.006 00:59:42 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:25:30.006 00:59:42 -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:30.006 00:59:42 -- common/autotest_common.sh@10 -- # set +x 00:25:30.006 00:59:42 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:30.006 00:59:42 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:25:30.006 00:59:42 -- common/autotest_common.sh@561 -- # xtrace_disable 
00:25:30.006 00:59:42 -- common/autotest_common.sh@10 -- # set +x 00:25:30.006 00:59:42 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:30.006 00:59:42 -- target/dif.sh@95 -- # fio /dev/fd/62 00:25:30.006 00:59:42 -- target/dif.sh@95 -- # create_json_sub_conf 0 1 00:25:30.006 00:59:42 -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:25:30.006 00:59:42 -- nvmf/common.sh@520 -- # config=() 00:25:30.006 00:59:42 -- nvmf/common.sh@520 -- # local subsystem config 00:25:30.006 00:59:42 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:25:30.006 00:59:42 -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:25:30.006 00:59:42 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:25:30.006 { 00:25:30.006 "params": { 00:25:30.006 "name": "Nvme$subsystem", 00:25:30.006 "trtype": "$TEST_TRANSPORT", 00:25:30.006 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:30.006 "adrfam": "ipv4", 00:25:30.006 "trsvcid": "$NVMF_PORT", 00:25:30.006 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:30.006 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:30.006 "hdgst": ${hdgst:-false}, 00:25:30.006 "ddgst": ${ddgst:-false} 00:25:30.006 }, 00:25:30.006 "method": "bdev_nvme_attach_controller" 00:25:30.006 } 00:25:30.006 EOF 00:25:30.006 )") 00:25:30.006 00:59:42 -- target/dif.sh@82 -- # gen_fio_conf 00:25:30.006 00:59:42 -- target/dif.sh@54 -- # local file 00:25:30.006 00:59:42 -- common/autotest_common.sh@1345 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:25:30.006 00:59:42 -- target/dif.sh@56 -- # cat 00:25:30.006 00:59:42 -- common/autotest_common.sh@1326 -- # local fio_dir=/usr/src/fio 00:25:30.006 00:59:42 -- common/autotest_common.sh@1328 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:25:30.006 00:59:42 -- common/autotest_common.sh@1328 -- # local sanitizers 00:25:30.006 00:59:42 -- common/autotest_common.sh@1329 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:25:30.006 00:59:42 -- common/autotest_common.sh@1330 -- # shift 00:25:30.006 00:59:42 -- common/autotest_common.sh@1332 -- # local asan_lib= 00:25:30.006 00:59:42 -- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:25:30.006 00:59:42 -- nvmf/common.sh@542 -- # cat 00:25:30.006 00:59:42 -- target/dif.sh@72 -- # (( file = 1 )) 00:25:30.006 00:59:42 -- target/dif.sh@72 -- # (( file <= files )) 00:25:30.006 00:59:42 -- target/dif.sh@73 -- # cat 00:25:30.006 00:59:42 -- common/autotest_common.sh@1334 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:25:30.006 00:59:42 -- common/autotest_common.sh@1334 -- # grep libasan 00:25:30.006 00:59:42 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:25:30.006 00:59:42 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:25:30.006 00:59:42 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:25:30.006 { 00:25:30.006 "params": { 00:25:30.006 "name": "Nvme$subsystem", 00:25:30.006 "trtype": "$TEST_TRANSPORT", 00:25:30.006 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:30.006 "adrfam": "ipv4", 00:25:30.006 "trsvcid": "$NVMF_PORT", 00:25:30.006 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:30.006 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:30.006 "hdgst": ${hdgst:-false}, 00:25:30.006 "ddgst": ${ddgst:-false} 00:25:30.006 }, 00:25:30.006 "method": "bdev_nvme_attach_controller" 00:25:30.006 } 00:25:30.006 EOF 00:25:30.006 )") 00:25:30.006 00:59:42 -- nvmf/common.sh@542 -- # cat 00:25:30.006 
00:59:42 -- target/dif.sh@72 -- # (( file++ )) 00:25:30.006 00:59:42 -- target/dif.sh@72 -- # (( file <= files )) 00:25:30.006 00:59:42 -- nvmf/common.sh@544 -- # jq . 00:25:30.006 00:59:42 -- nvmf/common.sh@545 -- # IFS=, 00:25:30.006 00:59:42 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:25:30.006 "params": { 00:25:30.006 "name": "Nvme0", 00:25:30.006 "trtype": "tcp", 00:25:30.006 "traddr": "10.0.0.2", 00:25:30.006 "adrfam": "ipv4", 00:25:30.006 "trsvcid": "4420", 00:25:30.006 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:25:30.006 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:25:30.006 "hdgst": false, 00:25:30.006 "ddgst": false 00:25:30.006 }, 00:25:30.006 "method": "bdev_nvme_attach_controller" 00:25:30.006 },{ 00:25:30.006 "params": { 00:25:30.006 "name": "Nvme1", 00:25:30.006 "trtype": "tcp", 00:25:30.006 "traddr": "10.0.0.2", 00:25:30.006 "adrfam": "ipv4", 00:25:30.006 "trsvcid": "4420", 00:25:30.006 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:25:30.006 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:25:30.006 "hdgst": false, 00:25:30.006 "ddgst": false 00:25:30.006 }, 00:25:30.006 "method": "bdev_nvme_attach_controller" 00:25:30.006 }' 00:25:30.006 00:59:42 -- common/autotest_common.sh@1334 -- # asan_lib= 00:25:30.006 00:59:42 -- common/autotest_common.sh@1335 -- # [[ -n '' ]] 00:25:30.006 00:59:42 -- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:25:30.006 00:59:42 -- common/autotest_common.sh@1334 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:25:30.006 00:59:42 -- common/autotest_common.sh@1334 -- # grep libclang_rt.asan 00:25:30.006 00:59:42 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:25:30.006 00:59:42 -- common/autotest_common.sh@1334 -- # asan_lib= 00:25:30.006 00:59:42 -- common/autotest_common.sh@1335 -- # [[ -n '' ]] 00:25:30.006 00:59:42 -- common/autotest_common.sh@1341 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:25:30.006 00:59:42 -- common/autotest_common.sh@1341 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:25:30.263 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:25:30.264 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:25:30.264 fio-3.35 00:25:30.264 Starting 2 threads 00:25:30.830 [2024-12-03 00:59:43.076777] rpc.c: 181:spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
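All of the subsystem plumbing above goes over SPDK's JSON-RPC socket through the rpc_cmd wrapper. Expressed directly with scripts/rpc.py (roughly what rpc_cmd wraps), the two-subsystem setup for this test would look like the sketch below; it assumes a running nvmf_tgt whose TCP transport was created earlier in the suite and simply mirrors the arguments visible in the log:

# Two null bdevs with 512-byte blocks, 16-byte metadata and DIF type 1 (arguments as logged)
scripts/rpc.py bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1
scripts/rpc.py bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1
# One NVMe-oF subsystem per bdev, both listening on the same NVMe/TCP portal
for i in 0 1; do
  scripts/rpc.py nvmf_create_subsystem "nqn.2016-06.io.spdk:cnode$i" \
    --serial-number "53313233-$i" --allow-any-host
  scripts/rpc.py nvmf_subsystem_add_ns "nqn.2016-06.io.spdk:cnode$i" "bdev_null$i"
  scripts/rpc.py nvmf_subsystem_add_listener "nqn.2016-06.io.spdk:cnode$i" \
    -t tcp -a 10.0.0.2 -s 4420
done

fio then reaches both subsystems through the two bdev_nvme_attach_controller entries printed above (Nvme0 against cnode0, Nvme1 against cnode1).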
00:25:30.830 [2024-12-03 00:59:43.076844] rpc.c: 90:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:25:40.799 00:25:40.799 filename0: (groupid=0, jobs=1): err= 0: pid=102441: Tue Dec 3 00:59:53 2024 00:25:40.799 read: IOPS=779, BW=3118KiB/s (3192kB/s)(30.5MiB/10023msec) 00:25:40.799 slat (usec): min=5, max=156, avg= 7.37, stdev= 3.20 00:25:40.799 clat (usec): min=375, max=42522, avg=5109.98, stdev=12929.37 00:25:40.799 lat (usec): min=381, max=42531, avg=5117.35, stdev=12929.49 00:25:40.799 clat percentiles (usec): 00:25:40.799 | 1.00th=[ 383], 5.00th=[ 388], 10.00th=[ 392], 20.00th=[ 400], 00:25:40.799 | 30.00th=[ 404], 40.00th=[ 412], 50.00th=[ 420], 60.00th=[ 429], 00:25:40.799 | 70.00th=[ 449], 80.00th=[ 537], 90.00th=[40633], 95.00th=[41157], 00:25:40.799 | 99.00th=[41157], 99.50th=[41681], 99.90th=[42206], 99.95th=[42730], 00:25:40.799 | 99.99th=[42730] 00:25:40.799 bw ( KiB/s): min= 1184, max= 4672, per=42.48%, avg=3122.55, stdev=886.78, samples=20 00:25:40.799 iops : min= 296, max= 1168, avg=780.60, stdev=221.70, samples=20 00:25:40.799 lat (usec) : 500=78.76%, 750=8.51%, 1000=1.14% 00:25:40.799 lat (msec) : 2=0.06%, 50=11.52% 00:25:40.799 cpu : usr=94.98%, sys=4.13%, ctx=90, majf=0, minf=9 00:25:40.799 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:25:40.799 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:40.799 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:40.799 issued rwts: total=7812,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:40.799 latency : target=0, window=0, percentile=100.00%, depth=4 00:25:40.799 filename1: (groupid=0, jobs=1): err= 0: pid=102442: Tue Dec 3 00:59:53 2024 00:25:40.799 read: IOPS=1060, BW=4241KiB/s (4343kB/s)(41.4MiB/10002msec) 00:25:40.799 slat (nsec): min=5772, max=35059, avg=7167.99, stdev=2469.02 00:25:40.799 clat (usec): min=370, max=42447, avg=3751.67, stdev=11104.21 00:25:40.799 lat (usec): min=376, max=42456, avg=3758.84, stdev=11104.31 00:25:40.799 clat percentiles (usec): 00:25:40.799 | 1.00th=[ 383], 5.00th=[ 388], 10.00th=[ 392], 20.00th=[ 396], 00:25:40.799 | 30.00th=[ 404], 40.00th=[ 408], 50.00th=[ 416], 60.00th=[ 424], 00:25:40.799 | 70.00th=[ 437], 80.00th=[ 465], 90.00th=[ 725], 95.00th=[40633], 00:25:40.799 | 99.00th=[41157], 99.50th=[41681], 99.90th=[42206], 99.95th=[42206], 00:25:40.799 | 99.99th=[42206] 00:25:40.799 bw ( KiB/s): min= 1248, max= 8544, per=58.07%, avg=4268.26, stdev=1572.74, samples=19 00:25:40.799 iops : min= 312, max= 2136, avg=1067.05, stdev=393.17, samples=19 00:25:40.799 lat (usec) : 500=84.10%, 750=7.00%, 1000=0.68% 00:25:40.799 lat (msec) : 2=0.04%, 50=8.19% 00:25:40.799 cpu : usr=94.69%, sys=4.61%, ctx=11, majf=0, minf=0 00:25:40.799 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:25:40.799 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:40.799 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:40.799 issued rwts: total=10604,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:40.799 latency : target=0, window=0, percentile=100.00%, depth=4 00:25:40.799 00:25:40.799 Run status group 0 (all jobs): 00:25:40.799 READ: bw=7349KiB/s (7526kB/s), 3118KiB/s-4241KiB/s (3192kB/s-4343kB/s), io=71.9MiB (75.4MB), run=10002-10023msec 00:25:41.058 00:59:53 -- target/dif.sh@96 -- # destroy_subsystems 0 1 00:25:41.058 00:59:53 -- target/dif.sh@43 -- # local sub 00:25:41.058 00:59:53 -- target/dif.sh@45 -- # for sub in "$@" 
00:25:41.058 00:59:53 -- target/dif.sh@46 -- # destroy_subsystem 0 00:25:41.058 00:59:53 -- target/dif.sh@36 -- # local sub_id=0 00:25:41.058 00:59:53 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:25:41.058 00:59:53 -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:41.058 00:59:53 -- common/autotest_common.sh@10 -- # set +x 00:25:41.058 00:59:53 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:41.058 00:59:53 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:25:41.058 00:59:53 -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:41.058 00:59:53 -- common/autotest_common.sh@10 -- # set +x 00:25:41.058 00:59:53 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:41.058 00:59:53 -- target/dif.sh@45 -- # for sub in "$@" 00:25:41.058 00:59:53 -- target/dif.sh@46 -- # destroy_subsystem 1 00:25:41.058 00:59:53 -- target/dif.sh@36 -- # local sub_id=1 00:25:41.058 00:59:53 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:25:41.058 00:59:53 -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:41.058 00:59:53 -- common/autotest_common.sh@10 -- # set +x 00:25:41.058 00:59:53 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:41.058 00:59:53 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:25:41.058 00:59:53 -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:41.058 00:59:53 -- common/autotest_common.sh@10 -- # set +x 00:25:41.058 00:59:53 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:41.058 00:25:41.058 real 0m11.282s 00:25:41.058 user 0m19.869s 00:25:41.058 sys 0m1.196s 00:25:41.058 00:59:53 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:25:41.058 00:59:53 -- common/autotest_common.sh@10 -- # set +x 00:25:41.058 ************************************ 00:25:41.058 END TEST fio_dif_1_multi_subsystems 00:25:41.058 ************************************ 00:25:41.058 00:59:53 -- target/dif.sh@143 -- # run_test fio_dif_rand_params fio_dif_rand_params 00:25:41.058 00:59:53 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:25:41.058 00:59:53 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:25:41.058 00:59:53 -- common/autotest_common.sh@10 -- # set +x 00:25:41.317 ************************************ 00:25:41.317 START TEST fio_dif_rand_params 00:25:41.317 ************************************ 00:25:41.317 00:59:53 -- common/autotest_common.sh@1114 -- # fio_dif_rand_params 00:25:41.317 00:59:53 -- target/dif.sh@100 -- # local NULL_DIF 00:25:41.317 00:59:53 -- target/dif.sh@101 -- # local bs numjobs runtime iodepth files 00:25:41.317 00:59:53 -- target/dif.sh@103 -- # NULL_DIF=3 00:25:41.317 00:59:53 -- target/dif.sh@103 -- # bs=128k 00:25:41.317 00:59:53 -- target/dif.sh@103 -- # numjobs=3 00:25:41.317 00:59:53 -- target/dif.sh@103 -- # iodepth=3 00:25:41.318 00:59:53 -- target/dif.sh@103 -- # runtime=5 00:25:41.318 00:59:53 -- target/dif.sh@105 -- # create_subsystems 0 00:25:41.318 00:59:53 -- target/dif.sh@28 -- # local sub 00:25:41.318 00:59:53 -- target/dif.sh@30 -- # for sub in "$@" 00:25:41.318 00:59:53 -- target/dif.sh@31 -- # create_subsystem 0 00:25:41.318 00:59:53 -- target/dif.sh@18 -- # local sub_id=0 00:25:41.318 00:59:53 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:25:41.318 00:59:53 -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:41.318 00:59:53 -- common/autotest_common.sh@10 -- # set +x 00:25:41.318 bdev_null0 00:25:41.318 00:59:53 -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:41.318 00:59:53 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:25:41.318 00:59:53 -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:41.318 00:59:53 -- common/autotest_common.sh@10 -- # set +x 00:25:41.318 00:59:53 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:41.318 00:59:53 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:25:41.318 00:59:53 -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:41.318 00:59:53 -- common/autotest_common.sh@10 -- # set +x 00:25:41.318 00:59:53 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:41.318 00:59:53 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:25:41.318 00:59:53 -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:41.318 00:59:53 -- common/autotest_common.sh@10 -- # set +x 00:25:41.318 [2024-12-03 00:59:53.615839] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:41.318 00:59:53 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:41.318 00:59:53 -- target/dif.sh@106 -- # fio /dev/fd/62 00:25:41.318 00:59:53 -- target/dif.sh@106 -- # create_json_sub_conf 0 00:25:41.318 00:59:53 -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:25:41.318 00:59:53 -- nvmf/common.sh@520 -- # config=() 00:25:41.318 00:59:53 -- nvmf/common.sh@520 -- # local subsystem config 00:25:41.318 00:59:53 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:25:41.318 00:59:53 -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:25:41.318 00:59:53 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:25:41.318 { 00:25:41.318 "params": { 00:25:41.318 "name": "Nvme$subsystem", 00:25:41.318 "trtype": "$TEST_TRANSPORT", 00:25:41.318 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:41.318 "adrfam": "ipv4", 00:25:41.318 "trsvcid": "$NVMF_PORT", 00:25:41.318 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:41.318 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:41.318 "hdgst": ${hdgst:-false}, 00:25:41.318 "ddgst": ${ddgst:-false} 00:25:41.318 }, 00:25:41.318 "method": "bdev_nvme_attach_controller" 00:25:41.318 } 00:25:41.318 EOF 00:25:41.318 )") 00:25:41.318 00:59:53 -- common/autotest_common.sh@1345 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:25:41.318 00:59:53 -- target/dif.sh@82 -- # gen_fio_conf 00:25:41.318 00:59:53 -- common/autotest_common.sh@1326 -- # local fio_dir=/usr/src/fio 00:25:41.318 00:59:53 -- target/dif.sh@54 -- # local file 00:25:41.318 00:59:53 -- target/dif.sh@56 -- # cat 00:25:41.318 00:59:53 -- common/autotest_common.sh@1328 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:25:41.318 00:59:53 -- common/autotest_common.sh@1328 -- # local sanitizers 00:25:41.318 00:59:53 -- common/autotest_common.sh@1329 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:25:41.318 00:59:53 -- common/autotest_common.sh@1330 -- # shift 00:25:41.318 00:59:53 -- common/autotest_common.sh@1332 -- # local asan_lib= 00:25:41.318 00:59:53 -- nvmf/common.sh@542 -- # cat 00:25:41.318 00:59:53 -- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:25:41.318 00:59:53 -- common/autotest_common.sh@1334 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:25:41.318 00:59:53 
-- target/dif.sh@72 -- # (( file = 1 )) 00:25:41.318 00:59:53 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:25:41.318 00:59:53 -- target/dif.sh@72 -- # (( file <= files )) 00:25:41.318 00:59:53 -- common/autotest_common.sh@1334 -- # grep libasan 00:25:41.318 00:59:53 -- nvmf/common.sh@544 -- # jq . 00:25:41.318 00:59:53 -- nvmf/common.sh@545 -- # IFS=, 00:25:41.318 00:59:53 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:25:41.318 "params": { 00:25:41.318 "name": "Nvme0", 00:25:41.318 "trtype": "tcp", 00:25:41.318 "traddr": "10.0.0.2", 00:25:41.318 "adrfam": "ipv4", 00:25:41.318 "trsvcid": "4420", 00:25:41.318 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:25:41.318 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:25:41.318 "hdgst": false, 00:25:41.318 "ddgst": false 00:25:41.318 }, 00:25:41.318 "method": "bdev_nvme_attach_controller" 00:25:41.318 }' 00:25:41.318 00:59:53 -- common/autotest_common.sh@1334 -- # asan_lib= 00:25:41.318 00:59:53 -- common/autotest_common.sh@1335 -- # [[ -n '' ]] 00:25:41.318 00:59:53 -- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:25:41.318 00:59:53 -- common/autotest_common.sh@1334 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:25:41.318 00:59:53 -- common/autotest_common.sh@1334 -- # grep libclang_rt.asan 00:25:41.318 00:59:53 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:25:41.318 00:59:53 -- common/autotest_common.sh@1334 -- # asan_lib= 00:25:41.318 00:59:53 -- common/autotest_common.sh@1335 -- # [[ -n '' ]] 00:25:41.318 00:59:53 -- common/autotest_common.sh@1341 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:25:41.318 00:59:53 -- common/autotest_common.sh@1341 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:25:41.577 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:25:41.577 ... 00:25:41.577 fio-3.35 00:25:41.577 Starting 3 threads 00:25:41.836 [2024-12-03 00:59:54.253118] rpc.c: 181:spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
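On the fio side, gen_fio_conf streams the job description over the second descriptor. The parameters set at the top of this test (bs=128k, numjobs=3, iodepth=3, runtime=5, and the randread pattern visible in the fio banner) could be written as a stand-alone job file roughly like the sketch below. The file names and the bdev name are assumptions: Nvme0n1 follows SPDK's usual <controller>n<namespace> naming for a controller attached as "Nvme0", and bdev.json again stands in for the attach-controller JSON shown above.

cat > randread-dif.fio <<'EOF'
[global]
thread=1
direct=1
rw=randread
bs=128k
iodepth=3
runtime=5
time_based=1

[filename0]
# assumed bdev name: first namespace of the controller attached as "Nvme0"
filename=Nvme0n1
numjobs=3
EOF

LD_PRELOAD=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev \
  /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf ./bdev.json ./randread-dif.fio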
00:25:41.836 [2024-12-03 00:59:54.253207] rpc.c: 90:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:25:47.170 00:25:47.170 filename0: (groupid=0, jobs=1): err= 0: pid=102599: Tue Dec 3 00:59:59 2024 00:25:47.170 read: IOPS=258, BW=32.3MiB/s (33.8MB/s)(162MiB/5021msec) 00:25:47.170 slat (nsec): min=5983, max=46961, avg=12173.08, stdev=5364.95 00:25:47.170 clat (usec): min=4393, max=53156, avg=11605.31, stdev=10382.85 00:25:47.170 lat (usec): min=4402, max=53162, avg=11617.49, stdev=10383.00 00:25:47.170 clat percentiles (usec): 00:25:47.170 | 1.00th=[ 5276], 5.00th=[ 5669], 10.00th=[ 6128], 20.00th=[ 6521], 00:25:47.170 | 30.00th=[ 7046], 40.00th=[ 8979], 50.00th=[ 9634], 60.00th=[10028], 00:25:47.170 | 70.00th=[10552], 80.00th=[10945], 90.00th=[11731], 95.00th=[47449], 00:25:47.170 | 99.00th=[51119], 99.50th=[51643], 99.90th=[53216], 99.95th=[53216], 00:25:47.170 | 99.99th=[53216] 00:25:47.170 bw ( KiB/s): min=20736, max=41728, per=30.72%, avg=33107.70, stdev=5569.10, samples=10 00:25:47.170 iops : min= 162, max= 326, avg=258.60, stdev=43.49, samples=10 00:25:47.170 lat (msec) : 10=58.56%, 20=34.49%, 50=4.63%, 100=2.31% 00:25:47.170 cpu : usr=93.98%, sys=4.56%, ctx=4, majf=0, minf=0 00:25:47.170 IO depths : 1=1.9%, 2=98.1%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:25:47.170 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:47.170 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:47.170 issued rwts: total=1296,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:47.170 latency : target=0, window=0, percentile=100.00%, depth=3 00:25:47.170 filename0: (groupid=0, jobs=1): err= 0: pid=102600: Tue Dec 3 00:59:59 2024 00:25:47.170 read: IOPS=238, BW=29.9MiB/s (31.3MB/s)(150MiB/5021msec) 00:25:47.170 slat (nsec): min=5676, max=57292, avg=13966.89, stdev=6626.95 00:25:47.170 clat (usec): min=2800, max=52038, avg=12533.40, stdev=12416.65 00:25:47.170 lat (usec): min=2810, max=52047, avg=12547.37, stdev=12416.74 00:25:47.170 clat percentiles (usec): 00:25:47.170 | 1.00th=[ 3425], 5.00th=[ 5735], 10.00th=[ 6259], 20.00th=[ 6915], 00:25:47.170 | 30.00th=[ 8225], 40.00th=[ 8586], 50.00th=[ 8979], 60.00th=[ 9241], 00:25:47.170 | 70.00th=[ 9503], 80.00th=[ 9896], 90.00th=[45351], 95.00th=[49021], 00:25:47.170 | 99.00th=[50594], 99.50th=[51119], 99.90th=[51643], 99.95th=[52167], 00:25:47.170 | 99.99th=[52167] 00:25:47.170 bw ( KiB/s): min=23808, max=45312, per=28.43%, avg=30635.10, stdev=7343.60, samples=10 00:25:47.170 iops : min= 186, max= 354, avg=239.30, stdev=57.32, samples=10 00:25:47.170 lat (msec) : 4=1.42%, 10=81.42%, 20=6.92%, 50=7.42%, 100=2.83% 00:25:47.170 cpu : usr=94.96%, sys=3.80%, ctx=10, majf=0, minf=0 00:25:47.170 IO depths : 1=1.7%, 2=98.3%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:25:47.170 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:47.170 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:47.170 issued rwts: total=1200,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:47.170 latency : target=0, window=0, percentile=100.00%, depth=3 00:25:47.170 filename0: (groupid=0, jobs=1): err= 0: pid=102601: Tue Dec 3 00:59:59 2024 00:25:47.170 read: IOPS=346, BW=43.3MiB/s (45.4MB/s)(216MiB/5001msec) 00:25:47.170 slat (nsec): min=5808, max=48279, avg=9495.92, stdev=5231.28 00:25:47.170 clat (usec): min=3527, max=45784, avg=8643.43, stdev=3897.16 00:25:47.170 lat (usec): min=3533, max=45790, avg=8652.92, stdev=3897.69 00:25:47.170 clat percentiles 
(usec): 00:25:47.170 | 1.00th=[ 3589], 5.00th=[ 3589], 10.00th=[ 3654], 20.00th=[ 3785], 00:25:47.170 | 30.00th=[ 6980], 40.00th=[ 7767], 50.00th=[ 8291], 60.00th=[ 9241], 00:25:47.170 | 70.00th=[11863], 80.00th=[12387], 90.00th=[13173], 95.00th=[13698], 00:25:47.170 | 99.00th=[14484], 99.50th=[15795], 99.90th=[44303], 99.95th=[45876], 00:25:47.170 | 99.99th=[45876] 00:25:47.170 bw ( KiB/s): min=28416, max=52992, per=40.15%, avg=43264.00, stdev=7544.40, samples=9 00:25:47.171 iops : min= 222, max= 414, avg=338.00, stdev=58.94, samples=9 00:25:47.171 lat (msec) : 4=24.03%, 10=39.05%, 20=36.74%, 50=0.17% 00:25:47.171 cpu : usr=92.72%, sys=5.38%, ctx=4, majf=0, minf=9 00:25:47.171 IO depths : 1=32.0%, 2=68.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:25:47.171 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:47.171 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:47.171 issued rwts: total=1731,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:47.171 latency : target=0, window=0, percentile=100.00%, depth=3 00:25:47.171 00:25:47.171 Run status group 0 (all jobs): 00:25:47.171 READ: bw=105MiB/s (110MB/s), 29.9MiB/s-43.3MiB/s (31.3MB/s-45.4MB/s), io=528MiB (554MB), run=5001-5021msec 00:25:47.171 00:59:59 -- target/dif.sh@107 -- # destroy_subsystems 0 00:25:47.171 00:59:59 -- target/dif.sh@43 -- # local sub 00:25:47.171 00:59:59 -- target/dif.sh@45 -- # for sub in "$@" 00:25:47.171 00:59:59 -- target/dif.sh@46 -- # destroy_subsystem 0 00:25:47.171 00:59:59 -- target/dif.sh@36 -- # local sub_id=0 00:25:47.171 00:59:59 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:25:47.171 00:59:59 -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:47.171 00:59:59 -- common/autotest_common.sh@10 -- # set +x 00:25:47.171 00:59:59 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:47.171 00:59:59 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:25:47.171 00:59:59 -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:47.171 00:59:59 -- common/autotest_common.sh@10 -- # set +x 00:25:47.171 00:59:59 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:47.171 00:59:59 -- target/dif.sh@109 -- # NULL_DIF=2 00:25:47.171 00:59:59 -- target/dif.sh@109 -- # bs=4k 00:25:47.171 00:59:59 -- target/dif.sh@109 -- # numjobs=8 00:25:47.171 00:59:59 -- target/dif.sh@109 -- # iodepth=16 00:25:47.171 00:59:59 -- target/dif.sh@109 -- # runtime= 00:25:47.171 00:59:59 -- target/dif.sh@109 -- # files=2 00:25:47.171 00:59:59 -- target/dif.sh@111 -- # create_subsystems 0 1 2 00:25:47.171 00:59:59 -- target/dif.sh@28 -- # local sub 00:25:47.171 00:59:59 -- target/dif.sh@30 -- # for sub in "$@" 00:25:47.171 00:59:59 -- target/dif.sh@31 -- # create_subsystem 0 00:25:47.171 00:59:59 -- target/dif.sh@18 -- # local sub_id=0 00:25:47.171 00:59:59 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 2 00:25:47.171 00:59:59 -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:47.171 00:59:59 -- common/autotest_common.sh@10 -- # set +x 00:25:47.171 bdev_null0 00:25:47.171 00:59:59 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:47.171 00:59:59 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:25:47.171 00:59:59 -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:47.171 00:59:59 -- common/autotest_common.sh@10 -- # set +x 00:25:47.171 00:59:59 -- common/autotest_common.sh@589 -- # 
[[ 0 == 0 ]] 00:25:47.171 00:59:59 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:25:47.171 00:59:59 -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:47.171 00:59:59 -- common/autotest_common.sh@10 -- # set +x 00:25:47.171 00:59:59 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:47.171 00:59:59 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:25:47.171 00:59:59 -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:47.171 00:59:59 -- common/autotest_common.sh@10 -- # set +x 00:25:47.171 [2024-12-03 00:59:59.624789] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:47.171 00:59:59 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:47.171 00:59:59 -- target/dif.sh@30 -- # for sub in "$@" 00:25:47.171 00:59:59 -- target/dif.sh@31 -- # create_subsystem 1 00:25:47.171 00:59:59 -- target/dif.sh@18 -- # local sub_id=1 00:25:47.171 00:59:59 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 2 00:25:47.171 00:59:59 -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:47.171 00:59:59 -- common/autotest_common.sh@10 -- # set +x 00:25:47.171 bdev_null1 00:25:47.171 00:59:59 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:47.171 00:59:59 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:25:47.171 00:59:59 -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:47.171 00:59:59 -- common/autotest_common.sh@10 -- # set +x 00:25:47.171 00:59:59 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:47.171 00:59:59 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:25:47.171 00:59:59 -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:47.171 00:59:59 -- common/autotest_common.sh@10 -- # set +x 00:25:47.171 00:59:59 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:47.171 00:59:59 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:25:47.171 00:59:59 -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:47.171 00:59:59 -- common/autotest_common.sh@10 -- # set +x 00:25:47.171 00:59:59 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:47.171 00:59:59 -- target/dif.sh@30 -- # for sub in "$@" 00:25:47.171 00:59:59 -- target/dif.sh@31 -- # create_subsystem 2 00:25:47.171 00:59:59 -- target/dif.sh@18 -- # local sub_id=2 00:25:47.171 00:59:59 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null2 64 512 --md-size 16 --dif-type 2 00:25:47.171 00:59:59 -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:47.171 00:59:59 -- common/autotest_common.sh@10 -- # set +x 00:25:47.171 bdev_null2 00:25:47.171 00:59:59 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:47.171 00:59:59 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 --serial-number 53313233-2 --allow-any-host 00:25:47.171 00:59:59 -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:47.171 00:59:59 -- common/autotest_common.sh@10 -- # set +x 00:25:47.171 00:59:59 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:47.171 00:59:59 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 bdev_null2 00:25:47.171 00:59:59 -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:47.171 00:59:59 -- 
common/autotest_common.sh@10 -- # set +x 00:25:47.430 00:59:59 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:47.430 00:59:59 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:25:47.430 00:59:59 -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:47.430 00:59:59 -- common/autotest_common.sh@10 -- # set +x 00:25:47.430 00:59:59 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:47.430 00:59:59 -- target/dif.sh@112 -- # create_json_sub_conf 0 1 2 00:25:47.430 00:59:59 -- target/dif.sh@112 -- # fio /dev/fd/62 00:25:47.430 00:59:59 -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 2 00:25:47.430 00:59:59 -- nvmf/common.sh@520 -- # config=() 00:25:47.430 00:59:59 -- nvmf/common.sh@520 -- # local subsystem config 00:25:47.430 00:59:59 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:25:47.430 00:59:59 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:25:47.430 { 00:25:47.430 "params": { 00:25:47.430 "name": "Nvme$subsystem", 00:25:47.430 "trtype": "$TEST_TRANSPORT", 00:25:47.430 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:47.430 "adrfam": "ipv4", 00:25:47.430 "trsvcid": "$NVMF_PORT", 00:25:47.430 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:47.430 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:47.430 "hdgst": ${hdgst:-false}, 00:25:47.430 "ddgst": ${ddgst:-false} 00:25:47.430 }, 00:25:47.430 "method": "bdev_nvme_attach_controller" 00:25:47.430 } 00:25:47.430 EOF 00:25:47.430 )") 00:25:47.430 00:59:59 -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:25:47.430 00:59:59 -- common/autotest_common.sh@1345 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:25:47.430 00:59:59 -- target/dif.sh@82 -- # gen_fio_conf 00:25:47.430 00:59:59 -- common/autotest_common.sh@1326 -- # local fio_dir=/usr/src/fio 00:25:47.430 00:59:59 -- target/dif.sh@54 -- # local file 00:25:47.430 00:59:59 -- nvmf/common.sh@542 -- # cat 00:25:47.430 00:59:59 -- target/dif.sh@56 -- # cat 00:25:47.430 00:59:59 -- common/autotest_common.sh@1328 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:25:47.430 00:59:59 -- common/autotest_common.sh@1328 -- # local sanitizers 00:25:47.430 00:59:59 -- common/autotest_common.sh@1329 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:25:47.430 00:59:59 -- common/autotest_common.sh@1330 -- # shift 00:25:47.430 00:59:59 -- common/autotest_common.sh@1332 -- # local asan_lib= 00:25:47.430 00:59:59 -- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:25:47.430 00:59:59 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:25:47.430 00:59:59 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:25:47.431 { 00:25:47.431 "params": { 00:25:47.431 "name": "Nvme$subsystem", 00:25:47.431 "trtype": "$TEST_TRANSPORT", 00:25:47.431 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:47.431 "adrfam": "ipv4", 00:25:47.431 "trsvcid": "$NVMF_PORT", 00:25:47.431 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:47.431 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:47.431 "hdgst": ${hdgst:-false}, 00:25:47.431 "ddgst": ${ddgst:-false} 00:25:47.431 }, 00:25:47.431 "method": "bdev_nvme_attach_controller" 00:25:47.431 } 00:25:47.431 EOF 00:25:47.431 )") 00:25:47.431 00:59:59 -- common/autotest_common.sh@1334 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:25:47.431 00:59:59 -- common/autotest_common.sh@1334 -- # grep 
libasan 00:25:47.431 00:59:59 -- nvmf/common.sh@542 -- # cat 00:25:47.431 00:59:59 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:25:47.431 00:59:59 -- target/dif.sh@72 -- # (( file = 1 )) 00:25:47.431 00:59:59 -- target/dif.sh@72 -- # (( file <= files )) 00:25:47.431 00:59:59 -- target/dif.sh@73 -- # cat 00:25:47.431 00:59:59 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:25:47.431 00:59:59 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:25:47.431 { 00:25:47.431 "params": { 00:25:47.431 "name": "Nvme$subsystem", 00:25:47.431 "trtype": "$TEST_TRANSPORT", 00:25:47.431 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:47.431 "adrfam": "ipv4", 00:25:47.431 "trsvcid": "$NVMF_PORT", 00:25:47.431 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:47.431 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:47.431 "hdgst": ${hdgst:-false}, 00:25:47.431 "ddgst": ${ddgst:-false} 00:25:47.431 }, 00:25:47.431 "method": "bdev_nvme_attach_controller" 00:25:47.431 } 00:25:47.431 EOF 00:25:47.431 )") 00:25:47.431 00:59:59 -- target/dif.sh@72 -- # (( file++ )) 00:25:47.431 00:59:59 -- target/dif.sh@72 -- # (( file <= files )) 00:25:47.431 00:59:59 -- target/dif.sh@73 -- # cat 00:25:47.431 00:59:59 -- nvmf/common.sh@542 -- # cat 00:25:47.431 00:59:59 -- nvmf/common.sh@544 -- # jq . 00:25:47.431 00:59:59 -- target/dif.sh@72 -- # (( file++ )) 00:25:47.431 00:59:59 -- target/dif.sh@72 -- # (( file <= files )) 00:25:47.431 00:59:59 -- nvmf/common.sh@545 -- # IFS=, 00:25:47.431 00:59:59 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:25:47.431 "params": { 00:25:47.431 "name": "Nvme0", 00:25:47.431 "trtype": "tcp", 00:25:47.431 "traddr": "10.0.0.2", 00:25:47.431 "adrfam": "ipv4", 00:25:47.431 "trsvcid": "4420", 00:25:47.431 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:25:47.431 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:25:47.431 "hdgst": false, 00:25:47.431 "ddgst": false 00:25:47.431 }, 00:25:47.431 "method": "bdev_nvme_attach_controller" 00:25:47.431 },{ 00:25:47.431 "params": { 00:25:47.431 "name": "Nvme1", 00:25:47.431 "trtype": "tcp", 00:25:47.431 "traddr": "10.0.0.2", 00:25:47.431 "adrfam": "ipv4", 00:25:47.431 "trsvcid": "4420", 00:25:47.431 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:25:47.431 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:25:47.431 "hdgst": false, 00:25:47.431 "ddgst": false 00:25:47.431 }, 00:25:47.431 "method": "bdev_nvme_attach_controller" 00:25:47.431 },{ 00:25:47.431 "params": { 00:25:47.431 "name": "Nvme2", 00:25:47.431 "trtype": "tcp", 00:25:47.431 "traddr": "10.0.0.2", 00:25:47.431 "adrfam": "ipv4", 00:25:47.431 "trsvcid": "4420", 00:25:47.431 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:25:47.431 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:25:47.431 "hdgst": false, 00:25:47.431 "ddgst": false 00:25:47.431 }, 00:25:47.431 "method": "bdev_nvme_attach_controller" 00:25:47.431 }' 00:25:47.431 00:59:59 -- common/autotest_common.sh@1334 -- # asan_lib= 00:25:47.431 00:59:59 -- common/autotest_common.sh@1335 -- # [[ -n '' ]] 00:25:47.431 00:59:59 -- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:25:47.431 00:59:59 -- common/autotest_common.sh@1334 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:25:47.431 00:59:59 -- common/autotest_common.sh@1334 -- # grep libclang_rt.asan 00:25:47.431 00:59:59 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:25:47.431 00:59:59 -- common/autotest_common.sh@1334 -- # asan_lib= 00:25:47.431 00:59:59 -- common/autotest_common.sh@1335 -- # [[ -n '' ]] 00:25:47.431 00:59:59 -- 
common/autotest_common.sh@1341 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:25:47.431 00:59:59 -- common/autotest_common.sh@1341 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:25:47.431 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:25:47.431 ... 00:25:47.431 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:25:47.431 ... 00:25:47.431 filename2: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:25:47.431 ... 00:25:47.431 fio-3.35 00:25:47.431 Starting 24 threads 00:25:48.368 [2024-12-03 01:00:00.535604] rpc.c: 181:spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 00:25:48.368 [2024-12-03 01:00:00.535676] rpc.c: 90:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:25:58.331 00:25:58.331 filename0: (groupid=0, jobs=1): err= 0: pid=102697: Tue Dec 3 01:00:10 2024 00:25:58.331 read: IOPS=257, BW=1031KiB/s (1055kB/s)(10.1MiB/10053msec) 00:25:58.331 slat (usec): min=6, max=8017, avg=15.39, stdev=168.40 00:25:58.331 clat (msec): min=21, max=157, avg=61.99, stdev=20.63 00:25:58.331 lat (msec): min=21, max=157, avg=62.01, stdev=20.63 00:25:58.331 clat percentiles (msec): 00:25:58.331 | 1.00th=[ 25], 5.00th=[ 35], 10.00th=[ 39], 20.00th=[ 44], 00:25:58.331 | 30.00th=[ 48], 40.00th=[ 56], 50.00th=[ 61], 60.00th=[ 66], 00:25:58.331 | 70.00th=[ 71], 80.00th=[ 78], 90.00th=[ 89], 95.00th=[ 104], 00:25:58.331 | 99.00th=[ 125], 99.50th=[ 126], 99.90th=[ 157], 99.95th=[ 157], 00:25:58.331 | 99.99th=[ 159] 00:25:58.331 bw ( KiB/s): min= 688, max= 1376, per=4.89%, avg=1029.70, stdev=202.98, samples=20 00:25:58.331 iops : min= 172, max= 344, avg=257.40, stdev=50.74, samples=20 00:25:58.331 lat (msec) : 50=32.82%, 100=61.51%, 250=5.68% 00:25:58.331 cpu : usr=43.13%, sys=0.61%, ctx=1286, majf=0, minf=9 00:25:58.331 IO depths : 1=0.3%, 2=0.7%, 4=7.1%, 8=78.8%, 16=13.1%, 32=0.0%, >=64=0.0% 00:25:58.331 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:58.331 complete : 0=0.0%, 4=89.1%, 8=6.2%, 16=4.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:58.331 issued rwts: total=2590,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:58.331 latency : target=0, window=0, percentile=100.00%, depth=16 00:25:58.331 filename0: (groupid=0, jobs=1): err= 0: pid=102698: Tue Dec 3 01:00:10 2024 00:25:58.331 read: IOPS=224, BW=897KiB/s (918kB/s)(8996KiB/10034msec) 00:25:58.331 slat (usec): min=5, max=4034, avg=13.74, stdev=85.14 00:25:58.331 clat (msec): min=23, max=163, avg=71.16, stdev=22.81 00:25:58.331 lat (msec): min=23, max=163, avg=71.18, stdev=22.81 00:25:58.331 clat percentiles (msec): 00:25:58.331 | 1.00th=[ 35], 5.00th=[ 38], 10.00th=[ 45], 20.00th=[ 50], 00:25:58.331 | 30.00th=[ 59], 40.00th=[ 63], 50.00th=[ 70], 60.00th=[ 74], 00:25:58.331 | 70.00th=[ 83], 80.00th=[ 88], 90.00th=[ 100], 95.00th=[ 115], 00:25:58.331 | 99.00th=[ 136], 99.50th=[ 144], 99.90th=[ 163], 99.95th=[ 163], 00:25:58.331 | 99.99th=[ 163] 00:25:58.331 bw ( KiB/s): min= 512, max= 1264, per=4.24%, avg=893.25, stdev=202.51, samples=20 00:25:58.331 iops : min= 128, max= 316, avg=223.30, stdev=50.64, samples=20 00:25:58.331 lat (msec) : 50=21.74%, 100=68.52%, 250=9.74% 00:25:58.331 cpu : usr=34.53%, sys=0.53%, ctx=907, majf=0, minf=9 00:25:58.331 IO depths : 1=1.7%, 2=3.7%, 4=11.0%, 8=71.9%, 16=11.6%, 
32=0.0%, >=64=0.0% 00:25:58.331 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:58.331 complete : 0=0.0%, 4=90.3%, 8=5.0%, 16=4.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:58.331 issued rwts: total=2249,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:58.331 latency : target=0, window=0, percentile=100.00%, depth=16 00:25:58.331 filename0: (groupid=0, jobs=1): err= 0: pid=102699: Tue Dec 3 01:00:10 2024 00:25:58.331 read: IOPS=202, BW=809KiB/s (829kB/s)(8116KiB/10028msec) 00:25:58.331 slat (usec): min=3, max=6564, avg=18.57, stdev=198.26 00:25:58.331 clat (msec): min=31, max=173, avg=78.89, stdev=26.74 00:25:58.331 lat (msec): min=31, max=173, avg=78.91, stdev=26.74 00:25:58.331 clat percentiles (msec): 00:25:58.331 | 1.00th=[ 34], 5.00th=[ 41], 10.00th=[ 47], 20.00th=[ 57], 00:25:58.331 | 30.00th=[ 62], 40.00th=[ 68], 50.00th=[ 77], 60.00th=[ 85], 00:25:58.331 | 70.00th=[ 92], 80.00th=[ 101], 90.00th=[ 117], 95.00th=[ 131], 00:25:58.331 | 99.00th=[ 150], 99.50th=[ 150], 99.90th=[ 174], 99.95th=[ 174], 00:25:58.331 | 99.99th=[ 174] 00:25:58.331 bw ( KiB/s): min= 509, max= 1200, per=3.82%, avg=804.80, stdev=204.41, samples=20 00:25:58.331 iops : min= 127, max= 300, avg=201.15, stdev=51.16, samples=20 00:25:58.331 lat (msec) : 50=14.49%, 100=66.14%, 250=19.37% 00:25:58.331 cpu : usr=34.42%, sys=0.39%, ctx=1116, majf=0, minf=9 00:25:58.331 IO depths : 1=0.7%, 2=1.7%, 4=7.5%, 8=76.4%, 16=13.7%, 32=0.0%, >=64=0.0% 00:25:58.331 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:58.331 complete : 0=0.0%, 4=89.5%, 8=6.8%, 16=3.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:58.331 issued rwts: total=2029,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:58.331 latency : target=0, window=0, percentile=100.00%, depth=16 00:25:58.331 filename0: (groupid=0, jobs=1): err= 0: pid=102700: Tue Dec 3 01:00:10 2024 00:25:58.331 read: IOPS=250, BW=1002KiB/s (1026kB/s)(9.86MiB/10073msec) 00:25:58.331 slat (nsec): min=4730, max=60211, avg=11306.49, stdev=6792.25 00:25:58.331 clat (usec): min=1428, max=170066, avg=63651.62, stdev=24998.96 00:25:58.331 lat (usec): min=1435, max=170096, avg=63662.93, stdev=25000.02 00:25:58.331 clat percentiles (usec): 00:25:58.331 | 1.00th=[ 1549], 5.00th=[ 19268], 10.00th=[ 35914], 20.00th=[ 44827], 00:25:58.331 | 30.00th=[ 49546], 40.00th=[ 58983], 50.00th=[ 61604], 60.00th=[ 69731], 00:25:58.331 | 70.00th=[ 73925], 80.00th=[ 83362], 90.00th=[ 94897], 95.00th=[105382], 00:25:58.331 | 99.00th=[123208], 99.50th=[143655], 99.90th=[170918], 99.95th=[170918], 00:25:58.331 | 99.99th=[170918] 00:25:58.331 bw ( KiB/s): min= 640, max= 1667, per=4.77%, avg=1003.05, stdev=226.50, samples=20 00:25:58.331 iops : min= 160, max= 416, avg=250.65, stdev=56.48, samples=20 00:25:58.331 lat (msec) : 2=1.27%, 4=1.27%, 10=1.27%, 20=1.27%, 50=25.45% 00:25:58.331 lat (msec) : 100=62.58%, 250=6.90% 00:25:58.331 cpu : usr=33.83%, sys=0.44%, ctx=955, majf=0, minf=9 00:25:58.331 IO depths : 1=0.7%, 2=1.6%, 4=7.2%, 8=77.3%, 16=13.2%, 32=0.0%, >=64=0.0% 00:25:58.331 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:58.331 complete : 0=0.0%, 4=89.5%, 8=6.4%, 16=4.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:58.331 issued rwts: total=2523,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:58.331 latency : target=0, window=0, percentile=100.00%, depth=16 00:25:58.331 filename0: (groupid=0, jobs=1): err= 0: pid=102701: Tue Dec 3 01:00:10 2024 00:25:58.331 read: IOPS=221, BW=886KiB/s (907kB/s)(8904KiB/10050msec) 00:25:58.331 slat (usec): min=6, max=8018, avg=15.71, 
stdev=169.86 00:25:58.331 clat (msec): min=23, max=139, avg=72.10, stdev=21.08 00:25:58.331 lat (msec): min=23, max=139, avg=72.12, stdev=21.08 00:25:58.331 clat percentiles (msec): 00:25:58.331 | 1.00th=[ 36], 5.00th=[ 44], 10.00th=[ 48], 20.00th=[ 57], 00:25:58.331 | 30.00th=[ 60], 40.00th=[ 63], 50.00th=[ 70], 60.00th=[ 74], 00:25:58.331 | 70.00th=[ 84], 80.00th=[ 91], 90.00th=[ 100], 95.00th=[ 111], 00:25:58.331 | 99.00th=[ 133], 99.50th=[ 136], 99.90th=[ 140], 99.95th=[ 140], 00:25:58.331 | 99.99th=[ 140] 00:25:58.331 bw ( KiB/s): min= 640, max= 1088, per=4.20%, avg=884.05, stdev=122.46, samples=20 00:25:58.331 iops : min= 160, max= 272, avg=221.00, stdev=30.63, samples=20 00:25:58.331 lat (msec) : 50=14.11%, 100=76.86%, 250=9.03% 00:25:58.331 cpu : usr=33.76%, sys=0.52%, ctx=891, majf=0, minf=9 00:25:58.331 IO depths : 1=1.3%, 2=3.1%, 4=11.2%, 8=72.1%, 16=12.3%, 32=0.0%, >=64=0.0% 00:25:58.331 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:58.331 complete : 0=0.0%, 4=90.3%, 8=5.0%, 16=4.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:58.331 issued rwts: total=2226,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:58.331 latency : target=0, window=0, percentile=100.00%, depth=16 00:25:58.331 filename0: (groupid=0, jobs=1): err= 0: pid=102702: Tue Dec 3 01:00:10 2024 00:25:58.331 read: IOPS=199, BW=797KiB/s (816kB/s)(7984KiB/10017msec) 00:25:58.331 slat (usec): min=4, max=4034, avg=16.41, stdev=127.26 00:25:58.331 clat (msec): min=19, max=192, avg=80.17, stdev=24.24 00:25:58.331 lat (msec): min=19, max=192, avg=80.19, stdev=24.24 00:25:58.331 clat percentiles (msec): 00:25:58.331 | 1.00th=[ 36], 5.00th=[ 48], 10.00th=[ 52], 20.00th=[ 61], 00:25:58.331 | 30.00th=[ 65], 40.00th=[ 71], 50.00th=[ 81], 60.00th=[ 87], 00:25:58.331 | 70.00th=[ 93], 80.00th=[ 96], 90.00th=[ 108], 95.00th=[ 121], 00:25:58.331 | 99.00th=[ 157], 99.50th=[ 167], 99.90th=[ 192], 99.95th=[ 192], 00:25:58.331 | 99.99th=[ 192] 00:25:58.331 bw ( KiB/s): min= 512, max= 1024, per=3.77%, avg=793.16, stdev=149.07, samples=19 00:25:58.331 iops : min= 128, max= 256, avg=198.26, stdev=37.28, samples=19 00:25:58.331 lat (msec) : 20=0.80%, 50=6.51%, 100=76.25%, 250=16.43% 00:25:58.331 cpu : usr=36.94%, sys=0.48%, ctx=1024, majf=0, minf=9 00:25:58.331 IO depths : 1=2.3%, 2=5.5%, 4=15.1%, 8=66.2%, 16=10.9%, 32=0.0%, >=64=0.0% 00:25:58.331 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:58.331 complete : 0=0.0%, 4=91.6%, 8=3.4%, 16=5.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:58.331 issued rwts: total=1996,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:58.331 latency : target=0, window=0, percentile=100.00%, depth=16 00:25:58.331 filename0: (groupid=0, jobs=1): err= 0: pid=102703: Tue Dec 3 01:00:10 2024 00:25:58.331 read: IOPS=194, BW=779KiB/s (798kB/s)(7820KiB/10035msec) 00:25:58.331 slat (usec): min=4, max=8028, avg=18.22, stdev=202.76 00:25:58.331 clat (msec): min=31, max=156, avg=81.90, stdev=24.87 00:25:58.331 lat (msec): min=31, max=156, avg=81.92, stdev=24.87 00:25:58.331 clat percentiles (msec): 00:25:58.331 | 1.00th=[ 37], 5.00th=[ 47], 10.00th=[ 53], 20.00th=[ 61], 00:25:58.331 | 30.00th=[ 64], 40.00th=[ 72], 50.00th=[ 82], 60.00th=[ 89], 00:25:58.331 | 70.00th=[ 94], 80.00th=[ 102], 90.00th=[ 115], 95.00th=[ 124], 00:25:58.331 | 99.00th=[ 153], 99.50th=[ 153], 99.90th=[ 157], 99.95th=[ 157], 00:25:58.331 | 99.99th=[ 157] 00:25:58.331 bw ( KiB/s): min= 512, max= 1200, per=3.68%, avg=775.35, stdev=182.62, samples=20 00:25:58.331 iops : min= 128, max= 300, avg=193.80, stdev=45.69, 
samples=20 00:25:58.332 lat (msec) : 50=7.57%, 100=71.05%, 250=21.38% 00:25:58.332 cpu : usr=40.73%, sys=0.66%, ctx=1252, majf=0, minf=9 00:25:58.332 IO depths : 1=3.0%, 2=6.4%, 4=15.9%, 8=64.9%, 16=9.8%, 32=0.0%, >=64=0.0% 00:25:58.332 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:58.332 complete : 0=0.0%, 4=91.7%, 8=3.0%, 16=5.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:58.332 issued rwts: total=1955,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:58.332 latency : target=0, window=0, percentile=100.00%, depth=16 00:25:58.332 filename0: (groupid=0, jobs=1): err= 0: pid=102704: Tue Dec 3 01:00:10 2024 00:25:58.332 read: IOPS=222, BW=889KiB/s (910kB/s)(8900KiB/10011msec) 00:25:58.332 slat (usec): min=5, max=4015, avg=13.86, stdev=85.20 00:25:58.332 clat (msec): min=19, max=146, avg=71.90, stdev=26.71 00:25:58.332 lat (msec): min=19, max=146, avg=71.91, stdev=26.71 00:25:58.332 clat percentiles (msec): 00:25:58.332 | 1.00th=[ 30], 5.00th=[ 37], 10.00th=[ 42], 20.00th=[ 47], 00:25:58.332 | 30.00th=[ 53], 40.00th=[ 61], 50.00th=[ 66], 60.00th=[ 77], 00:25:58.332 | 70.00th=[ 88], 80.00th=[ 96], 90.00th=[ 110], 95.00th=[ 121], 00:25:58.332 | 99.00th=[ 136], 99.50th=[ 144], 99.90th=[ 148], 99.95th=[ 148], 00:25:58.332 | 99.99th=[ 148] 00:25:58.332 bw ( KiB/s): min= 560, max= 1424, per=4.20%, avg=883.60, stdev=258.70, samples=20 00:25:58.332 iops : min= 140, max= 356, avg=220.90, stdev=64.68, samples=20 00:25:58.332 lat (msec) : 20=0.18%, 50=27.37%, 100=55.78%, 250=16.67% 00:25:58.332 cpu : usr=37.97%, sys=0.63%, ctx=1081, majf=0, minf=9 00:25:58.332 IO depths : 1=0.6%, 2=1.6%, 4=7.5%, 8=76.4%, 16=13.9%, 32=0.0%, >=64=0.0% 00:25:58.332 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:58.332 complete : 0=0.0%, 4=89.8%, 8=6.6%, 16=3.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:58.332 issued rwts: total=2225,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:58.332 latency : target=0, window=0, percentile=100.00%, depth=16 00:25:58.332 filename1: (groupid=0, jobs=1): err= 0: pid=102705: Tue Dec 3 01:00:10 2024 00:25:58.332 read: IOPS=203, BW=815KiB/s (834kB/s)(8160KiB/10017msec) 00:25:58.332 slat (usec): min=3, max=8033, avg=20.95, stdev=255.06 00:25:58.332 clat (msec): min=27, max=173, avg=78.35, stdev=25.98 00:25:58.332 lat (msec): min=27, max=173, avg=78.37, stdev=25.98 00:25:58.332 clat percentiles (msec): 00:25:58.332 | 1.00th=[ 35], 5.00th=[ 45], 10.00th=[ 47], 20.00th=[ 58], 00:25:58.332 | 30.00th=[ 61], 40.00th=[ 67], 50.00th=[ 72], 60.00th=[ 83], 00:25:58.332 | 70.00th=[ 93], 80.00th=[ 102], 90.00th=[ 117], 95.00th=[ 129], 00:25:58.332 | 99.00th=[ 144], 99.50th=[ 144], 99.90th=[ 174], 99.95th=[ 174], 00:25:58.332 | 99.99th=[ 174] 00:25:58.332 bw ( KiB/s): min= 510, max= 1142, per=3.87%, avg=814.65, stdev=193.83, samples=20 00:25:58.332 iops : min= 127, max= 285, avg=203.60, stdev=48.47, samples=20 00:25:58.332 lat (msec) : 50=13.77%, 100=65.69%, 250=20.54% 00:25:58.332 cpu : usr=33.22%, sys=0.44%, ctx=928, majf=0, minf=9 00:25:58.332 IO depths : 1=1.4%, 2=3.1%, 4=11.1%, 8=72.2%, 16=12.2%, 32=0.0%, >=64=0.0% 00:25:58.332 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:58.332 complete : 0=0.0%, 4=90.3%, 8=5.1%, 16=4.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:58.332 issued rwts: total=2040,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:58.332 latency : target=0, window=0, percentile=100.00%, depth=16 00:25:58.332 filename1: (groupid=0, jobs=1): err= 0: pid=102706: Tue Dec 3 01:00:10 2024 00:25:58.332 read: IOPS=227, BW=910KiB/s 
(931kB/s)(9128KiB/10036msec) 00:25:58.332 slat (usec): min=5, max=4048, avg=16.64, stdev=145.95 00:25:58.332 clat (msec): min=28, max=170, avg=70.23, stdev=23.28 00:25:58.332 lat (msec): min=28, max=170, avg=70.25, stdev=23.28 00:25:58.332 clat percentiles (msec): 00:25:58.332 | 1.00th=[ 32], 5.00th=[ 40], 10.00th=[ 43], 20.00th=[ 50], 00:25:58.332 | 30.00th=[ 55], 40.00th=[ 61], 50.00th=[ 67], 60.00th=[ 74], 00:25:58.332 | 70.00th=[ 82], 80.00th=[ 92], 90.00th=[ 99], 95.00th=[ 113], 00:25:58.332 | 99.00th=[ 138], 99.50th=[ 153], 99.90th=[ 171], 99.95th=[ 171], 00:25:58.332 | 99.99th=[ 171] 00:25:58.332 bw ( KiB/s): min= 528, max= 1200, per=4.31%, avg=906.40, stdev=201.39, samples=20 00:25:58.332 iops : min= 132, max= 300, avg=226.60, stdev=50.35, samples=20 00:25:58.332 lat (msec) : 50=20.29%, 100=70.16%, 250=9.55% 00:25:58.332 cpu : usr=44.68%, sys=0.48%, ctx=1305, majf=0, minf=9 00:25:58.332 IO depths : 1=1.2%, 2=2.9%, 4=10.2%, 8=73.3%, 16=12.4%, 32=0.0%, >=64=0.0% 00:25:58.332 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:58.332 complete : 0=0.0%, 4=90.3%, 8=5.2%, 16=4.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:58.332 issued rwts: total=2282,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:58.332 latency : target=0, window=0, percentile=100.00%, depth=16 00:25:58.332 filename1: (groupid=0, jobs=1): err= 0: pid=102707: Tue Dec 3 01:00:10 2024 00:25:58.332 read: IOPS=200, BW=800KiB/s (819kB/s)(8020KiB/10022msec) 00:25:58.332 slat (usec): min=4, max=3301, avg=15.57, stdev=92.48 00:25:58.332 clat (msec): min=23, max=167, avg=79.79, stdev=25.62 00:25:58.332 lat (msec): min=23, max=167, avg=79.81, stdev=25.62 00:25:58.332 clat percentiles (msec): 00:25:58.332 | 1.00th=[ 33], 5.00th=[ 42], 10.00th=[ 49], 20.00th=[ 56], 00:25:58.332 | 30.00th=[ 63], 40.00th=[ 71], 50.00th=[ 83], 60.00th=[ 85], 00:25:58.332 | 70.00th=[ 93], 80.00th=[ 103], 90.00th=[ 113], 95.00th=[ 129], 00:25:58.332 | 99.00th=[ 144], 99.50th=[ 144], 99.90th=[ 159], 99.95th=[ 167], 00:25:58.332 | 99.99th=[ 167] 00:25:58.332 bw ( KiB/s): min= 512, max= 1208, per=3.78%, avg=795.10, stdev=200.72, samples=20 00:25:58.332 iops : min= 128, max= 302, avg=198.75, stdev=50.21, samples=20 00:25:58.332 lat (msec) : 50=11.37%, 100=67.23%, 250=21.40% 00:25:58.332 cpu : usr=40.37%, sys=0.59%, ctx=1127, majf=0, minf=9 00:25:58.332 IO depths : 1=2.7%, 2=5.9%, 4=15.1%, 8=65.7%, 16=10.6%, 32=0.0%, >=64=0.0% 00:25:58.332 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:58.332 complete : 0=0.0%, 4=91.6%, 8=3.6%, 16=4.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:58.332 issued rwts: total=2005,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:58.332 latency : target=0, window=0, percentile=100.00%, depth=16 00:25:58.332 filename1: (groupid=0, jobs=1): err= 0: pid=102708: Tue Dec 3 01:00:10 2024 00:25:58.332 read: IOPS=190, BW=764KiB/s (782kB/s)(7656KiB/10024msec) 00:25:58.332 slat (nsec): min=3809, max=46819, avg=12574.86, stdev=7877.91 00:25:58.332 clat (msec): min=39, max=182, avg=83.67, stdev=24.38 00:25:58.332 lat (msec): min=39, max=182, avg=83.69, stdev=24.38 00:25:58.332 clat percentiles (msec): 00:25:58.332 | 1.00th=[ 41], 5.00th=[ 49], 10.00th=[ 55], 20.00th=[ 62], 00:25:58.332 | 30.00th=[ 67], 40.00th=[ 77], 50.00th=[ 84], 60.00th=[ 89], 00:25:58.332 | 70.00th=[ 95], 80.00th=[ 103], 90.00th=[ 118], 95.00th=[ 124], 00:25:58.332 | 99.00th=[ 159], 99.50th=[ 161], 99.90th=[ 161], 99.95th=[ 184], 00:25:58.332 | 99.99th=[ 184] 00:25:58.332 bw ( KiB/s): min= 512, max= 1024, per=3.60%, avg=758.70, stdev=162.99, 
samples=20 00:25:58.332 iops : min= 128, max= 256, avg=189.65, stdev=40.79, samples=20 00:25:58.332 lat (msec) : 50=6.32%, 100=71.11%, 250=22.57% 00:25:58.332 cpu : usr=41.56%, sys=0.69%, ctx=1267, majf=0, minf=9 00:25:58.332 IO depths : 1=2.2%, 2=5.5%, 4=16.2%, 8=65.2%, 16=10.9%, 32=0.0%, >=64=0.0% 00:25:58.332 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:58.332 complete : 0=0.0%, 4=91.6%, 8=3.2%, 16=5.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:58.332 issued rwts: total=1914,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:58.332 latency : target=0, window=0, percentile=100.00%, depth=16 00:25:58.332 filename1: (groupid=0, jobs=1): err= 0: pid=102709: Tue Dec 3 01:00:10 2024 00:25:58.332 read: IOPS=262, BW=1049KiB/s (1074kB/s)(10.3MiB/10041msec) 00:25:58.332 slat (usec): min=3, max=4020, avg=13.47, stdev=93.45 00:25:58.332 clat (msec): min=9, max=131, avg=60.95, stdev=20.78 00:25:58.332 lat (msec): min=9, max=131, avg=60.96, stdev=20.78 00:25:58.332 clat percentiles (msec): 00:25:58.332 | 1.00th=[ 14], 5.00th=[ 36], 10.00th=[ 39], 20.00th=[ 42], 00:25:58.332 | 30.00th=[ 47], 40.00th=[ 53], 50.00th=[ 59], 60.00th=[ 63], 00:25:58.332 | 70.00th=[ 69], 80.00th=[ 82], 90.00th=[ 91], 95.00th=[ 97], 00:25:58.332 | 99.00th=[ 116], 99.50th=[ 118], 99.90th=[ 123], 99.95th=[ 123], 00:25:58.332 | 99.99th=[ 132] 00:25:58.332 bw ( KiB/s): min= 768, max= 1424, per=4.97%, avg=1046.45, stdev=202.45, samples=20 00:25:58.332 iops : min= 192, max= 356, avg=261.60, stdev=50.62, samples=20 00:25:58.332 lat (msec) : 10=0.61%, 20=0.61%, 50=36.25%, 100=59.12%, 250=3.42% 00:25:58.332 cpu : usr=48.36%, sys=0.88%, ctx=1387, majf=0, minf=9 00:25:58.332 IO depths : 1=0.4%, 2=1.0%, 4=7.0%, 8=78.5%, 16=13.1%, 32=0.0%, >=64=0.0% 00:25:58.332 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:58.332 complete : 0=0.0%, 4=89.3%, 8=6.1%, 16=4.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:58.332 issued rwts: total=2632,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:58.332 latency : target=0, window=0, percentile=100.00%, depth=16 00:25:58.332 filename1: (groupid=0, jobs=1): err= 0: pid=102710: Tue Dec 3 01:00:10 2024 00:25:58.332 read: IOPS=247, BW=990KiB/s (1014kB/s)(9936KiB/10037msec) 00:25:58.332 slat (usec): min=4, max=6998, avg=22.26, stdev=248.67 00:25:58.332 clat (msec): min=10, max=135, avg=64.44, stdev=21.61 00:25:58.332 lat (msec): min=10, max=135, avg=64.47, stdev=21.62 00:25:58.332 clat percentiles (msec): 00:25:58.332 | 1.00th=[ 17], 5.00th=[ 35], 10.00th=[ 40], 20.00th=[ 45], 00:25:58.332 | 30.00th=[ 51], 40.00th=[ 57], 50.00th=[ 62], 60.00th=[ 68], 00:25:58.332 | 70.00th=[ 77], 80.00th=[ 85], 90.00th=[ 96], 95.00th=[ 101], 00:25:58.332 | 99.00th=[ 116], 99.50th=[ 125], 99.90th=[ 136], 99.95th=[ 136], 00:25:58.332 | 99.99th=[ 136] 00:25:58.332 bw ( KiB/s): min= 656, max= 1408, per=4.69%, avg=987.20, stdev=209.97, samples=20 00:25:58.332 iops : min= 164, max= 352, avg=246.80, stdev=52.49, samples=20 00:25:58.332 lat (msec) : 20=1.29%, 50=28.70%, 100=64.81%, 250=5.19% 00:25:58.332 cpu : usr=42.75%, sys=0.61%, ctx=1253, majf=0, minf=9 00:25:58.332 IO depths : 1=0.5%, 2=1.1%, 4=6.4%, 8=78.4%, 16=13.5%, 32=0.0%, >=64=0.0% 00:25:58.332 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:58.332 complete : 0=0.0%, 4=89.1%, 8=6.9%, 16=4.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:58.332 issued rwts: total=2484,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:58.332 latency : target=0, window=0, percentile=100.00%, depth=16 00:25:58.332 filename1: (groupid=0, jobs=1): err= 
0: pid=102711: Tue Dec 3 01:00:10 2024 00:25:58.332 read: IOPS=195, BW=781KiB/s (800kB/s)(7820KiB/10015msec) 00:25:58.332 slat (usec): min=4, max=8026, avg=24.23, stdev=286.51 00:25:58.332 clat (msec): min=19, max=219, avg=81.71, stdev=27.87 00:25:58.333 lat (msec): min=19, max=219, avg=81.74, stdev=27.88 00:25:58.333 clat percentiles (msec): 00:25:58.333 | 1.00th=[ 34], 5.00th=[ 48], 10.00th=[ 55], 20.00th=[ 60], 00:25:58.333 | 30.00th=[ 64], 40.00th=[ 71], 50.00th=[ 79], 60.00th=[ 84], 00:25:58.333 | 70.00th=[ 91], 80.00th=[ 103], 90.00th=[ 118], 95.00th=[ 129], 00:25:58.333 | 99.00th=[ 176], 99.50th=[ 197], 99.90th=[ 220], 99.95th=[ 220], 00:25:58.333 | 99.99th=[ 220] 00:25:58.333 bw ( KiB/s): min= 512, max= 1152, per=3.72%, avg=782.63, stdev=176.18, samples=19 00:25:58.333 iops : min= 128, max= 288, avg=195.63, stdev=44.07, samples=19 00:25:58.333 lat (msec) : 20=0.77%, 50=7.11%, 100=71.82%, 250=20.31% 00:25:58.333 cpu : usr=38.04%, sys=0.64%, ctx=1025, majf=0, minf=9 00:25:58.333 IO depths : 1=2.1%, 2=4.6%, 4=13.9%, 8=68.3%, 16=11.0%, 32=0.0%, >=64=0.0% 00:25:58.333 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:58.333 complete : 0=0.0%, 4=90.7%, 8=4.2%, 16=5.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:58.333 issued rwts: total=1955,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:58.333 latency : target=0, window=0, percentile=100.00%, depth=16 00:25:58.333 filename1: (groupid=0, jobs=1): err= 0: pid=102712: Tue Dec 3 01:00:10 2024 00:25:58.333 read: IOPS=192, BW=771KiB/s (790kB/s)(7728KiB/10019msec) 00:25:58.333 slat (usec): min=3, max=8029, avg=28.68, stdev=354.20 00:25:58.333 clat (msec): min=35, max=163, avg=82.79, stdev=23.27 00:25:58.333 lat (msec): min=35, max=163, avg=82.81, stdev=23.27 00:25:58.333 clat percentiles (msec): 00:25:58.333 | 1.00th=[ 44], 5.00th=[ 48], 10.00th=[ 57], 20.00th=[ 61], 00:25:58.333 | 30.00th=[ 66], 40.00th=[ 73], 50.00th=[ 84], 60.00th=[ 88], 00:25:58.333 | 70.00th=[ 95], 80.00th=[ 105], 90.00th=[ 117], 95.00th=[ 122], 00:25:58.333 | 99.00th=[ 144], 99.50th=[ 144], 99.90th=[ 165], 99.95th=[ 165], 00:25:58.333 | 99.99th=[ 165] 00:25:58.333 bw ( KiB/s): min= 512, max= 1024, per=3.64%, avg=766.30, stdev=145.93, samples=20 00:25:58.333 iops : min= 128, max= 256, avg=191.55, stdev=36.50, samples=20 00:25:58.333 lat (msec) : 50=5.80%, 100=70.96%, 250=23.24% 00:25:58.333 cpu : usr=34.06%, sys=0.48%, ctx=938, majf=0, minf=9 00:25:58.333 IO depths : 1=2.0%, 2=4.5%, 4=14.6%, 8=67.8%, 16=11.1%, 32=0.0%, >=64=0.0% 00:25:58.333 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:58.333 complete : 0=0.0%, 4=90.9%, 8=4.0%, 16=5.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:58.333 issued rwts: total=1932,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:58.333 latency : target=0, window=0, percentile=100.00%, depth=16 00:25:58.333 filename2: (groupid=0, jobs=1): err= 0: pid=102713: Tue Dec 3 01:00:10 2024 00:25:58.333 read: IOPS=241, BW=965KiB/s (988kB/s)(9712KiB/10062msec) 00:25:58.333 slat (usec): min=4, max=8017, avg=15.16, stdev=162.64 00:25:58.333 clat (msec): min=4, max=145, avg=66.15, stdev=22.11 00:25:58.333 lat (msec): min=4, max=145, avg=66.17, stdev=22.11 00:25:58.333 clat percentiles (msec): 00:25:58.333 | 1.00th=[ 6], 5.00th=[ 36], 10.00th=[ 44], 20.00th=[ 48], 00:25:58.333 | 30.00th=[ 58], 40.00th=[ 61], 50.00th=[ 64], 60.00th=[ 70], 00:25:58.333 | 70.00th=[ 73], 80.00th=[ 83], 90.00th=[ 95], 95.00th=[ 107], 00:25:58.333 | 99.00th=[ 129], 99.50th=[ 132], 99.90th=[ 146], 99.95th=[ 146], 00:25:58.333 | 99.99th=[ 146] 
00:25:58.333 bw ( KiB/s): min= 768, max= 1280, per=4.58%, avg=964.45, stdev=144.09, samples=20 00:25:58.333 iops : min= 192, max= 320, avg=241.05, stdev=36.08, samples=20 00:25:58.333 lat (msec) : 10=1.98%, 20=0.66%, 50=20.14%, 100=70.22%, 250=7.00% 00:25:58.333 cpu : usr=38.71%, sys=0.49%, ctx=1015, majf=0, minf=9 00:25:58.333 IO depths : 1=0.9%, 2=2.0%, 4=7.9%, 8=76.4%, 16=12.7%, 32=0.0%, >=64=0.0% 00:25:58.333 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:58.333 complete : 0=0.0%, 4=89.6%, 8=6.0%, 16=4.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:58.333 issued rwts: total=2428,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:58.333 latency : target=0, window=0, percentile=100.00%, depth=16 00:25:58.333 filename2: (groupid=0, jobs=1): err= 0: pid=102714: Tue Dec 3 01:00:10 2024 00:25:58.333 read: IOPS=196, BW=787KiB/s (806kB/s)(7904KiB/10038msec) 00:25:58.333 slat (usec): min=4, max=8028, avg=19.13, stdev=210.79 00:25:58.333 clat (msec): min=35, max=172, avg=81.03, stdev=24.38 00:25:58.333 lat (msec): min=35, max=172, avg=81.05, stdev=24.38 00:25:58.333 clat percentiles (msec): 00:25:58.333 | 1.00th=[ 40], 5.00th=[ 50], 10.00th=[ 57], 20.00th=[ 61], 00:25:58.333 | 30.00th=[ 65], 40.00th=[ 68], 50.00th=[ 74], 60.00th=[ 85], 00:25:58.333 | 70.00th=[ 94], 80.00th=[ 103], 90.00th=[ 115], 95.00th=[ 126], 00:25:58.333 | 99.00th=[ 146], 99.50th=[ 150], 99.90th=[ 174], 99.95th=[ 174], 00:25:58.333 | 99.99th=[ 174] 00:25:58.333 bw ( KiB/s): min= 508, max= 1024, per=3.74%, avg=786.00, stdev=146.89, samples=20 00:25:58.333 iops : min= 127, max= 256, avg=196.50, stdev=36.72, samples=20 00:25:58.333 lat (msec) : 50=5.82%, 100=70.45%, 250=23.73% 00:25:58.333 cpu : usr=37.64%, sys=0.51%, ctx=1230, majf=0, minf=9 00:25:58.333 IO depths : 1=2.9%, 2=6.6%, 4=17.1%, 8=63.4%, 16=10.0%, 32=0.0%, >=64=0.0% 00:25:58.333 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:58.333 complete : 0=0.0%, 4=92.0%, 8=2.8%, 16=5.3%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:58.333 issued rwts: total=1976,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:58.333 latency : target=0, window=0, percentile=100.00%, depth=16 00:25:58.333 filename2: (groupid=0, jobs=1): err= 0: pid=102715: Tue Dec 3 01:00:10 2024 00:25:58.333 read: IOPS=216, BW=867KiB/s (888kB/s)(8700KiB/10037msec) 00:25:58.333 slat (usec): min=5, max=8055, avg=23.72, stdev=297.88 00:25:58.333 clat (msec): min=23, max=154, avg=73.60, stdev=21.68 00:25:58.333 lat (msec): min=23, max=154, avg=73.62, stdev=21.67 00:25:58.333 clat percentiles (msec): 00:25:58.333 | 1.00th=[ 37], 5.00th=[ 46], 10.00th=[ 48], 20.00th=[ 59], 00:25:58.333 | 30.00th=[ 61], 40.00th=[ 65], 50.00th=[ 70], 60.00th=[ 75], 00:25:58.333 | 70.00th=[ 83], 80.00th=[ 91], 90.00th=[ 102], 95.00th=[ 116], 00:25:58.333 | 99.00th=[ 140], 99.50th=[ 144], 99.90th=[ 155], 99.95th=[ 155], 00:25:58.333 | 99.99th=[ 155] 00:25:58.333 bw ( KiB/s): min= 640, max= 1200, per=4.10%, avg=863.05, stdev=134.57, samples=20 00:25:58.333 iops : min= 160, max= 300, avg=215.70, stdev=33.65, samples=20 00:25:58.333 lat (msec) : 50=13.33%, 100=76.18%, 250=10.48% 00:25:58.333 cpu : usr=33.60%, sys=0.37%, ctx=936, majf=0, minf=9 00:25:58.333 IO depths : 1=2.0%, 2=4.6%, 4=13.2%, 8=69.0%, 16=11.2%, 32=0.0%, >=64=0.0% 00:25:58.333 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:58.333 complete : 0=0.0%, 4=90.7%, 8=4.3%, 16=5.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:58.333 issued rwts: total=2175,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:58.333 latency : target=0, window=0, 
percentile=100.00%, depth=16 00:25:58.333 filename2: (groupid=0, jobs=1): err= 0: pid=102716: Tue Dec 3 01:00:10 2024 00:25:58.333 read: IOPS=218, BW=874KiB/s (895kB/s)(8780KiB/10041msec) 00:25:58.333 slat (usec): min=4, max=4644, avg=13.93, stdev=99.13 00:25:58.333 clat (msec): min=24, max=152, avg=73.04, stdev=26.69 00:25:58.333 lat (msec): min=24, max=152, avg=73.05, stdev=26.69 00:25:58.333 clat percentiles (msec): 00:25:58.333 | 1.00th=[ 32], 5.00th=[ 36], 10.00th=[ 41], 20.00th=[ 48], 00:25:58.333 | 30.00th=[ 56], 40.00th=[ 61], 50.00th=[ 71], 60.00th=[ 80], 00:25:58.333 | 70.00th=[ 89], 80.00th=[ 96], 90.00th=[ 112], 95.00th=[ 120], 00:25:58.333 | 99.00th=[ 140], 99.50th=[ 142], 99.90th=[ 153], 99.95th=[ 153], 00:25:58.333 | 99.99th=[ 153] 00:25:58.333 bw ( KiB/s): min= 512, max= 1277, per=4.14%, avg=871.50, stdev=260.04, samples=20 00:25:58.333 iops : min= 128, max= 319, avg=217.85, stdev=65.00, samples=20 00:25:58.333 lat (msec) : 50=24.19%, 100=62.92%, 250=12.89% 00:25:58.333 cpu : usr=42.87%, sys=0.64%, ctx=967, majf=0, minf=9 00:25:58.333 IO depths : 1=1.7%, 2=3.6%, 4=10.8%, 8=72.1%, 16=11.7%, 32=0.0%, >=64=0.0% 00:25:58.333 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:58.333 complete : 0=0.0%, 4=90.3%, 8=5.0%, 16=4.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:58.333 issued rwts: total=2195,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:58.333 latency : target=0, window=0, percentile=100.00%, depth=16 00:25:58.333 filename2: (groupid=0, jobs=1): err= 0: pid=102717: Tue Dec 3 01:00:10 2024 00:25:58.333 read: IOPS=232, BW=930KiB/s (952kB/s)(9336KiB/10042msec) 00:25:58.333 slat (usec): min=3, max=8025, avg=17.98, stdev=234.51 00:25:58.333 clat (msec): min=24, max=155, avg=68.57, stdev=25.59 00:25:58.333 lat (msec): min=24, max=155, avg=68.59, stdev=25.58 00:25:58.333 clat percentiles (msec): 00:25:58.333 | 1.00th=[ 33], 5.00th=[ 36], 10.00th=[ 38], 20.00th=[ 47], 00:25:58.333 | 30.00th=[ 52], 40.00th=[ 59], 50.00th=[ 62], 60.00th=[ 71], 00:25:58.333 | 70.00th=[ 79], 80.00th=[ 91], 90.00th=[ 107], 95.00th=[ 121], 00:25:58.333 | 99.00th=[ 142], 99.50th=[ 142], 99.90th=[ 157], 99.95th=[ 157], 00:25:58.333 | 99.99th=[ 157] 00:25:58.333 bw ( KiB/s): min= 600, max= 1504, per=4.42%, avg=931.00, stdev=230.48, samples=20 00:25:58.333 iops : min= 150, max= 376, avg=232.70, stdev=57.65, samples=20 00:25:58.333 lat (msec) : 50=28.41%, 100=59.51%, 250=12.08% 00:25:58.333 cpu : usr=35.37%, sys=0.55%, ctx=975, majf=0, minf=9 00:25:58.333 IO depths : 1=0.5%, 2=1.1%, 4=6.0%, 8=78.1%, 16=14.3%, 32=0.0%, >=64=0.0% 00:25:58.333 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:58.333 complete : 0=0.0%, 4=89.3%, 8=7.4%, 16=3.3%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:58.333 issued rwts: total=2334,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:58.333 latency : target=0, window=0, percentile=100.00%, depth=16 00:25:58.333 filename2: (groupid=0, jobs=1): err= 0: pid=102718: Tue Dec 3 01:00:10 2024 00:25:58.333 read: IOPS=237, BW=948KiB/s (971kB/s)(9524KiB/10046msec) 00:25:58.333 slat (usec): min=3, max=6325, avg=14.05, stdev=129.60 00:25:58.333 clat (msec): min=26, max=183, avg=67.40, stdev=21.18 00:25:58.333 lat (msec): min=26, max=183, avg=67.41, stdev=21.18 00:25:58.333 clat percentiles (msec): 00:25:58.333 | 1.00th=[ 32], 5.00th=[ 37], 10.00th=[ 42], 20.00th=[ 48], 00:25:58.333 | 30.00th=[ 57], 40.00th=[ 62], 50.00th=[ 64], 60.00th=[ 70], 00:25:58.333 | 70.00th=[ 78], 80.00th=[ 85], 90.00th=[ 95], 95.00th=[ 103], 00:25:58.333 | 99.00th=[ 124], 99.50th=[ 134], 
99.90th=[ 184], 99.95th=[ 184], 00:25:58.333 | 99.99th=[ 184] 00:25:58.333 bw ( KiB/s): min= 640, max= 1296, per=4.49%, avg=945.80, stdev=181.20, samples=20 00:25:58.333 iops : min= 160, max= 324, avg=236.40, stdev=45.32, samples=20 00:25:58.333 lat (msec) : 50=21.55%, 100=72.41%, 250=6.05% 00:25:58.333 cpu : usr=44.08%, sys=0.73%, ctx=1416, majf=0, minf=9 00:25:58.333 IO depths : 1=0.7%, 2=1.9%, 4=9.3%, 8=75.3%, 16=12.9%, 32=0.0%, >=64=0.0% 00:25:58.334 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:58.334 complete : 0=0.0%, 4=89.8%, 8=5.5%, 16=4.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:58.334 issued rwts: total=2381,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:58.334 latency : target=0, window=0, percentile=100.00%, depth=16 00:25:58.334 filename2: (groupid=0, jobs=1): err= 0: pid=102719: Tue Dec 3 01:00:10 2024 00:25:58.334 read: IOPS=231, BW=928KiB/s (950kB/s)(9340KiB/10065msec) 00:25:58.334 slat (usec): min=4, max=6560, avg=14.42, stdev=135.71 00:25:58.334 clat (msec): min=4, max=152, avg=68.84, stdev=22.54 00:25:58.334 lat (msec): min=5, max=152, avg=68.86, stdev=22.53 00:25:58.334 clat percentiles (msec): 00:25:58.334 | 1.00th=[ 9], 5.00th=[ 37], 10.00th=[ 45], 20.00th=[ 52], 00:25:58.334 | 30.00th=[ 57], 40.00th=[ 61], 50.00th=[ 67], 60.00th=[ 71], 00:25:58.334 | 70.00th=[ 80], 80.00th=[ 88], 90.00th=[ 101], 95.00th=[ 105], 00:25:58.334 | 99.00th=[ 138], 99.50th=[ 153], 99.90th=[ 153], 99.95th=[ 153], 00:25:58.334 | 99.99th=[ 153] 00:25:58.334 bw ( KiB/s): min= 592, max= 1200, per=4.41%, avg=927.05, stdev=165.97, samples=20 00:25:58.334 iops : min= 148, max= 300, avg=231.70, stdev=41.51, samples=20 00:25:58.334 lat (msec) : 10=1.28%, 20=0.77%, 50=16.87%, 100=70.96%, 250=10.11% 00:25:58.334 cpu : usr=35.87%, sys=0.40%, ctx=1132, majf=0, minf=9 00:25:58.334 IO depths : 1=0.7%, 2=1.5%, 4=7.2%, 8=77.6%, 16=13.0%, 32=0.0%, >=64=0.0% 00:25:58.334 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:58.334 complete : 0=0.0%, 4=89.5%, 8=6.1%, 16=4.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:58.334 issued rwts: total=2335,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:58.334 latency : target=0, window=0, percentile=100.00%, depth=16 00:25:58.334 filename2: (groupid=0, jobs=1): err= 0: pid=102720: Tue Dec 3 01:00:10 2024 00:25:58.334 read: IOPS=212, BW=850KiB/s (870kB/s)(8512KiB/10019msec) 00:25:58.334 slat (usec): min=3, max=4038, avg=14.61, stdev=87.59 00:25:58.334 clat (msec): min=24, max=165, avg=75.24, stdev=24.21 00:25:58.334 lat (msec): min=24, max=165, avg=75.25, stdev=24.21 00:25:58.334 clat percentiles (msec): 00:25:58.334 | 1.00th=[ 31], 5.00th=[ 39], 10.00th=[ 46], 20.00th=[ 55], 00:25:58.334 | 30.00th=[ 62], 40.00th=[ 67], 50.00th=[ 72], 60.00th=[ 83], 00:25:58.334 | 70.00th=[ 89], 80.00th=[ 96], 90.00th=[ 108], 95.00th=[ 115], 00:25:58.334 | 99.00th=[ 142], 99.50th=[ 144], 99.90th=[ 167], 99.95th=[ 167], 00:25:58.334 | 99.99th=[ 167] 00:25:58.334 bw ( KiB/s): min= 564, max= 1328, per=4.01%, avg=844.45, stdev=208.53, samples=20 00:25:58.334 iops : min= 141, max= 332, avg=211.10, stdev=52.14, samples=20 00:25:58.334 lat (msec) : 50=17.01%, 100=67.58%, 250=15.41% 00:25:58.334 cpu : usr=39.35%, sys=0.69%, ctx=1431, majf=0, minf=9 00:25:58.334 IO depths : 1=1.5%, 2=3.4%, 4=10.9%, 8=72.0%, 16=12.2%, 32=0.0%, >=64=0.0% 00:25:58.334 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:58.334 complete : 0=0.0%, 4=90.5%, 8=5.1%, 16=4.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:58.334 issued rwts: total=2128,0,0,0 short=0,0,0,0 
dropped=0,0,0,0 00:25:58.334 latency : target=0, window=0, percentile=100.00%, depth=16 00:25:58.334 00:25:58.334 Run status group 0 (all jobs): 00:25:58.334 READ: bw=20.5MiB/s (21.5MB/s), 764KiB/s-1049KiB/s (782kB/s-1074kB/s), io=207MiB (217MB), run=10011-10073msec 00:25:58.592 01:00:11 -- target/dif.sh@113 -- # destroy_subsystems 0 1 2 00:25:58.592 01:00:11 -- target/dif.sh@43 -- # local sub 00:25:58.592 01:00:11 -- target/dif.sh@45 -- # for sub in "$@" 00:25:58.592 01:00:11 -- target/dif.sh@46 -- # destroy_subsystem 0 00:25:58.592 01:00:11 -- target/dif.sh@36 -- # local sub_id=0 00:25:58.592 01:00:11 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:25:58.592 01:00:11 -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:58.592 01:00:11 -- common/autotest_common.sh@10 -- # set +x 00:25:58.592 01:00:11 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:58.592 01:00:11 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:25:58.592 01:00:11 -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:58.592 01:00:11 -- common/autotest_common.sh@10 -- # set +x 00:25:58.592 01:00:11 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:58.592 01:00:11 -- target/dif.sh@45 -- # for sub in "$@" 00:25:58.592 01:00:11 -- target/dif.sh@46 -- # destroy_subsystem 1 00:25:58.592 01:00:11 -- target/dif.sh@36 -- # local sub_id=1 00:25:58.592 01:00:11 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:25:58.592 01:00:11 -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:58.592 01:00:11 -- common/autotest_common.sh@10 -- # set +x 00:25:58.592 01:00:11 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:58.592 01:00:11 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:25:58.592 01:00:11 -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:58.592 01:00:11 -- common/autotest_common.sh@10 -- # set +x 00:25:58.592 01:00:11 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:58.592 01:00:11 -- target/dif.sh@45 -- # for sub in "$@" 00:25:58.592 01:00:11 -- target/dif.sh@46 -- # destroy_subsystem 2 00:25:58.592 01:00:11 -- target/dif.sh@36 -- # local sub_id=2 00:25:58.592 01:00:11 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:25:58.592 01:00:11 -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:58.592 01:00:11 -- common/autotest_common.sh@10 -- # set +x 00:25:58.592 01:00:11 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:58.592 01:00:11 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null2 00:25:58.592 01:00:11 -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:58.592 01:00:11 -- common/autotest_common.sh@10 -- # set +x 00:25:58.592 01:00:11 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:58.592 01:00:11 -- target/dif.sh@115 -- # NULL_DIF=1 00:25:58.592 01:00:11 -- target/dif.sh@115 -- # bs=8k,16k,128k 00:25:58.592 01:00:11 -- target/dif.sh@115 -- # numjobs=2 00:25:58.592 01:00:11 -- target/dif.sh@115 -- # iodepth=8 00:25:58.592 01:00:11 -- target/dif.sh@115 -- # runtime=5 00:25:58.592 01:00:11 -- target/dif.sh@115 -- # files=1 00:25:58.592 01:00:11 -- target/dif.sh@117 -- # create_subsystems 0 1 00:25:58.592 01:00:11 -- target/dif.sh@28 -- # local sub 00:25:58.592 01:00:11 -- target/dif.sh@30 -- # for sub in "$@" 00:25:58.592 01:00:11 -- target/dif.sh@31 -- # create_subsystem 0 00:25:58.592 01:00:11 -- target/dif.sh@18 -- # local sub_id=0 00:25:58.592 01:00:11 -- target/dif.sh@21 -- # rpc_cmd 
bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:25:58.592 01:00:11 -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:58.592 01:00:11 -- common/autotest_common.sh@10 -- # set +x 00:25:58.592 bdev_null0 00:25:58.592 01:00:11 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:58.592 01:00:11 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:25:58.592 01:00:11 -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:58.592 01:00:11 -- common/autotest_common.sh@10 -- # set +x 00:25:58.592 01:00:11 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:58.592 01:00:11 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:25:58.592 01:00:11 -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:58.592 01:00:11 -- common/autotest_common.sh@10 -- # set +x 00:25:58.850 01:00:11 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:58.850 01:00:11 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:25:58.850 01:00:11 -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:58.850 01:00:11 -- common/autotest_common.sh@10 -- # set +x 00:25:58.850 [2024-12-03 01:00:11.115335] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:58.850 01:00:11 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:58.850 01:00:11 -- target/dif.sh@30 -- # for sub in "$@" 00:25:58.850 01:00:11 -- target/dif.sh@31 -- # create_subsystem 1 00:25:58.850 01:00:11 -- target/dif.sh@18 -- # local sub_id=1 00:25:58.850 01:00:11 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:25:58.850 01:00:11 -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:58.850 01:00:11 -- common/autotest_common.sh@10 -- # set +x 00:25:58.850 bdev_null1 00:25:58.850 01:00:11 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:58.850 01:00:11 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:25:58.850 01:00:11 -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:58.850 01:00:11 -- common/autotest_common.sh@10 -- # set +x 00:25:58.850 01:00:11 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:58.850 01:00:11 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:25:58.850 01:00:11 -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:58.850 01:00:11 -- common/autotest_common.sh@10 -- # set +x 00:25:58.850 01:00:11 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:58.850 01:00:11 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:25:58.850 01:00:11 -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:58.850 01:00:11 -- common/autotest_common.sh@10 -- # set +x 00:25:58.850 01:00:11 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:58.850 01:00:11 -- target/dif.sh@118 -- # fio /dev/fd/62 00:25:58.850 01:00:11 -- target/dif.sh@118 -- # create_json_sub_conf 0 1 00:25:58.850 01:00:11 -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:25:58.850 01:00:11 -- nvmf/common.sh@520 -- # config=() 00:25:58.850 01:00:11 -- nvmf/common.sh@520 -- # local subsystem config 00:25:58.850 01:00:11 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:25:58.850 01:00:11 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 
00:25:58.850 { 00:25:58.850 "params": { 00:25:58.850 "name": "Nvme$subsystem", 00:25:58.850 "trtype": "$TEST_TRANSPORT", 00:25:58.850 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:58.850 "adrfam": "ipv4", 00:25:58.850 "trsvcid": "$NVMF_PORT", 00:25:58.850 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:58.850 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:58.850 "hdgst": ${hdgst:-false}, 00:25:58.850 "ddgst": ${ddgst:-false} 00:25:58.850 }, 00:25:58.850 "method": "bdev_nvme_attach_controller" 00:25:58.850 } 00:25:58.850 EOF 00:25:58.850 )") 00:25:58.850 01:00:11 -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:25:58.850 01:00:11 -- target/dif.sh@82 -- # gen_fio_conf 00:25:58.850 01:00:11 -- common/autotest_common.sh@1345 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:25:58.850 01:00:11 -- target/dif.sh@54 -- # local file 00:25:58.850 01:00:11 -- target/dif.sh@56 -- # cat 00:25:58.850 01:00:11 -- common/autotest_common.sh@1326 -- # local fio_dir=/usr/src/fio 00:25:58.850 01:00:11 -- common/autotest_common.sh@1328 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:25:58.850 01:00:11 -- common/autotest_common.sh@1328 -- # local sanitizers 00:25:58.850 01:00:11 -- common/autotest_common.sh@1329 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:25:58.850 01:00:11 -- nvmf/common.sh@542 -- # cat 00:25:58.850 01:00:11 -- common/autotest_common.sh@1330 -- # shift 00:25:58.850 01:00:11 -- common/autotest_common.sh@1332 -- # local asan_lib= 00:25:58.850 01:00:11 -- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:25:58.850 01:00:11 -- target/dif.sh@72 -- # (( file = 1 )) 00:25:58.850 01:00:11 -- target/dif.sh@72 -- # (( file <= files )) 00:25:58.850 01:00:11 -- target/dif.sh@73 -- # cat 00:25:58.850 01:00:11 -- common/autotest_common.sh@1334 -- # grep libasan 00:25:58.850 01:00:11 -- common/autotest_common.sh@1334 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:25:58.850 01:00:11 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:25:58.850 01:00:11 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:25:58.850 01:00:11 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:25:58.850 { 00:25:58.850 "params": { 00:25:58.850 "name": "Nvme$subsystem", 00:25:58.850 "trtype": "$TEST_TRANSPORT", 00:25:58.850 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:58.850 "adrfam": "ipv4", 00:25:58.850 "trsvcid": "$NVMF_PORT", 00:25:58.850 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:58.850 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:58.850 "hdgst": ${hdgst:-false}, 00:25:58.850 "ddgst": ${ddgst:-false} 00:25:58.850 }, 00:25:58.850 "method": "bdev_nvme_attach_controller" 00:25:58.850 } 00:25:58.850 EOF 00:25:58.850 )") 00:25:58.850 01:00:11 -- target/dif.sh@72 -- # (( file++ )) 00:25:58.850 01:00:11 -- target/dif.sh@72 -- # (( file <= files )) 00:25:58.850 01:00:11 -- nvmf/common.sh@542 -- # cat 00:25:58.850 01:00:11 -- nvmf/common.sh@544 -- # jq . 
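The trace above shows create_subsystems 0 1: for each subsystem the harness creates a 64 MiB null bdev with 512-byte blocks, 16 bytes of metadata and DIF type 1, wraps it in an NVMe-oF subsystem, and exposes it over TCP on 10.0.0.2:4420. Outside the harness the same setup can be reproduced with scripts/rpc.py; the sketch below is illustrative only and assumes the nvmf target application is already running with a TCP transport created (the harness does that earlier via nvmfappstart):

RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
for i in 0 1; do
    # 64 MiB null bdev, 512 B blocks, 16 B metadata, DIF type 1
    $RPC bdev_null_create "bdev_null$i" 64 512 --md-size 16 --dif-type 1
    $RPC nvmf_create_subsystem "nqn.2016-06.io.spdk:cnode$i" \
        --serial-number "53313233-$i" --allow-any-host
    $RPC nvmf_subsystem_add_ns "nqn.2016-06.io.spdk:cnode$i" "bdev_null$i"
    $RPC nvmf_subsystem_add_listener "nqn.2016-06.io.spdk:cnode$i" \
        -t tcp -a 10.0.0.2 -s 4420
done

Each rpc_cmd entry in the trace corresponds to one of these calls; gen_nvmf_target_json then emits the matching initiator-side bdev_nvme_attach_controller configuration that the fio plugin consumes, as printed in the JSON just below.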
00:25:58.850 01:00:11 -- nvmf/common.sh@545 -- # IFS=, 00:25:58.851 01:00:11 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:25:58.851 "params": { 00:25:58.851 "name": "Nvme0", 00:25:58.851 "trtype": "tcp", 00:25:58.851 "traddr": "10.0.0.2", 00:25:58.851 "adrfam": "ipv4", 00:25:58.851 "trsvcid": "4420", 00:25:58.851 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:25:58.851 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:25:58.851 "hdgst": false, 00:25:58.851 "ddgst": false 00:25:58.851 }, 00:25:58.851 "method": "bdev_nvme_attach_controller" 00:25:58.851 },{ 00:25:58.851 "params": { 00:25:58.851 "name": "Nvme1", 00:25:58.851 "trtype": "tcp", 00:25:58.851 "traddr": "10.0.0.2", 00:25:58.851 "adrfam": "ipv4", 00:25:58.851 "trsvcid": "4420", 00:25:58.851 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:25:58.851 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:25:58.851 "hdgst": false, 00:25:58.851 "ddgst": false 00:25:58.851 }, 00:25:58.851 "method": "bdev_nvme_attach_controller" 00:25:58.851 }' 00:25:58.851 01:00:11 -- common/autotest_common.sh@1334 -- # asan_lib= 00:25:58.851 01:00:11 -- common/autotest_common.sh@1335 -- # [[ -n '' ]] 00:25:58.851 01:00:11 -- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:25:58.851 01:00:11 -- common/autotest_common.sh@1334 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:25:58.851 01:00:11 -- common/autotest_common.sh@1334 -- # grep libclang_rt.asan 00:25:58.851 01:00:11 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:25:58.851 01:00:11 -- common/autotest_common.sh@1334 -- # asan_lib= 00:25:58.851 01:00:11 -- common/autotest_common.sh@1335 -- # [[ -n '' ]] 00:25:58.851 01:00:11 -- common/autotest_common.sh@1341 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:25:58.851 01:00:11 -- common/autotest_common.sh@1341 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:25:58.851 filename0: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:25:58.851 ... 00:25:58.851 filename1: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:25:58.851 ... 00:25:58.851 fio-3.35 00:25:58.851 Starting 4 threads 00:25:59.415 [2024-12-03 01:00:11.842024] rpc.c: 181:spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
00:25:59.415 [2024-12-03 01:00:11.842078] rpc.c: 90:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:26:04.677 00:26:04.677 filename0: (groupid=0, jobs=1): err= 0: pid=102856: Tue Dec 3 01:00:16 2024 00:26:04.677 read: IOPS=2269, BW=17.7MiB/s (18.6MB/s)(88.7MiB/5001msec) 00:26:04.677 slat (nsec): min=3231, max=89687, avg=17209.57, stdev=9636.36 00:26:04.677 clat (usec): min=1812, max=5541, avg=3435.09, stdev=166.35 00:26:04.677 lat (usec): min=1834, max=5549, avg=3452.30, stdev=167.11 00:26:04.677 clat percentiles (usec): 00:26:04.677 | 1.00th=[ 3064], 5.00th=[ 3228], 10.00th=[ 3294], 20.00th=[ 3326], 00:26:04.677 | 30.00th=[ 3359], 40.00th=[ 3392], 50.00th=[ 3425], 60.00th=[ 3458], 00:26:04.677 | 70.00th=[ 3490], 80.00th=[ 3523], 90.00th=[ 3621], 95.00th=[ 3687], 00:26:04.677 | 99.00th=[ 3884], 99.50th=[ 3949], 99.90th=[ 4555], 99.95th=[ 5080], 00:26:04.677 | 99.99th=[ 5276] 00:26:04.677 bw ( KiB/s): min=17920, max=18304, per=24.95%, avg=18117.33, stdev=144.00, samples=9 00:26:04.677 iops : min= 2240, max= 2288, avg=2264.67, stdev=18.00, samples=9 00:26:04.677 lat (msec) : 2=0.09%, 4=99.51%, 10=0.41% 00:26:04.677 cpu : usr=95.62%, sys=3.16%, ctx=8, majf=0, minf=0 00:26:04.677 IO depths : 1=10.3%, 2=25.0%, 4=50.0%, 8=14.7%, 16=0.0%, 32=0.0%, >=64=0.0% 00:26:04.677 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:04.677 complete : 0=0.0%, 4=89.1%, 8=10.9%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:04.677 issued rwts: total=11352,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:04.677 latency : target=0, window=0, percentile=100.00%, depth=8 00:26:04.677 filename0: (groupid=0, jobs=1): err= 0: pid=102857: Tue Dec 3 01:00:16 2024 00:26:04.677 read: IOPS=2267, BW=17.7MiB/s (18.6MB/s)(88.6MiB/5001msec) 00:26:04.677 slat (nsec): min=5774, max=73174, avg=11856.88, stdev=7648.29 00:26:04.677 clat (usec): min=1105, max=5705, avg=3482.70, stdev=192.09 00:26:04.677 lat (usec): min=1112, max=5729, avg=3494.56, stdev=191.76 00:26:04.677 clat percentiles (usec): 00:26:04.677 | 1.00th=[ 2999], 5.00th=[ 3261], 10.00th=[ 3326], 20.00th=[ 3392], 00:26:04.677 | 30.00th=[ 3425], 40.00th=[ 3425], 50.00th=[ 3458], 60.00th=[ 3490], 00:26:04.677 | 70.00th=[ 3523], 80.00th=[ 3589], 90.00th=[ 3687], 95.00th=[ 3785], 00:26:04.677 | 99.00th=[ 4047], 99.50th=[ 4359], 99.90th=[ 4817], 99.95th=[ 5014], 00:26:04.677 | 99.99th=[ 5145] 00:26:04.677 bw ( KiB/s): min=17955, max=18256, per=24.94%, avg=18108.78, stdev=106.42, samples=9 00:26:04.677 iops : min= 2244, max= 2282, avg=2263.56, stdev=13.37, samples=9 00:26:04.677 lat (msec) : 2=0.03%, 4=98.64%, 10=1.33% 00:26:04.677 cpu : usr=94.90%, sys=3.78%, ctx=103, majf=0, minf=0 00:26:04.677 IO depths : 1=4.9%, 2=12.1%, 4=62.9%, 8=20.2%, 16=0.0%, 32=0.0%, >=64=0.0% 00:26:04.677 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:04.677 complete : 0=0.0%, 4=89.6%, 8=10.4%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:04.677 issued rwts: total=11339,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:04.677 latency : target=0, window=0, percentile=100.00%, depth=8 00:26:04.677 filename1: (groupid=0, jobs=1): err= 0: pid=102858: Tue Dec 3 01:00:16 2024 00:26:04.677 read: IOPS=2272, BW=17.8MiB/s (18.6MB/s)(88.8MiB/5002msec) 00:26:04.677 slat (nsec): min=5736, max=71073, avg=8945.98, stdev=5590.17 00:26:04.677 clat (usec): min=1077, max=5406, avg=3475.55, stdev=169.53 00:26:04.677 lat (usec): min=1084, max=5413, avg=3484.49, stdev=169.62 00:26:04.677 clat percentiles (usec): 00:26:04.677 | 1.00th=[ 3097], 
5.00th=[ 3261], 10.00th=[ 3326], 20.00th=[ 3392], 00:26:04.677 | 30.00th=[ 3425], 40.00th=[ 3458], 50.00th=[ 3458], 60.00th=[ 3490], 00:26:04.677 | 70.00th=[ 3523], 80.00th=[ 3556], 90.00th=[ 3654], 95.00th=[ 3752], 00:26:04.677 | 99.00th=[ 3916], 99.50th=[ 4015], 99.90th=[ 4228], 99.95th=[ 4293], 00:26:04.677 | 99.99th=[ 5407] 00:26:04.677 bw ( KiB/s): min=18048, max=18304, per=24.99%, avg=18147.56, stdev=85.33, samples=9 00:26:04.677 iops : min= 2256, max= 2288, avg=2268.44, stdev=10.67, samples=9 00:26:04.677 lat (msec) : 2=0.12%, 4=99.35%, 10=0.53% 00:26:04.677 cpu : usr=94.58%, sys=4.06%, ctx=34, majf=0, minf=9 00:26:04.677 IO depths : 1=9.2%, 2=23.8%, 4=51.2%, 8=15.8%, 16=0.0%, 32=0.0%, >=64=0.0% 00:26:04.677 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:04.677 complete : 0=0.0%, 4=89.2%, 8=10.8%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:04.677 issued rwts: total=11368,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:04.677 latency : target=0, window=0, percentile=100.00%, depth=8 00:26:04.677 filename1: (groupid=0, jobs=1): err= 0: pid=102859: Tue Dec 3 01:00:16 2024 00:26:04.677 read: IOPS=2268, BW=17.7MiB/s (18.6MB/s)(88.6MiB/5001msec) 00:26:04.677 slat (usec): min=5, max=533, avg=17.74, stdev=11.26 00:26:04.677 clat (usec): min=1683, max=5398, avg=3435.26, stdev=167.26 00:26:04.677 lat (usec): min=1706, max=5410, avg=3452.99, stdev=167.86 00:26:04.677 clat percentiles (usec): 00:26:04.677 | 1.00th=[ 3097], 5.00th=[ 3228], 10.00th=[ 3294], 20.00th=[ 3326], 00:26:04.677 | 30.00th=[ 3359], 40.00th=[ 3392], 50.00th=[ 3425], 60.00th=[ 3458], 00:26:04.677 | 70.00th=[ 3490], 80.00th=[ 3523], 90.00th=[ 3621], 95.00th=[ 3687], 00:26:04.677 | 99.00th=[ 3884], 99.50th=[ 4047], 99.90th=[ 4752], 99.95th=[ 5211], 00:26:04.677 | 99.99th=[ 5211] 00:26:04.678 bw ( KiB/s): min=17920, max=18304, per=24.95%, avg=18119.11, stdev=144.69, samples=9 00:26:04.678 iops : min= 2240, max= 2288, avg=2264.89, stdev=18.09, samples=9 00:26:04.678 lat (msec) : 2=0.08%, 4=99.37%, 10=0.55% 00:26:04.678 cpu : usr=95.50%, sys=2.98%, ctx=51, majf=0, minf=0 00:26:04.678 IO depths : 1=11.5%, 2=24.9%, 4=50.1%, 8=13.5%, 16=0.0%, 32=0.0%, >=64=0.0% 00:26:04.678 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:04.678 complete : 0=0.0%, 4=89.0%, 8=11.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:04.678 issued rwts: total=11344,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:04.678 latency : target=0, window=0, percentile=100.00%, depth=8 00:26:04.678 00:26:04.678 Run status group 0 (all jobs): 00:26:04.678 READ: bw=70.9MiB/s (74.4MB/s), 17.7MiB/s-17.8MiB/s (18.6MB/s-18.6MB/s), io=355MiB (372MB), run=5001-5002msec 00:26:04.936 01:00:17 -- target/dif.sh@119 -- # destroy_subsystems 0 1 00:26:04.936 01:00:17 -- target/dif.sh@43 -- # local sub 00:26:04.936 01:00:17 -- target/dif.sh@45 -- # for sub in "$@" 00:26:04.936 01:00:17 -- target/dif.sh@46 -- # destroy_subsystem 0 00:26:04.936 01:00:17 -- target/dif.sh@36 -- # local sub_id=0 00:26:04.936 01:00:17 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:26:04.936 01:00:17 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:04.936 01:00:17 -- common/autotest_common.sh@10 -- # set +x 00:26:04.936 01:00:17 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:04.936 01:00:17 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:26:04.936 01:00:17 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:04.936 01:00:17 -- common/autotest_common.sh@10 -- # set +x 00:26:04.936 
01:00:17 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:04.936 01:00:17 -- target/dif.sh@45 -- # for sub in "$@" 00:26:04.936 01:00:17 -- target/dif.sh@46 -- # destroy_subsystem 1 00:26:04.936 01:00:17 -- target/dif.sh@36 -- # local sub_id=1 00:26:04.936 01:00:17 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:26:04.937 01:00:17 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:04.937 01:00:17 -- common/autotest_common.sh@10 -- # set +x 00:26:04.937 01:00:17 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:04.937 01:00:17 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:26:04.937 01:00:17 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:04.937 01:00:17 -- common/autotest_common.sh@10 -- # set +x 00:26:04.937 01:00:17 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:04.937 00:26:04.937 real 0m23.645s 00:26:04.937 user 2m8.654s 00:26:04.937 sys 0m3.552s 00:26:04.937 01:00:17 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:26:04.937 ************************************ 00:26:04.937 01:00:17 -- common/autotest_common.sh@10 -- # set +x 00:26:04.937 END TEST fio_dif_rand_params 00:26:04.937 ************************************ 00:26:04.937 01:00:17 -- target/dif.sh@144 -- # run_test fio_dif_digest fio_dif_digest 00:26:04.937 01:00:17 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:26:04.937 01:00:17 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:26:04.937 01:00:17 -- common/autotest_common.sh@10 -- # set +x 00:26:04.937 ************************************ 00:26:04.937 START TEST fio_dif_digest 00:26:04.937 ************************************ 00:26:04.937 01:00:17 -- common/autotest_common.sh@1114 -- # fio_dif_digest 00:26:04.937 01:00:17 -- target/dif.sh@123 -- # local NULL_DIF 00:26:04.937 01:00:17 -- target/dif.sh@124 -- # local bs numjobs runtime iodepth files 00:26:04.937 01:00:17 -- target/dif.sh@125 -- # local hdgst ddgst 00:26:04.937 01:00:17 -- target/dif.sh@127 -- # NULL_DIF=3 00:26:04.937 01:00:17 -- target/dif.sh@127 -- # bs=128k,128k,128k 00:26:04.937 01:00:17 -- target/dif.sh@127 -- # numjobs=3 00:26:04.937 01:00:17 -- target/dif.sh@127 -- # iodepth=3 00:26:04.937 01:00:17 -- target/dif.sh@127 -- # runtime=10 00:26:04.937 01:00:17 -- target/dif.sh@128 -- # hdgst=true 00:26:04.937 01:00:17 -- target/dif.sh@128 -- # ddgst=true 00:26:04.937 01:00:17 -- target/dif.sh@130 -- # create_subsystems 0 00:26:04.937 01:00:17 -- target/dif.sh@28 -- # local sub 00:26:04.937 01:00:17 -- target/dif.sh@30 -- # for sub in "$@" 00:26:04.937 01:00:17 -- target/dif.sh@31 -- # create_subsystem 0 00:26:04.937 01:00:17 -- target/dif.sh@18 -- # local sub_id=0 00:26:04.937 01:00:17 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:26:04.937 01:00:17 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:04.937 01:00:17 -- common/autotest_common.sh@10 -- # set +x 00:26:04.937 bdev_null0 00:26:04.937 01:00:17 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:04.937 01:00:17 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:26:04.937 01:00:17 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:04.937 01:00:17 -- common/autotest_common.sh@10 -- # set +x 00:26:04.937 01:00:17 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:04.937 01:00:17 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 
00:26:04.937 01:00:17 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:04.937 01:00:17 -- common/autotest_common.sh@10 -- # set +x 00:26:04.937 01:00:17 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:04.937 01:00:17 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:26:04.937 01:00:17 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:04.937 01:00:17 -- common/autotest_common.sh@10 -- # set +x 00:26:04.937 [2024-12-03 01:00:17.324070] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:04.937 01:00:17 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:04.937 01:00:17 -- target/dif.sh@131 -- # fio /dev/fd/62 00:26:04.937 01:00:17 -- target/dif.sh@131 -- # create_json_sub_conf 0 00:26:04.937 01:00:17 -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:26:04.937 01:00:17 -- nvmf/common.sh@520 -- # config=() 00:26:04.937 01:00:17 -- nvmf/common.sh@520 -- # local subsystem config 00:26:04.937 01:00:17 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:26:04.937 01:00:17 -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:26:04.937 01:00:17 -- target/dif.sh@82 -- # gen_fio_conf 00:26:04.937 01:00:17 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:26:04.937 { 00:26:04.937 "params": { 00:26:04.937 "name": "Nvme$subsystem", 00:26:04.937 "trtype": "$TEST_TRANSPORT", 00:26:04.937 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:04.937 "adrfam": "ipv4", 00:26:04.937 "trsvcid": "$NVMF_PORT", 00:26:04.937 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:04.937 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:04.937 "hdgst": ${hdgst:-false}, 00:26:04.937 "ddgst": ${ddgst:-false} 00:26:04.937 }, 00:26:04.937 "method": "bdev_nvme_attach_controller" 00:26:04.937 } 00:26:04.937 EOF 00:26:04.937 )") 00:26:04.937 01:00:17 -- common/autotest_common.sh@1345 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:26:04.937 01:00:17 -- target/dif.sh@54 -- # local file 00:26:04.937 01:00:17 -- target/dif.sh@56 -- # cat 00:26:04.937 01:00:17 -- common/autotest_common.sh@1326 -- # local fio_dir=/usr/src/fio 00:26:04.937 01:00:17 -- common/autotest_common.sh@1328 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:26:04.937 01:00:17 -- common/autotest_common.sh@1328 -- # local sanitizers 00:26:04.937 01:00:17 -- common/autotest_common.sh@1329 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:26:04.937 01:00:17 -- common/autotest_common.sh@1330 -- # shift 00:26:04.937 01:00:17 -- common/autotest_common.sh@1332 -- # local asan_lib= 00:26:04.937 01:00:17 -- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:26:04.937 01:00:17 -- nvmf/common.sh@542 -- # cat 00:26:04.937 01:00:17 -- common/autotest_common.sh@1334 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:26:04.937 01:00:17 -- target/dif.sh@72 -- # (( file = 1 )) 00:26:04.937 01:00:17 -- target/dif.sh@72 -- # (( file <= files )) 00:26:04.937 01:00:17 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:26:04.937 01:00:17 -- common/autotest_common.sh@1334 -- # grep libasan 00:26:04.937 01:00:17 -- nvmf/common.sh@544 -- # jq . 
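The fio_bdev call traced here amounts to preloading SPDK's fio bdev plugin and handing fio two descriptors: /dev/fd/62 carries the bdev JSON produced by gen_nvmf_target_json (note "hdgst": true and "ddgst": true in this run, which turns on NVMe/TCP header and data digest validation), and /dev/fd/61 carries the generated job file. Run by hand with ordinary files it looks roughly like the sketch below; the job-file contents and the Nvme0n1 bdev name are illustrative assumptions, not the literal output of gen_fio_conf:

PLUGIN=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev
# bdev.json is assumed to hold the printed bdev_nvme_attach_controller config
cat > digest.fio <<'EOF'
[global]
ioengine=spdk_bdev
thread=1
rw=randread
bs=128k
iodepth=3
runtime=10
time_based=1

[filename0]
# assumed bdev name: controller "Nvme0", namespace 1
filename=Nvme0n1
numjobs=3
EOF
LD_PRELOAD="$PLUGIN" /usr/src/fio/fio --ioengine=spdk_bdev \
    --spdk_json_conf ./bdev.json ./digest.fio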
00:26:04.937 01:00:17 -- nvmf/common.sh@545 -- # IFS=, 00:26:04.937 01:00:17 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:26:04.937 "params": { 00:26:04.937 "name": "Nvme0", 00:26:04.937 "trtype": "tcp", 00:26:04.937 "traddr": "10.0.0.2", 00:26:04.937 "adrfam": "ipv4", 00:26:04.937 "trsvcid": "4420", 00:26:04.937 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:26:04.937 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:26:04.937 "hdgst": true, 00:26:04.937 "ddgst": true 00:26:04.937 }, 00:26:04.937 "method": "bdev_nvme_attach_controller" 00:26:04.937 }' 00:26:04.937 01:00:17 -- common/autotest_common.sh@1334 -- # asan_lib= 00:26:04.937 01:00:17 -- common/autotest_common.sh@1335 -- # [[ -n '' ]] 00:26:04.937 01:00:17 -- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:26:04.937 01:00:17 -- common/autotest_common.sh@1334 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:26:04.937 01:00:17 -- common/autotest_common.sh@1334 -- # grep libclang_rt.asan 00:26:04.937 01:00:17 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:26:04.937 01:00:17 -- common/autotest_common.sh@1334 -- # asan_lib= 00:26:04.937 01:00:17 -- common/autotest_common.sh@1335 -- # [[ -n '' ]] 00:26:04.937 01:00:17 -- common/autotest_common.sh@1341 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:26:04.937 01:00:17 -- common/autotest_common.sh@1341 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:26:05.196 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:26:05.196 ... 00:26:05.196 fio-3.35 00:26:05.196 Starting 3 threads 00:26:05.454 [2024-12-03 01:00:17.894115] rpc.c: 181:spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
00:26:05.454 [2024-12-03 01:00:17.894190] rpc.c: 90:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:26:17.657 00:26:17.657 filename0: (groupid=0, jobs=1): err= 0: pid=102965: Tue Dec 3 01:00:28 2024 00:26:17.657 read: IOPS=266, BW=33.3MiB/s (34.9MB/s)(333MiB/10004msec) 00:26:17.657 slat (nsec): min=5491, max=66035, avg=16761.60, stdev=5989.02 00:26:17.657 clat (usec): min=5832, max=17663, avg=11238.66, stdev=2217.22 00:26:17.657 lat (usec): min=5849, max=17681, avg=11255.42, stdev=2217.17 00:26:17.657 clat percentiles (usec): 00:26:17.657 | 1.00th=[ 6521], 5.00th=[ 6915], 10.00th=[ 7308], 20.00th=[ 9110], 00:26:17.657 | 30.00th=[10945], 40.00th=[11469], 50.00th=[11863], 60.00th=[12256], 00:26:17.657 | 70.00th=[12518], 80.00th=[12911], 90.00th=[13566], 95.00th=[14091], 00:26:17.657 | 99.00th=[14877], 99.50th=[15270], 99.90th=[16712], 99.95th=[16712], 00:26:17.657 | 99.99th=[17695] 00:26:17.657 bw ( KiB/s): min=29696, max=39936, per=34.97%, avg=34044.47, stdev=2950.88, samples=19 00:26:17.657 iops : min= 232, max= 312, avg=265.95, stdev=23.06, samples=19 00:26:17.657 lat (msec) : 10=22.70%, 20=77.30% 00:26:17.657 cpu : usr=92.68%, sys=5.29%, ctx=9, majf=0, minf=9 00:26:17.657 IO depths : 1=1.1%, 2=98.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:26:17.657 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:17.657 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:17.657 issued rwts: total=2665,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:17.657 latency : target=0, window=0, percentile=100.00%, depth=3 00:26:17.657 filename0: (groupid=0, jobs=1): err= 0: pid=102966: Tue Dec 3 01:00:28 2024 00:26:17.657 read: IOPS=260, BW=32.5MiB/s (34.1MB/s)(326MiB/10006msec) 00:26:17.657 slat (nsec): min=6186, max=71731, avg=14595.42, stdev=6175.34 00:26:17.657 clat (usec): min=7691, max=91743, avg=11509.81, stdev=8192.85 00:26:17.657 lat (usec): min=7701, max=91753, avg=11524.41, stdev=8192.77 00:26:17.658 clat percentiles (usec): 00:26:17.658 | 1.00th=[ 8225], 5.00th=[ 8717], 10.00th=[ 8848], 20.00th=[ 9241], 00:26:17.658 | 30.00th=[ 9503], 40.00th=[ 9634], 50.00th=[ 9896], 60.00th=[10028], 00:26:17.658 | 70.00th=[10290], 80.00th=[10552], 90.00th=[11076], 95.00th=[11863], 00:26:17.658 | 99.00th=[51119], 99.50th=[51643], 99.90th=[54264], 99.95th=[90702], 00:26:17.658 | 99.99th=[91751] 00:26:17.658 bw ( KiB/s): min=26624, max=39424, per=34.22%, avg=33306.95, stdev=3182.48, samples=19 00:26:17.658 iops : min= 208, max= 308, avg=260.21, stdev=24.86, samples=19 00:26:17.658 lat (msec) : 10=56.64%, 20=39.40%, 50=1.19%, 100=2.76% 00:26:17.658 cpu : usr=93.87%, sys=4.50%, ctx=15, majf=0, minf=9 00:26:17.658 IO depths : 1=0.1%, 2=100.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:26:17.658 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:17.658 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:17.658 issued rwts: total=2604,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:17.658 latency : target=0, window=0, percentile=100.00%, depth=3 00:26:17.658 filename0: (groupid=0, jobs=1): err= 0: pid=102967: Tue Dec 3 01:00:28 2024 00:26:17.658 read: IOPS=235, BW=29.5MiB/s (30.9MB/s)(296MiB/10044msec) 00:26:17.658 slat (usec): min=6, max=213, avg=16.13, stdev= 7.95 00:26:17.658 clat (usec): min=7038, max=50975, avg=12682.77, stdev=2522.21 00:26:17.658 lat (usec): min=7058, max=50994, avg=12698.89, stdev=2522.12 00:26:17.658 clat percentiles (usec): 00:26:17.658 | 1.00th=[ 7635], 
5.00th=[ 8029], 10.00th=[ 8356], 20.00th=[10159], 00:26:17.658 | 30.00th=[12780], 40.00th=[13173], 50.00th=[13435], 60.00th=[13698], 00:26:17.658 | 70.00th=[13829], 80.00th=[14222], 90.00th=[14746], 95.00th=[15270], 00:26:17.658 | 99.00th=[16319], 99.50th=[16581], 99.90th=[21890], 99.95th=[46924], 00:26:17.658 | 99.99th=[51119] 00:26:17.658 bw ( KiB/s): min=26112, max=34304, per=31.13%, avg=30297.45, stdev=2217.32, samples=20 00:26:17.658 iops : min= 204, max= 268, avg=236.65, stdev=17.35, samples=20 00:26:17.658 lat (msec) : 10=19.46%, 20=80.37%, 50=0.13%, 100=0.04% 00:26:17.658 cpu : usr=93.40%, sys=4.79%, ctx=98, majf=0, minf=9 00:26:17.658 IO depths : 1=1.4%, 2=98.6%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:26:17.658 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:17.658 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:17.658 issued rwts: total=2369,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:17.658 latency : target=0, window=0, percentile=100.00%, depth=3 00:26:17.658 00:26:17.658 Run status group 0 (all jobs): 00:26:17.658 READ: bw=95.1MiB/s (99.7MB/s), 29.5MiB/s-33.3MiB/s (30.9MB/s-34.9MB/s), io=955MiB (1001MB), run=10004-10044msec 00:26:17.658 01:00:28 -- target/dif.sh@132 -- # destroy_subsystems 0 00:26:17.658 01:00:28 -- target/dif.sh@43 -- # local sub 00:26:17.658 01:00:28 -- target/dif.sh@45 -- # for sub in "$@" 00:26:17.658 01:00:28 -- target/dif.sh@46 -- # destroy_subsystem 0 00:26:17.658 01:00:28 -- target/dif.sh@36 -- # local sub_id=0 00:26:17.658 01:00:28 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:26:17.658 01:00:28 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:17.658 01:00:28 -- common/autotest_common.sh@10 -- # set +x 00:26:17.658 01:00:28 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:17.658 01:00:28 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:26:17.658 01:00:28 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:17.658 01:00:28 -- common/autotest_common.sh@10 -- # set +x 00:26:17.658 01:00:28 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:17.658 00:26:17.658 real 0m10.991s 00:26:17.658 user 0m28.674s 00:26:17.658 sys 0m1.731s 00:26:17.658 01:00:28 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:26:17.658 ************************************ 00:26:17.658 END TEST fio_dif_digest 00:26:17.658 ************************************ 00:26:17.658 01:00:28 -- common/autotest_common.sh@10 -- # set +x 00:26:17.658 01:00:28 -- target/dif.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:26:17.658 01:00:28 -- target/dif.sh@147 -- # nvmftestfini 00:26:17.658 01:00:28 -- nvmf/common.sh@476 -- # nvmfcleanup 00:26:17.658 01:00:28 -- nvmf/common.sh@116 -- # sync 00:26:17.658 01:00:28 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:26:17.658 01:00:28 -- nvmf/common.sh@119 -- # set +e 00:26:17.658 01:00:28 -- nvmf/common.sh@120 -- # for i in {1..20} 00:26:17.658 01:00:28 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:26:17.658 rmmod nvme_tcp 00:26:17.658 rmmod nvme_fabrics 00:26:17.658 rmmod nvme_keyring 00:26:17.658 01:00:28 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:26:17.658 01:00:28 -- nvmf/common.sh@123 -- # set -e 00:26:17.658 01:00:28 -- nvmf/common.sh@124 -- # return 0 00:26:17.658 01:00:28 -- nvmf/common.sh@477 -- # '[' -n 102190 ']' 00:26:17.658 01:00:28 -- nvmf/common.sh@478 -- # killprocess 102190 00:26:17.658 01:00:28 -- common/autotest_common.sh@936 -- # '[' -z 102190 ']' 00:26:17.658 
01:00:28 -- common/autotest_common.sh@940 -- # kill -0 102190 00:26:17.658 01:00:28 -- common/autotest_common.sh@941 -- # uname 00:26:17.658 01:00:28 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:26:17.658 01:00:28 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 102190 00:26:17.658 01:00:28 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:26:17.658 01:00:28 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:26:17.658 killing process with pid 102190 00:26:17.658 01:00:28 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 102190' 00:26:17.658 01:00:28 -- common/autotest_common.sh@955 -- # kill 102190 00:26:17.658 01:00:28 -- common/autotest_common.sh@960 -- # wait 102190 00:26:17.658 01:00:28 -- nvmf/common.sh@480 -- # '[' iso == iso ']' 00:26:17.658 01:00:28 -- nvmf/common.sh@481 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:26:17.658 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:26:17.658 Waiting for block devices as requested 00:26:17.658 0000:00:06.0 (1b36 0010): uio_pci_generic -> nvme 00:26:17.658 0000:00:07.0 (1b36 0010): uio_pci_generic -> nvme 00:26:17.658 01:00:29 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:26:17.658 01:00:29 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:26:17.658 01:00:29 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:26:17.658 01:00:29 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:26:17.658 01:00:29 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:17.658 01:00:29 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:26:17.658 01:00:29 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:17.658 01:00:29 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:26:17.658 00:26:17.658 real 1m0.443s 00:26:17.658 user 3m52.547s 00:26:17.658 sys 0m14.188s 00:26:17.658 01:00:29 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:26:17.658 01:00:29 -- common/autotest_common.sh@10 -- # set +x 00:26:17.658 ************************************ 00:26:17.658 END TEST nvmf_dif 00:26:17.658 ************************************ 00:26:17.658 01:00:29 -- spdk/autotest.sh@288 -- # run_test nvmf_abort_qd_sizes /home/vagrant/spdk_repo/spdk/test/nvmf/target/abort_qd_sizes.sh 00:26:17.658 01:00:29 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:26:17.658 01:00:29 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:26:17.658 01:00:29 -- common/autotest_common.sh@10 -- # set +x 00:26:17.658 ************************************ 00:26:17.658 START TEST nvmf_abort_qd_sizes 00:26:17.658 ************************************ 00:26:17.658 01:00:29 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/abort_qd_sizes.sh 00:26:17.658 * Looking for test storage... 
00:26:17.658 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:26:17.658 01:00:29 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:26:17.658 01:00:29 -- common/autotest_common.sh@1690 -- # lcov --version 00:26:17.658 01:00:29 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:26:17.658 01:00:29 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:26:17.658 01:00:29 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:26:17.658 01:00:29 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:26:17.658 01:00:29 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:26:17.658 01:00:29 -- scripts/common.sh@335 -- # IFS=.-: 00:26:17.658 01:00:29 -- scripts/common.sh@335 -- # read -ra ver1 00:26:17.658 01:00:29 -- scripts/common.sh@336 -- # IFS=.-: 00:26:17.658 01:00:29 -- scripts/common.sh@336 -- # read -ra ver2 00:26:17.658 01:00:29 -- scripts/common.sh@337 -- # local 'op=<' 00:26:17.658 01:00:29 -- scripts/common.sh@339 -- # ver1_l=2 00:26:17.658 01:00:29 -- scripts/common.sh@340 -- # ver2_l=1 00:26:17.658 01:00:29 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:26:17.658 01:00:29 -- scripts/common.sh@343 -- # case "$op" in 00:26:17.658 01:00:29 -- scripts/common.sh@344 -- # : 1 00:26:17.658 01:00:29 -- scripts/common.sh@363 -- # (( v = 0 )) 00:26:17.658 01:00:29 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:26:17.658 01:00:29 -- scripts/common.sh@364 -- # decimal 1 00:26:17.658 01:00:29 -- scripts/common.sh@352 -- # local d=1 00:26:17.658 01:00:29 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:26:17.658 01:00:29 -- scripts/common.sh@354 -- # echo 1 00:26:17.658 01:00:29 -- scripts/common.sh@364 -- # ver1[v]=1 00:26:17.658 01:00:29 -- scripts/common.sh@365 -- # decimal 2 00:26:17.658 01:00:29 -- scripts/common.sh@352 -- # local d=2 00:26:17.658 01:00:29 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:26:17.658 01:00:29 -- scripts/common.sh@354 -- # echo 2 00:26:17.658 01:00:29 -- scripts/common.sh@365 -- # ver2[v]=2 00:26:17.658 01:00:29 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:26:17.659 01:00:29 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:26:17.659 01:00:29 -- scripts/common.sh@367 -- # return 0 00:26:17.659 01:00:29 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:26:17.659 01:00:29 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:26:17.659 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:17.659 --rc genhtml_branch_coverage=1 00:26:17.659 --rc genhtml_function_coverage=1 00:26:17.659 --rc genhtml_legend=1 00:26:17.659 --rc geninfo_all_blocks=1 00:26:17.659 --rc geninfo_unexecuted_blocks=1 00:26:17.659 00:26:17.659 ' 00:26:17.659 01:00:29 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:26:17.659 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:17.659 --rc genhtml_branch_coverage=1 00:26:17.659 --rc genhtml_function_coverage=1 00:26:17.659 --rc genhtml_legend=1 00:26:17.659 --rc geninfo_all_blocks=1 00:26:17.659 --rc geninfo_unexecuted_blocks=1 00:26:17.659 00:26:17.659 ' 00:26:17.659 01:00:29 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:26:17.659 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:17.659 --rc genhtml_branch_coverage=1 00:26:17.659 --rc genhtml_function_coverage=1 00:26:17.659 --rc genhtml_legend=1 00:26:17.659 --rc geninfo_all_blocks=1 00:26:17.659 --rc geninfo_unexecuted_blocks=1 00:26:17.659 00:26:17.659 ' 00:26:17.659 
01:00:29 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:26:17.659 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:17.659 --rc genhtml_branch_coverage=1 00:26:17.659 --rc genhtml_function_coverage=1 00:26:17.659 --rc genhtml_legend=1 00:26:17.659 --rc geninfo_all_blocks=1 00:26:17.659 --rc geninfo_unexecuted_blocks=1 00:26:17.659 00:26:17.659 ' 00:26:17.659 01:00:29 -- target/abort_qd_sizes.sh@14 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:26:17.659 01:00:29 -- nvmf/common.sh@7 -- # uname -s 00:26:17.659 01:00:29 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:17.659 01:00:29 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:17.659 01:00:29 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:17.659 01:00:29 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:17.659 01:00:29 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:17.659 01:00:29 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:17.659 01:00:29 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:17.659 01:00:29 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:17.659 01:00:29 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:17.659 01:00:29 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:17.659 01:00:29 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:15939434-fa82-47b6-ae7c-b62ba203cee8 00:26:17.659 01:00:29 -- nvmf/common.sh@18 -- # NVME_HOSTID=15939434-fa82-47b6-ae7c-b62ba203cee8 00:26:17.659 01:00:29 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:17.659 01:00:29 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:17.659 01:00:29 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:26:17.659 01:00:29 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:26:17.659 01:00:29 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:17.659 01:00:29 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:17.659 01:00:29 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:17.659 01:00:29 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:17.659 01:00:29 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:17.659 01:00:29 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:17.659 01:00:29 -- paths/export.sh@5 -- # export PATH 00:26:17.659 01:00:29 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:17.659 01:00:29 -- nvmf/common.sh@46 -- # : 0 00:26:17.659 01:00:29 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:26:17.659 01:00:29 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:26:17.659 01:00:29 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:26:17.659 01:00:29 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:17.659 01:00:29 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:17.659 01:00:29 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:26:17.659 01:00:29 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:26:17.659 01:00:29 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:26:17.659 01:00:29 -- target/abort_qd_sizes.sh@73 -- # nvmftestinit 00:26:17.659 01:00:29 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:26:17.659 01:00:29 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:17.659 01:00:29 -- nvmf/common.sh@436 -- # prepare_net_devs 00:26:17.659 01:00:29 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:26:17.659 01:00:29 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:26:17.659 01:00:29 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:17.659 01:00:29 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:26:17.659 01:00:29 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:17.659 01:00:29 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:26:17.659 01:00:29 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:26:17.659 01:00:29 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:26:17.659 01:00:29 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:26:17.659 01:00:29 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:26:17.659 01:00:29 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:26:17.659 01:00:29 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:17.659 01:00:29 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:17.659 01:00:29 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:26:17.659 01:00:29 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:26:17.659 01:00:29 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:26:17.659 01:00:29 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:26:17.659 01:00:29 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:26:17.659 01:00:29 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:17.659 01:00:29 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:26:17.659 01:00:29 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:26:17.659 01:00:29 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:26:17.659 01:00:29 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:26:17.659 01:00:29 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:26:17.659 01:00:29 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:26:17.659 Cannot find device "nvmf_tgt_br" 00:26:17.659 01:00:29 -- nvmf/common.sh@154 -- # true 00:26:17.659 01:00:29 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:26:17.659 Cannot find device "nvmf_tgt_br2" 00:26:17.659 01:00:29 -- nvmf/common.sh@155 -- # true 
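The nvmf_veth_init trace around this point first tears down any stale interfaces and then rebuilds the loopback network that the NVMe/TCP tests run over. Condensed into plain commands, using only names and addresses that appear in the trace itself (and omitting the second target interface nvmf_tgt_if2 / 10.0.0.3, which is set up the same way), the topology amounts to roughly the following sketch; the script itself orders and guards these steps differently:

  # target-side veth peer lives in its own network namespace
  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br
  ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
  # 10.0.0.1 = initiator on the host, 10.0.0.2 = target inside the namespace
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
  ip link set nvmf_init_if up && ip link set nvmf_init_br up && ip link set nvmf_tgt_br up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  # bridge the host-side peers together and open the NVMe/TCP port
  ip link add nvmf_br type bridge && ip link set nvmf_br up
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br master nvmf_br
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2   # same sanity check the trace performs below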
00:26:17.659 01:00:29 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:26:17.659 01:00:29 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:26:17.659 Cannot find device "nvmf_tgt_br" 00:26:17.659 01:00:29 -- nvmf/common.sh@157 -- # true 00:26:17.659 01:00:29 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:26:17.659 Cannot find device "nvmf_tgt_br2" 00:26:17.659 01:00:29 -- nvmf/common.sh@158 -- # true 00:26:17.659 01:00:29 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:26:17.659 01:00:29 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:26:17.659 01:00:29 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:26:17.659 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:26:17.659 01:00:29 -- nvmf/common.sh@161 -- # true 00:26:17.659 01:00:29 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:26:17.659 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:26:17.659 01:00:29 -- nvmf/common.sh@162 -- # true 00:26:17.659 01:00:29 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:26:17.659 01:00:29 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:26:17.659 01:00:29 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:26:17.659 01:00:29 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:26:17.659 01:00:29 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:26:17.659 01:00:29 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:26:17.659 01:00:29 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:26:17.659 01:00:29 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:26:17.659 01:00:29 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:26:17.659 01:00:29 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:26:17.659 01:00:29 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:26:17.659 01:00:29 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:26:17.659 01:00:29 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:26:17.659 01:00:29 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:26:17.659 01:00:29 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:26:17.660 01:00:29 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:26:17.660 01:00:29 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:26:17.660 01:00:29 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:26:17.660 01:00:29 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:26:17.660 01:00:29 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:26:17.660 01:00:29 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:26:17.660 01:00:29 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:26:17.660 01:00:29 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:26:17.660 01:00:29 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:26:17.660 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:26:17.660 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.062 ms 00:26:17.660 00:26:17.660 --- 10.0.0.2 ping statistics --- 00:26:17.660 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:17.660 rtt min/avg/max/mdev = 0.062/0.062/0.062/0.000 ms 00:26:17.660 01:00:29 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:26:17.660 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:26:17.660 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.051 ms 00:26:17.660 00:26:17.660 --- 10.0.0.3 ping statistics --- 00:26:17.660 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:17.660 rtt min/avg/max/mdev = 0.051/0.051/0.051/0.000 ms 00:26:17.660 01:00:29 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:26:17.660 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:26:17.660 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.023 ms 00:26:17.660 00:26:17.660 --- 10.0.0.1 ping statistics --- 00:26:17.660 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:17.660 rtt min/avg/max/mdev = 0.023/0.023/0.023/0.000 ms 00:26:17.660 01:00:29 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:17.660 01:00:29 -- nvmf/common.sh@421 -- # return 0 00:26:17.660 01:00:29 -- nvmf/common.sh@438 -- # '[' iso == iso ']' 00:26:17.660 01:00:29 -- nvmf/common.sh@439 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:26:18.240 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:26:18.240 0000:00:06.0 (1b36 0010): nvme -> uio_pci_generic 00:26:18.240 0000:00:07.0 (1b36 0010): nvme -> uio_pci_generic 00:26:18.499 01:00:30 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:18.499 01:00:30 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:26:18.499 01:00:30 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:26:18.499 01:00:30 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:18.499 01:00:30 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:26:18.499 01:00:30 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:26:18.499 01:00:30 -- target/abort_qd_sizes.sh@74 -- # nvmfappstart -m 0xf 00:26:18.499 01:00:30 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:26:18.499 01:00:30 -- common/autotest_common.sh@722 -- # xtrace_disable 00:26:18.499 01:00:30 -- common/autotest_common.sh@10 -- # set +x 00:26:18.499 01:00:30 -- nvmf/common.sh@469 -- # nvmfpid=103558 00:26:18.499 01:00:30 -- nvmf/common.sh@470 -- # waitforlisten 103558 00:26:18.499 01:00:30 -- common/autotest_common.sh@829 -- # '[' -z 103558 ']' 00:26:18.499 01:00:30 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xf 00:26:18.499 01:00:30 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:18.499 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:18.499 01:00:30 -- common/autotest_common.sh@834 -- # local max_retries=100 00:26:18.499 01:00:30 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:18.499 01:00:30 -- common/autotest_common.sh@838 -- # xtrace_disable 00:26:18.499 01:00:30 -- common/autotest_common.sh@10 -- # set +x 00:26:18.499 [2024-12-03 01:00:30.858025] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:26:18.499 [2024-12-03 01:00:30.858121] [ DPDK EAL parameters: nvmf -c 0xf --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:18.499 [2024-12-03 01:00:31.001258] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:26:18.757 [2024-12-03 01:00:31.092591] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:26:18.757 [2024-12-03 01:00:31.092794] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:18.757 [2024-12-03 01:00:31.092811] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:18.757 [2024-12-03 01:00:31.092823] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:26:18.757 [2024-12-03 01:00:31.092985] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:26:18.757 [2024-12-03 01:00:31.093127] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:26:18.757 [2024-12-03 01:00:31.093954] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:26:18.757 [2024-12-03 01:00:31.094007] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:26:19.325 01:00:31 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:26:19.325 01:00:31 -- common/autotest_common.sh@862 -- # return 0 00:26:19.325 01:00:31 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:26:19.325 01:00:31 -- common/autotest_common.sh@728 -- # xtrace_disable 00:26:19.325 01:00:31 -- common/autotest_common.sh@10 -- # set +x 00:26:19.325 01:00:31 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:19.325 01:00:31 -- target/abort_qd_sizes.sh@76 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini || :; clean_kernel_target' SIGINT SIGTERM EXIT 00:26:19.325 01:00:31 -- target/abort_qd_sizes.sh@78 -- # mapfile -t nvmes 00:26:19.325 01:00:31 -- target/abort_qd_sizes.sh@78 -- # nvme_in_userspace 00:26:19.325 01:00:31 -- scripts/common.sh@311 -- # local bdf bdfs 00:26:19.325 01:00:31 -- scripts/common.sh@312 -- # local nvmes 00:26:19.325 01:00:31 -- scripts/common.sh@314 -- # [[ -n '' ]] 00:26:19.325 01:00:31 -- scripts/common.sh@317 -- # nvmes=($(iter_pci_class_code 01 08 02)) 00:26:19.325 01:00:31 -- scripts/common.sh@317 -- # iter_pci_class_code 01 08 02 00:26:19.325 01:00:31 -- scripts/common.sh@297 -- # local bdf= 00:26:19.325 01:00:31 -- scripts/common.sh@299 -- # iter_all_pci_class_code 01 08 02 00:26:19.325 01:00:31 -- scripts/common.sh@232 -- # local class 00:26:19.325 01:00:31 -- scripts/common.sh@233 -- # local subclass 00:26:19.325 01:00:31 -- scripts/common.sh@234 -- # local progif 00:26:19.325 01:00:31 -- scripts/common.sh@235 -- # printf %02x 1 00:26:19.325 01:00:31 -- scripts/common.sh@235 -- # class=01 00:26:19.325 01:00:31 -- scripts/common.sh@236 -- # printf %02x 8 00:26:19.325 01:00:31 -- scripts/common.sh@236 -- # subclass=08 00:26:19.325 01:00:31 -- scripts/common.sh@237 -- # printf %02x 2 00:26:19.325 01:00:31 -- scripts/common.sh@237 -- # progif=02 00:26:19.325 01:00:31 -- scripts/common.sh@239 -- # hash lspci 00:26:19.325 01:00:31 -- scripts/common.sh@240 -- # '[' 02 '!=' 00 ']' 00:26:19.585 01:00:31 -- scripts/common.sh@242 -- # grep -i -- -p02 00:26:19.585 01:00:31 -- scripts/common.sh@241 -- # lspci -mm -n -D 00:26:19.585 01:00:31 -- 
scripts/common.sh@244 -- # tr -d '"' 00:26:19.585 01:00:31 -- scripts/common.sh@243 -- # awk -v 'cc="0108"' -F ' ' '{if (cc ~ $2) print $1}' 00:26:19.585 01:00:31 -- scripts/common.sh@299 -- # for bdf in $(iter_all_pci_class_code "$@") 00:26:19.585 01:00:31 -- scripts/common.sh@300 -- # pci_can_use 0000:00:06.0 00:26:19.585 01:00:31 -- scripts/common.sh@15 -- # local i 00:26:19.585 01:00:31 -- scripts/common.sh@18 -- # [[ =~ 0000:00:06.0 ]] 00:26:19.585 01:00:31 -- scripts/common.sh@22 -- # [[ -z '' ]] 00:26:19.585 01:00:31 -- scripts/common.sh@24 -- # return 0 00:26:19.585 01:00:31 -- scripts/common.sh@301 -- # echo 0000:00:06.0 00:26:19.585 01:00:31 -- scripts/common.sh@299 -- # for bdf in $(iter_all_pci_class_code "$@") 00:26:19.585 01:00:31 -- scripts/common.sh@300 -- # pci_can_use 0000:00:07.0 00:26:19.585 01:00:31 -- scripts/common.sh@15 -- # local i 00:26:19.585 01:00:31 -- scripts/common.sh@18 -- # [[ =~ 0000:00:07.0 ]] 00:26:19.585 01:00:31 -- scripts/common.sh@22 -- # [[ -z '' ]] 00:26:19.585 01:00:31 -- scripts/common.sh@24 -- # return 0 00:26:19.585 01:00:31 -- scripts/common.sh@301 -- # echo 0000:00:07.0 00:26:19.585 01:00:31 -- scripts/common.sh@320 -- # for bdf in "${nvmes[@]}" 00:26:19.585 01:00:31 -- scripts/common.sh@321 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:06.0 ]] 00:26:19.585 01:00:31 -- scripts/common.sh@322 -- # uname -s 00:26:19.585 01:00:31 -- scripts/common.sh@322 -- # [[ Linux == FreeBSD ]] 00:26:19.585 01:00:31 -- scripts/common.sh@325 -- # bdfs+=("$bdf") 00:26:19.585 01:00:31 -- scripts/common.sh@320 -- # for bdf in "${nvmes[@]}" 00:26:19.585 01:00:31 -- scripts/common.sh@321 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:07.0 ]] 00:26:19.585 01:00:31 -- scripts/common.sh@322 -- # uname -s 00:26:19.585 01:00:31 -- scripts/common.sh@322 -- # [[ Linux == FreeBSD ]] 00:26:19.585 01:00:31 -- scripts/common.sh@325 -- # bdfs+=("$bdf") 00:26:19.585 01:00:31 -- scripts/common.sh@327 -- # (( 2 )) 00:26:19.585 01:00:31 -- scripts/common.sh@328 -- # printf '%s\n' 0000:00:06.0 0000:00:07.0 00:26:19.585 01:00:31 -- target/abort_qd_sizes.sh@79 -- # (( 2 > 0 )) 00:26:19.585 01:00:31 -- target/abort_qd_sizes.sh@81 -- # nvme=0000:00:06.0 00:26:19.585 01:00:31 -- target/abort_qd_sizes.sh@83 -- # run_test spdk_target_abort spdk_target 00:26:19.585 01:00:31 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:26:19.585 01:00:31 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:26:19.585 01:00:31 -- common/autotest_common.sh@10 -- # set +x 00:26:19.585 ************************************ 00:26:19.585 START TEST spdk_target_abort 00:26:19.585 ************************************ 00:26:19.585 01:00:31 -- common/autotest_common.sh@1114 -- # spdk_target 00:26:19.585 01:00:31 -- target/abort_qd_sizes.sh@43 -- # local name=spdk_target 00:26:19.585 01:00:31 -- target/abort_qd_sizes.sh@44 -- # local subnqn=nqn.2016-06.io.spdk:spdk_target 00:26:19.585 01:00:31 -- target/abort_qd_sizes.sh@46 -- # rpc_cmd bdev_nvme_attach_controller -t pcie -a 0000:00:06.0 -b spdk_target 00:26:19.585 01:00:31 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:19.585 01:00:31 -- common/autotest_common.sh@10 -- # set +x 00:26:19.585 spdk_targetn1 00:26:19.585 01:00:31 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:19.585 01:00:31 -- target/abort_qd_sizes.sh@48 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:26:19.585 01:00:31 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:19.585 01:00:31 -- common/autotest_common.sh@10 -- # set +x 00:26:19.585 [2024-12-03 
01:00:31.961759] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:19.585 01:00:31 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:19.585 01:00:31 -- target/abort_qd_sizes.sh@49 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:spdk_target -a -s SPDKISFASTANDAWESOME 00:26:19.585 01:00:31 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:19.585 01:00:31 -- common/autotest_common.sh@10 -- # set +x 00:26:19.586 01:00:31 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:19.586 01:00:31 -- target/abort_qd_sizes.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:spdk_target spdk_targetn1 00:26:19.586 01:00:31 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:19.586 01:00:31 -- common/autotest_common.sh@10 -- # set +x 00:26:19.586 01:00:31 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:19.586 01:00:31 -- target/abort_qd_sizes.sh@51 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:spdk_target -t tcp -a 10.0.0.2 -s 4420 00:26:19.586 01:00:31 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:19.586 01:00:31 -- common/autotest_common.sh@10 -- # set +x 00:26:19.586 [2024-12-03 01:00:31.989987] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:19.586 01:00:31 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:19.586 01:00:31 -- target/abort_qd_sizes.sh@53 -- # rabort tcp IPv4 10.0.0.2 4420 nqn.2016-06.io.spdk:spdk_target 00:26:19.586 01:00:31 -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:26:19.586 01:00:31 -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:26:19.586 01:00:31 -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.2 00:26:19.586 01:00:31 -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:26:19.586 01:00:31 -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:spdk_target 00:26:19.586 01:00:31 -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:26:19.586 01:00:31 -- target/abort_qd_sizes.sh@24 -- # local target r 00:26:19.586 01:00:31 -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:26:19.586 01:00:31 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:26:19.586 01:00:31 -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:26:19.586 01:00:31 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:26:19.586 01:00:31 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:26:19.586 01:00:31 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:26:19.586 01:00:31 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2' 00:26:19.586 01:00:31 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:26:19.586 01:00:31 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:26:19.586 01:00:31 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:26:19.586 01:00:31 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:spdk_target' 00:26:19.586 01:00:31 -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:26:19.586 01:00:31 -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:spdk_target' 00:26:22.874 Initializing NVMe Controllers 00:26:22.874 Attached to 
NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:spdk_target 00:26:22.874 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:spdk_target) NSID 1 with lcore 0 00:26:22.874 Initialization complete. Launching workers. 00:26:22.874 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:spdk_target) NSID 1 I/O completed: 10394, failed: 0 00:26:22.874 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:spdk_target) abort submitted 1128, failed to submit 9266 00:26:22.874 success 746, unsuccess 382, failed 0 00:26:22.874 01:00:35 -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:26:22.874 01:00:35 -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:spdk_target' 00:26:26.159 Initializing NVMe Controllers 00:26:26.159 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:spdk_target 00:26:26.159 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:spdk_target) NSID 1 with lcore 0 00:26:26.159 Initialization complete. Launching workers. 00:26:26.159 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:spdk_target) NSID 1 I/O completed: 6023, failed: 0 00:26:26.159 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:spdk_target) abort submitted 1244, failed to submit 4779 00:26:26.159 success 265, unsuccess 979, failed 0 00:26:26.159 01:00:38 -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:26:26.159 01:00:38 -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:spdk_target' 00:26:29.447 Initializing NVMe Controllers 00:26:29.447 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:spdk_target 00:26:29.447 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:spdk_target) NSID 1 with lcore 0 00:26:29.447 Initialization complete. Launching workers. 
00:26:29.447 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:spdk_target) NSID 1 I/O completed: 30151, failed: 0 00:26:29.447 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:spdk_target) abort submitted 2671, failed to submit 27480 00:26:29.447 success 362, unsuccess 2309, failed 0 00:26:29.447 01:00:41 -- target/abort_qd_sizes.sh@55 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:spdk_target 00:26:29.447 01:00:41 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:29.447 01:00:41 -- common/autotest_common.sh@10 -- # set +x 00:26:29.447 01:00:41 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:29.447 01:00:41 -- target/abort_qd_sizes.sh@56 -- # rpc_cmd bdev_nvme_detach_controller spdk_target 00:26:29.447 01:00:41 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:29.447 01:00:41 -- common/autotest_common.sh@10 -- # set +x 00:26:29.705 01:00:42 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:29.705 01:00:42 -- target/abort_qd_sizes.sh@62 -- # killprocess 103558 00:26:29.705 01:00:42 -- common/autotest_common.sh@936 -- # '[' -z 103558 ']' 00:26:29.705 01:00:42 -- common/autotest_common.sh@940 -- # kill -0 103558 00:26:29.705 01:00:42 -- common/autotest_common.sh@941 -- # uname 00:26:29.705 01:00:42 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:26:29.705 01:00:42 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 103558 00:26:29.963 killing process with pid 103558 00:26:29.963 01:00:42 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:26:29.963 01:00:42 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:26:29.963 01:00:42 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 103558' 00:26:29.963 01:00:42 -- common/autotest_common.sh@955 -- # kill 103558 00:26:29.963 01:00:42 -- common/autotest_common.sh@960 -- # wait 103558 00:26:30.222 00:26:30.222 real 0m10.638s 00:26:30.222 user 0m43.124s 00:26:30.222 sys 0m1.786s 00:26:30.222 01:00:42 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:26:30.222 01:00:42 -- common/autotest_common.sh@10 -- # set +x 00:26:30.222 ************************************ 00:26:30.222 END TEST spdk_target_abort 00:26:30.222 ************************************ 00:26:30.222 01:00:42 -- target/abort_qd_sizes.sh@84 -- # run_test kernel_target_abort kernel_target 00:26:30.222 01:00:42 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:26:30.222 01:00:42 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:26:30.222 01:00:42 -- common/autotest_common.sh@10 -- # set +x 00:26:30.222 ************************************ 00:26:30.222 START TEST kernel_target_abort 00:26:30.222 ************************************ 00:26:30.222 01:00:42 -- common/autotest_common.sh@1114 -- # kernel_target 00:26:30.222 01:00:42 -- target/abort_qd_sizes.sh@66 -- # local name=kernel_target 00:26:30.222 01:00:42 -- target/abort_qd_sizes.sh@68 -- # configure_kernel_target kernel_target 00:26:30.222 01:00:42 -- nvmf/common.sh@621 -- # kernel_name=kernel_target 00:26:30.222 01:00:42 -- nvmf/common.sh@622 -- # nvmet=/sys/kernel/config/nvmet 00:26:30.222 01:00:42 -- nvmf/common.sh@623 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/kernel_target 00:26:30.222 01:00:42 -- nvmf/common.sh@624 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/kernel_target/namespaces/1 00:26:30.222 01:00:42 -- nvmf/common.sh@625 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:26:30.222 01:00:42 -- nvmf/common.sh@627 -- # local block nvme 00:26:30.222 01:00:42 -- 
nvmf/common.sh@629 -- # [[ ! -e /sys/module/nvmet ]] 00:26:30.222 01:00:42 -- nvmf/common.sh@630 -- # modprobe nvmet 00:26:30.222 01:00:42 -- nvmf/common.sh@633 -- # [[ -e /sys/kernel/config/nvmet ]] 00:26:30.222 01:00:42 -- nvmf/common.sh@635 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:26:30.481 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:26:30.481 Waiting for block devices as requested 00:26:30.739 0000:00:06.0 (1b36 0010): uio_pci_generic -> nvme 00:26:30.739 0000:00:07.0 (1b36 0010): uio_pci_generic -> nvme 00:26:30.739 01:00:43 -- nvmf/common.sh@638 -- # for block in /sys/block/nvme* 00:26:30.739 01:00:43 -- nvmf/common.sh@639 -- # [[ -e /sys/block/nvme0n1 ]] 00:26:30.739 01:00:43 -- nvmf/common.sh@640 -- # block_in_use nvme0n1 00:26:30.739 01:00:43 -- scripts/common.sh@380 -- # local block=nvme0n1 pt 00:26:30.739 01:00:43 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n1 00:26:30.739 No valid GPT data, bailing 00:26:30.739 01:00:43 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:26:30.739 01:00:43 -- scripts/common.sh@393 -- # pt= 00:26:30.739 01:00:43 -- scripts/common.sh@394 -- # return 1 00:26:30.739 01:00:43 -- nvmf/common.sh@640 -- # nvme=/dev/nvme0n1 00:26:30.739 01:00:43 -- nvmf/common.sh@638 -- # for block in /sys/block/nvme* 00:26:30.739 01:00:43 -- nvmf/common.sh@639 -- # [[ -e /sys/block/nvme1n1 ]] 00:26:30.739 01:00:43 -- nvmf/common.sh@640 -- # block_in_use nvme1n1 00:26:30.739 01:00:43 -- scripts/common.sh@380 -- # local block=nvme1n1 pt 00:26:30.739 01:00:43 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n1 00:26:30.739 No valid GPT data, bailing 00:26:30.997 01:00:43 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:26:30.997 01:00:43 -- scripts/common.sh@393 -- # pt= 00:26:30.997 01:00:43 -- scripts/common.sh@394 -- # return 1 00:26:30.997 01:00:43 -- nvmf/common.sh@640 -- # nvme=/dev/nvme1n1 00:26:30.997 01:00:43 -- nvmf/common.sh@638 -- # for block in /sys/block/nvme* 00:26:30.997 01:00:43 -- nvmf/common.sh@639 -- # [[ -e /sys/block/nvme1n2 ]] 00:26:30.997 01:00:43 -- nvmf/common.sh@640 -- # block_in_use nvme1n2 00:26:30.997 01:00:43 -- scripts/common.sh@380 -- # local block=nvme1n2 pt 00:26:30.997 01:00:43 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n2 00:26:30.997 No valid GPT data, bailing 00:26:30.997 01:00:43 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme1n2 00:26:30.997 01:00:43 -- scripts/common.sh@393 -- # pt= 00:26:30.997 01:00:43 -- scripts/common.sh@394 -- # return 1 00:26:30.997 01:00:43 -- nvmf/common.sh@640 -- # nvme=/dev/nvme1n2 00:26:30.997 01:00:43 -- nvmf/common.sh@638 -- # for block in /sys/block/nvme* 00:26:30.997 01:00:43 -- nvmf/common.sh@639 -- # [[ -e /sys/block/nvme1n3 ]] 00:26:30.997 01:00:43 -- nvmf/common.sh@640 -- # block_in_use nvme1n3 00:26:30.997 01:00:43 -- scripts/common.sh@380 -- # local block=nvme1n3 pt 00:26:30.997 01:00:43 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n3 00:26:30.997 No valid GPT data, bailing 00:26:30.997 01:00:43 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme1n3 00:26:30.997 01:00:43 -- scripts/common.sh@393 -- # pt= 00:26:30.997 01:00:43 -- scripts/common.sh@394 -- # return 1 00:26:30.997 01:00:43 -- nvmf/common.sh@640 -- # nvme=/dev/nvme1n3 00:26:30.997 01:00:43 -- nvmf/common.sh@643 -- # [[ -b 
/dev/nvme1n3 ]] 00:26:30.997 01:00:43 -- nvmf/common.sh@645 -- # mkdir /sys/kernel/config/nvmet/subsystems/kernel_target 00:26:30.997 01:00:43 -- nvmf/common.sh@646 -- # mkdir /sys/kernel/config/nvmet/subsystems/kernel_target/namespaces/1 00:26:30.997 01:00:43 -- nvmf/common.sh@647 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:26:30.997 01:00:43 -- nvmf/common.sh@652 -- # echo SPDK-kernel_target 00:26:30.997 01:00:43 -- nvmf/common.sh@654 -- # echo 1 00:26:30.997 01:00:43 -- nvmf/common.sh@655 -- # echo /dev/nvme1n3 00:26:30.997 01:00:43 -- nvmf/common.sh@656 -- # echo 1 00:26:30.997 01:00:43 -- nvmf/common.sh@662 -- # echo 10.0.0.1 00:26:30.997 01:00:43 -- nvmf/common.sh@663 -- # echo tcp 00:26:30.997 01:00:43 -- nvmf/common.sh@664 -- # echo 4420 00:26:30.997 01:00:43 -- nvmf/common.sh@665 -- # echo ipv4 00:26:30.997 01:00:43 -- nvmf/common.sh@668 -- # ln -s /sys/kernel/config/nvmet/subsystems/kernel_target /sys/kernel/config/nvmet/ports/1/subsystems/ 00:26:30.997 01:00:43 -- nvmf/common.sh@671 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:15939434-fa82-47b6-ae7c-b62ba203cee8 --hostid=15939434-fa82-47b6-ae7c-b62ba203cee8 -a 10.0.0.1 -t tcp -s 4420 00:26:30.997 00:26:30.997 Discovery Log Number of Records 2, Generation counter 2 00:26:30.997 =====Discovery Log Entry 0====== 00:26:30.997 trtype: tcp 00:26:30.997 adrfam: ipv4 00:26:30.997 subtype: current discovery subsystem 00:26:30.997 treq: not specified, sq flow control disable supported 00:26:30.997 portid: 1 00:26:30.997 trsvcid: 4420 00:26:30.997 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:26:30.997 traddr: 10.0.0.1 00:26:30.997 eflags: none 00:26:30.997 sectype: none 00:26:30.997 =====Discovery Log Entry 1====== 00:26:30.997 trtype: tcp 00:26:30.997 adrfam: ipv4 00:26:30.997 subtype: nvme subsystem 00:26:30.997 treq: not specified, sq flow control disable supported 00:26:30.997 portid: 1 00:26:30.997 trsvcid: 4420 00:26:30.997 subnqn: kernel_target 00:26:30.997 traddr: 10.0.0.1 00:26:30.997 eflags: none 00:26:30.997 sectype: none 00:26:30.997 01:00:43 -- target/abort_qd_sizes.sh@69 -- # rabort tcp IPv4 10.0.0.1 4420 kernel_target 00:26:30.997 01:00:43 -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:26:30.997 01:00:43 -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:26:30.997 01:00:43 -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.1 00:26:30.997 01:00:43 -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:26:30.997 01:00:43 -- target/abort_qd_sizes.sh@21 -- # local subnqn=kernel_target 00:26:30.997 01:00:43 -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:26:30.997 01:00:43 -- target/abort_qd_sizes.sh@24 -- # local target r 00:26:30.997 01:00:43 -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:26:30.997 01:00:43 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:26:30.997 01:00:43 -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:26:30.997 01:00:43 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:26:30.997 01:00:43 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:26:30.997 01:00:43 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:26:30.997 01:00:43 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1' 00:26:30.997 01:00:43 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:26:30.997 01:00:43 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420' 
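For the kernel_target_abort path, the configure_kernel_target trace just above builds a Linux kernel NVMe/TCP target through nvmet configfs rather than through the SPDK target app. The xtrace shows the mkdir/echo/ln -s sequence but not the files being written to, so the sketch below fills those in with the standard nvmet configfs attribute names; treat the exact paths as an illustration under that assumption, not a literal replay of the script:

  # values taken from the trace: namespace backed by /dev/nvme1n3, listener on 10.0.0.1:4420/tcp
  cd /sys/kernel/config/nvmet
  mkdir -p subsystems/kernel_target/namespaces/1 ports/1
  echo 1            > subsystems/kernel_target/attr_allow_any_host
  echo /dev/nvme1n3 > subsystems/kernel_target/namespaces/1/device_path
  echo 1            > subsystems/kernel_target/namespaces/1/enable
  echo 10.0.0.1     > ports/1/addr_traddr
  echo tcp          > ports/1/addr_trtype
  echo 4420         > ports/1/addr_trsvcid
  echo ipv4         > ports/1/addr_adrfam
  ln -s /sys/kernel/config/nvmet/subsystems/kernel_target ports/1/subsystems/
  # the trace's 'echo SPDK-kernel_target' presumably sets the subsystem serial/model string
  nvme discover -a 10.0.0.1 -t tcp -s 4420   # should list kernel_target, as in the discovery log above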
00:26:30.997 01:00:43 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:26:30.997 01:00:43 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:kernel_target' 00:26:30.997 01:00:43 -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:26:30.997 01:00:43 -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:kernel_target' 00:26:34.308 Initializing NVMe Controllers 00:26:34.308 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: kernel_target 00:26:34.308 Associating TCP (addr:10.0.0.1 subnqn:kernel_target) NSID 1 with lcore 0 00:26:34.308 Initialization complete. Launching workers. 00:26:34.308 NS: TCP (addr:10.0.0.1 subnqn:kernel_target) NSID 1 I/O completed: 35310, failed: 0 00:26:34.308 CTRLR: TCP (addr:10.0.0.1 subnqn:kernel_target) abort submitted 35310, failed to submit 0 00:26:34.308 success 0, unsuccess 35310, failed 0 00:26:34.308 01:00:46 -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:26:34.308 01:00:46 -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:kernel_target' 00:26:37.593 Initializing NVMe Controllers 00:26:37.593 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: kernel_target 00:26:37.593 Associating TCP (addr:10.0.0.1 subnqn:kernel_target) NSID 1 with lcore 0 00:26:37.593 Initialization complete. Launching workers. 00:26:37.593 NS: TCP (addr:10.0.0.1 subnqn:kernel_target) NSID 1 I/O completed: 84431, failed: 0 00:26:37.593 CTRLR: TCP (addr:10.0.0.1 subnqn:kernel_target) abort submitted 36099, failed to submit 48332 00:26:37.593 success 0, unsuccess 36099, failed 0 00:26:37.593 01:00:49 -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:26:37.593 01:00:49 -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:kernel_target' 00:26:40.897 Initializing NVMe Controllers 00:26:40.897 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: kernel_target 00:26:40.897 Associating TCP (addr:10.0.0.1 subnqn:kernel_target) NSID 1 with lcore 0 00:26:40.897 Initialization complete. Launching workers. 
00:26:40.897 NS: TCP (addr:10.0.0.1 subnqn:kernel_target) NSID 1 I/O completed: 80892, failed: 0 00:26:40.897 CTRLR: TCP (addr:10.0.0.1 subnqn:kernel_target) abort submitted 20200, failed to submit 60692 00:26:40.897 success 0, unsuccess 20200, failed 0 00:26:40.897 01:00:52 -- target/abort_qd_sizes.sh@70 -- # clean_kernel_target 00:26:40.897 01:00:52 -- nvmf/common.sh@675 -- # [[ -e /sys/kernel/config/nvmet/subsystems/kernel_target ]] 00:26:40.897 01:00:52 -- nvmf/common.sh@677 -- # echo 0 00:26:40.897 01:00:52 -- nvmf/common.sh@679 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/kernel_target 00:26:40.897 01:00:52 -- nvmf/common.sh@680 -- # rmdir /sys/kernel/config/nvmet/subsystems/kernel_target/namespaces/1 00:26:40.897 01:00:52 -- nvmf/common.sh@681 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:26:40.897 01:00:52 -- nvmf/common.sh@682 -- # rmdir /sys/kernel/config/nvmet/subsystems/kernel_target 00:26:40.897 01:00:52 -- nvmf/common.sh@684 -- # modules=(/sys/module/nvmet/holders/*) 00:26:40.897 01:00:53 -- nvmf/common.sh@686 -- # modprobe -r nvmet_tcp nvmet 00:26:40.897 00:26:40.897 real 0m10.468s 00:26:40.897 user 0m5.604s 00:26:40.897 sys 0m2.163s 00:26:40.897 01:00:53 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:26:40.897 01:00:53 -- common/autotest_common.sh@10 -- # set +x 00:26:40.897 ************************************ 00:26:40.897 END TEST kernel_target_abort 00:26:40.897 ************************************ 00:26:40.897 01:00:53 -- target/abort_qd_sizes.sh@86 -- # trap - SIGINT SIGTERM EXIT 00:26:40.897 01:00:53 -- target/abort_qd_sizes.sh@87 -- # nvmftestfini 00:26:40.897 01:00:53 -- nvmf/common.sh@476 -- # nvmfcleanup 00:26:40.897 01:00:53 -- nvmf/common.sh@116 -- # sync 00:26:40.897 01:00:53 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:26:40.897 01:00:53 -- nvmf/common.sh@119 -- # set +e 00:26:40.897 01:00:53 -- nvmf/common.sh@120 -- # for i in {1..20} 00:26:40.897 01:00:53 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:26:40.897 rmmod nvme_tcp 00:26:40.897 rmmod nvme_fabrics 00:26:40.897 rmmod nvme_keyring 00:26:40.897 01:00:53 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:26:40.897 01:00:53 -- nvmf/common.sh@123 -- # set -e 00:26:40.897 01:00:53 -- nvmf/common.sh@124 -- # return 0 00:26:40.897 01:00:53 -- nvmf/common.sh@477 -- # '[' -n 103558 ']' 00:26:40.897 01:00:53 -- nvmf/common.sh@478 -- # killprocess 103558 00:26:40.897 01:00:53 -- common/autotest_common.sh@936 -- # '[' -z 103558 ']' 00:26:40.897 01:00:53 -- common/autotest_common.sh@940 -- # kill -0 103558 00:26:40.897 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 940: kill: (103558) - No such process 00:26:40.897 Process with pid 103558 is not found 00:26:40.897 01:00:53 -- common/autotest_common.sh@963 -- # echo 'Process with pid 103558 is not found' 00:26:40.897 01:00:53 -- nvmf/common.sh@480 -- # '[' iso == iso ']' 00:26:40.897 01:00:53 -- nvmf/common.sh@481 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:26:41.465 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:26:41.465 0000:00:06.0 (1b36 0010): Already using the nvme driver 00:26:41.723 0000:00:07.0 (1b36 0010): Already using the nvme driver 00:26:41.723 01:00:54 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:26:41.723 01:00:54 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:26:41.723 01:00:54 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:26:41.723 01:00:54 -- nvmf/common.sh@277 -- # 
remove_spdk_ns 00:26:41.723 01:00:54 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:41.723 01:00:54 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:26:41.723 01:00:54 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:41.723 01:00:54 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:26:41.723 00:26:41.723 real 0m24.669s 00:26:41.723 user 0m50.162s 00:26:41.723 sys 0m5.321s 00:26:41.723 01:00:54 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:26:41.723 01:00:54 -- common/autotest_common.sh@10 -- # set +x 00:26:41.723 ************************************ 00:26:41.723 END TEST nvmf_abort_qd_sizes 00:26:41.723 ************************************ 00:26:41.723 01:00:54 -- spdk/autotest.sh@298 -- # '[' 0 -eq 1 ']' 00:26:41.723 01:00:54 -- spdk/autotest.sh@302 -- # '[' 0 -eq 1 ']' 00:26:41.723 01:00:54 -- spdk/autotest.sh@306 -- # '[' 0 -eq 1 ']' 00:26:41.723 01:00:54 -- spdk/autotest.sh@311 -- # '[' 0 -eq 1 ']' 00:26:41.723 01:00:54 -- spdk/autotest.sh@320 -- # '[' 0 -eq 1 ']' 00:26:41.723 01:00:54 -- spdk/autotest.sh@325 -- # '[' 0 -eq 1 ']' 00:26:41.723 01:00:54 -- spdk/autotest.sh@329 -- # '[' 0 -eq 1 ']' 00:26:41.723 01:00:54 -- spdk/autotest.sh@333 -- # '[' 0 -eq 1 ']' 00:26:41.723 01:00:54 -- spdk/autotest.sh@337 -- # '[' 0 -eq 1 ']' 00:26:41.723 01:00:54 -- spdk/autotest.sh@342 -- # '[' 0 -eq 1 ']' 00:26:41.723 01:00:54 -- spdk/autotest.sh@346 -- # '[' 0 -eq 1 ']' 00:26:41.723 01:00:54 -- spdk/autotest.sh@353 -- # [[ 0 -eq 1 ]] 00:26:41.723 01:00:54 -- spdk/autotest.sh@357 -- # [[ 0 -eq 1 ]] 00:26:41.723 01:00:54 -- spdk/autotest.sh@361 -- # [[ 0 -eq 1 ]] 00:26:41.723 01:00:54 -- spdk/autotest.sh@365 -- # [[ 0 -eq 1 ]] 00:26:41.723 01:00:54 -- spdk/autotest.sh@370 -- # trap - SIGINT SIGTERM EXIT 00:26:41.723 01:00:54 -- spdk/autotest.sh@372 -- # timing_enter post_cleanup 00:26:41.723 01:00:54 -- common/autotest_common.sh@722 -- # xtrace_disable 00:26:41.723 01:00:54 -- common/autotest_common.sh@10 -- # set +x 00:26:41.723 01:00:54 -- spdk/autotest.sh@373 -- # autotest_cleanup 00:26:41.723 01:00:54 -- common/autotest_common.sh@1381 -- # local autotest_es=0 00:26:41.723 01:00:54 -- common/autotest_common.sh@1382 -- # xtrace_disable 00:26:41.723 01:00:54 -- common/autotest_common.sh@10 -- # set +x 00:26:43.626 INFO: APP EXITING 00:26:43.626 INFO: killing all VMs 00:26:43.626 INFO: killing vhost app 00:26:43.626 INFO: EXIT DONE 00:26:44.194 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:26:44.453 0000:00:06.0 (1b36 0010): Already using the nvme driver 00:26:44.453 0000:00:07.0 (1b36 0010): Already using the nvme driver 00:26:45.020 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:26:45.279 Cleaning 00:26:45.279 Removing: /var/run/dpdk/spdk0/config 00:26:45.279 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:26:45.279 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:26:45.279 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:26:45.279 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:26:45.279 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:26:45.279 Removing: /var/run/dpdk/spdk0/hugepage_info 00:26:45.279 Removing: /var/run/dpdk/spdk1/config 00:26:45.280 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-0 00:26:45.280 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-1 00:26:45.280 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-2 
00:26:45.280 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-3 00:26:45.280 Removing: /var/run/dpdk/spdk1/fbarray_memzone 00:26:45.280 Removing: /var/run/dpdk/spdk1/hugepage_info 00:26:45.280 Removing: /var/run/dpdk/spdk2/config 00:26:45.280 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-0 00:26:45.280 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-1 00:26:45.280 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-2 00:26:45.280 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-3 00:26:45.280 Removing: /var/run/dpdk/spdk2/fbarray_memzone 00:26:45.280 Removing: /var/run/dpdk/spdk2/hugepage_info 00:26:45.280 Removing: /var/run/dpdk/spdk3/config 00:26:45.280 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-0 00:26:45.280 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-1 00:26:45.280 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-2 00:26:45.280 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-3 00:26:45.280 Removing: /var/run/dpdk/spdk3/fbarray_memzone 00:26:45.280 Removing: /var/run/dpdk/spdk3/hugepage_info 00:26:45.280 Removing: /var/run/dpdk/spdk4/config 00:26:45.280 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-0 00:26:45.280 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-1 00:26:45.280 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-2 00:26:45.280 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-3 00:26:45.280 Removing: /var/run/dpdk/spdk4/fbarray_memzone 00:26:45.280 Removing: /var/run/dpdk/spdk4/hugepage_info 00:26:45.280 Removing: /dev/shm/nvmf_trace.0 00:26:45.280 Removing: /dev/shm/spdk_tgt_trace.pid67583 00:26:45.280 Removing: /var/run/dpdk/spdk0 00:26:45.280 Removing: /var/run/dpdk/spdk1 00:26:45.280 Removing: /var/run/dpdk/spdk2 00:26:45.280 Removing: /var/run/dpdk/spdk3 00:26:45.280 Removing: /var/run/dpdk/spdk4 00:26:45.280 Removing: /var/run/dpdk/spdk_pid100516 00:26:45.280 Removing: /var/run/dpdk/spdk_pid100717 00:26:45.280 Removing: /var/run/dpdk/spdk_pid101008 00:26:45.280 Removing: /var/run/dpdk/spdk_pid101317 00:26:45.280 Removing: /var/run/dpdk/spdk_pid101888 00:26:45.280 Removing: /var/run/dpdk/spdk_pid101893 00:26:45.280 Removing: /var/run/dpdk/spdk_pid102265 00:26:45.280 Removing: /var/run/dpdk/spdk_pid102427 00:26:45.280 Removing: /var/run/dpdk/spdk_pid102585 00:26:45.280 Removing: /var/run/dpdk/spdk_pid102682 00:26:45.280 Removing: /var/run/dpdk/spdk_pid102842 00:26:45.280 Removing: /var/run/dpdk/spdk_pid102951 00:26:45.280 Removing: /var/run/dpdk/spdk_pid103633 00:26:45.280 Removing: /var/run/dpdk/spdk_pid103667 00:26:45.280 Removing: /var/run/dpdk/spdk_pid103699 00:26:45.280 Removing: /var/run/dpdk/spdk_pid103948 00:26:45.280 Removing: /var/run/dpdk/spdk_pid103982 00:26:45.280 Removing: /var/run/dpdk/spdk_pid104017 00:26:45.280 Removing: /var/run/dpdk/spdk_pid67427 00:26:45.280 Removing: /var/run/dpdk/spdk_pid67583 00:26:45.280 Removing: /var/run/dpdk/spdk_pid67906 00:26:45.280 Removing: /var/run/dpdk/spdk_pid68175 00:26:45.280 Removing: /var/run/dpdk/spdk_pid68366 00:26:45.280 Removing: /var/run/dpdk/spdk_pid68455 00:26:45.539 Removing: /var/run/dpdk/spdk_pid68554 00:26:45.539 Removing: /var/run/dpdk/spdk_pid68656 00:26:45.539 Removing: /var/run/dpdk/spdk_pid68689 00:26:45.539 Removing: /var/run/dpdk/spdk_pid68719 00:26:45.539 Removing: /var/run/dpdk/spdk_pid68793 00:26:45.539 Removing: /var/run/dpdk/spdk_pid68892 00:26:45.539 Removing: /var/run/dpdk/spdk_pid69523 00:26:45.539 Removing: /var/run/dpdk/spdk_pid69582 00:26:45.539 Removing: /var/run/dpdk/spdk_pid69651 00:26:45.539 Removing: 
/var/run/dpdk/spdk_pid69679 00:26:45.539 Removing: /var/run/dpdk/spdk_pid69758 00:26:45.539 Removing: /var/run/dpdk/spdk_pid69786 00:26:45.539 Removing: /var/run/dpdk/spdk_pid69865 00:26:45.539 Removing: /var/run/dpdk/spdk_pid69893 00:26:45.539 Removing: /var/run/dpdk/spdk_pid69950 00:26:45.539 Removing: /var/run/dpdk/spdk_pid69980 00:26:45.539 Removing: /var/run/dpdk/spdk_pid70026 00:26:45.539 Removing: /var/run/dpdk/spdk_pid70056 00:26:45.539 Removing: /var/run/dpdk/spdk_pid70215 00:26:45.539 Removing: /var/run/dpdk/spdk_pid70245 00:26:45.539 Removing: /var/run/dpdk/spdk_pid70334 00:26:45.539 Removing: /var/run/dpdk/spdk_pid70398 00:26:45.539 Removing: /var/run/dpdk/spdk_pid70428 00:26:45.539 Removing: /var/run/dpdk/spdk_pid70491 00:26:45.539 Removing: /var/run/dpdk/spdk_pid70506 00:26:45.539 Removing: /var/run/dpdk/spdk_pid70535 00:26:45.539 Removing: /var/run/dpdk/spdk_pid70560 00:26:45.539 Removing: /var/run/dpdk/spdk_pid70589 00:26:45.539 Removing: /var/run/dpdk/spdk_pid70609 00:26:45.539 Removing: /var/run/dpdk/spdk_pid70643 00:26:45.539 Removing: /var/run/dpdk/spdk_pid70657 00:26:45.539 Removing: /var/run/dpdk/spdk_pid70694 00:26:45.539 Removing: /var/run/dpdk/spdk_pid70713 00:26:45.539 Removing: /var/run/dpdk/spdk_pid70748 00:26:45.539 Removing: /var/run/dpdk/spdk_pid70767 00:26:45.539 Removing: /var/run/dpdk/spdk_pid70796 00:26:45.539 Removing: /var/run/dpdk/spdk_pid70816 00:26:45.539 Removing: /var/run/dpdk/spdk_pid70850 00:26:45.539 Removing: /var/run/dpdk/spdk_pid70865 00:26:45.539 Removing: /var/run/dpdk/spdk_pid70900 00:26:45.539 Removing: /var/run/dpdk/spdk_pid70919 00:26:45.539 Removing: /var/run/dpdk/spdk_pid70954 00:26:45.539 Removing: /var/run/dpdk/spdk_pid70968 00:26:45.539 Removing: /var/run/dpdk/spdk_pid71002 00:26:45.539 Removing: /var/run/dpdk/spdk_pid71022 00:26:45.539 Removing: /var/run/dpdk/spdk_pid71055 00:26:45.539 Removing: /var/run/dpdk/spdk_pid71078 00:26:45.539 Removing: /var/run/dpdk/spdk_pid71107 00:26:45.539 Removing: /var/run/dpdk/spdk_pid71131 00:26:45.539 Removing: /var/run/dpdk/spdk_pid71161 00:26:45.539 Removing: /var/run/dpdk/spdk_pid71175 00:26:45.539 Removing: /var/run/dpdk/spdk_pid71215 00:26:45.539 Removing: /var/run/dpdk/spdk_pid71229 00:26:45.539 Removing: /var/run/dpdk/spdk_pid71264 00:26:45.539 Removing: /var/run/dpdk/spdk_pid71283 00:26:45.539 Removing: /var/run/dpdk/spdk_pid71312 00:26:45.539 Removing: /var/run/dpdk/spdk_pid71335 00:26:45.539 Removing: /var/run/dpdk/spdk_pid71372 00:26:45.539 Removing: /var/run/dpdk/spdk_pid71395 00:26:45.539 Removing: /var/run/dpdk/spdk_pid71432 00:26:45.539 Removing: /var/run/dpdk/spdk_pid71452 00:26:45.539 Removing: /var/run/dpdk/spdk_pid71486 00:26:45.539 Removing: /var/run/dpdk/spdk_pid71506 00:26:45.539 Removing: /var/run/dpdk/spdk_pid71541 00:26:45.539 Removing: /var/run/dpdk/spdk_pid71613 00:26:45.539 Removing: /var/run/dpdk/spdk_pid71716 00:26:45.539 Removing: /var/run/dpdk/spdk_pid72160 00:26:45.539 Removing: /var/run/dpdk/spdk_pid79137 00:26:45.539 Removing: /var/run/dpdk/spdk_pid79489 00:26:45.539 Removing: /var/run/dpdk/spdk_pid81929 00:26:45.539 Removing: /var/run/dpdk/spdk_pid82322 00:26:45.539 Removing: /var/run/dpdk/spdk_pid82594 00:26:45.539 Removing: /var/run/dpdk/spdk_pid82640 00:26:45.539 Removing: /var/run/dpdk/spdk_pid82951 00:26:45.539 Removing: /var/run/dpdk/spdk_pid83002 00:26:45.539 Removing: /var/run/dpdk/spdk_pid83393 00:26:45.798 Removing: /var/run/dpdk/spdk_pid83924 00:26:45.798 Removing: /var/run/dpdk/spdk_pid84357 00:26:45.798 Removing: /var/run/dpdk/spdk_pid85329 
00:26:45.798 Removing: /var/run/dpdk/spdk_pid86321 00:26:45.798 Removing: /var/run/dpdk/spdk_pid86438 00:26:45.798 Removing: /var/run/dpdk/spdk_pid86506 00:26:45.798 Removing: /var/run/dpdk/spdk_pid87992 00:26:45.798 Removing: /var/run/dpdk/spdk_pid88241 00:26:45.798 Removing: /var/run/dpdk/spdk_pid88667 00:26:45.798 Removing: /var/run/dpdk/spdk_pid88779 00:26:45.798 Removing: /var/run/dpdk/spdk_pid88926 00:26:45.798 Removing: /var/run/dpdk/spdk_pid88972 00:26:45.798 Removing: /var/run/dpdk/spdk_pid89017 00:26:45.798 Removing: /var/run/dpdk/spdk_pid89063 00:26:45.798 Removing: /var/run/dpdk/spdk_pid89225 00:26:45.798 Removing: /var/run/dpdk/spdk_pid89373 00:26:45.798 Removing: /var/run/dpdk/spdk_pid89637 00:26:45.798 Removing: /var/run/dpdk/spdk_pid89760 00:26:45.798 Removing: /var/run/dpdk/spdk_pid90175 00:26:45.798 Removing: /var/run/dpdk/spdk_pid90561 00:26:45.798 Removing: /var/run/dpdk/spdk_pid90568 00:26:45.798 Removing: /var/run/dpdk/spdk_pid92834 00:26:45.798 Removing: /var/run/dpdk/spdk_pid93145 00:26:45.798 Removing: /var/run/dpdk/spdk_pid93659 00:26:45.798 Removing: /var/run/dpdk/spdk_pid93661 00:26:45.799 Removing: /var/run/dpdk/spdk_pid94010 00:26:45.799 Removing: /var/run/dpdk/spdk_pid94024 00:26:45.799 Removing: /var/run/dpdk/spdk_pid94044 00:26:45.799 Removing: /var/run/dpdk/spdk_pid94073 00:26:45.799 Removing: /var/run/dpdk/spdk_pid94079 00:26:45.799 Removing: /var/run/dpdk/spdk_pid94226 00:26:45.799 Removing: /var/run/dpdk/spdk_pid94232 00:26:45.799 Removing: /var/run/dpdk/spdk_pid94336 00:26:45.799 Removing: /var/run/dpdk/spdk_pid94338 00:26:45.799 Removing: /var/run/dpdk/spdk_pid94446 00:26:45.799 Removing: /var/run/dpdk/spdk_pid94454 00:26:45.799 Removing: /var/run/dpdk/spdk_pid94941 00:26:45.799 Removing: /var/run/dpdk/spdk_pid94984 00:26:45.799 Removing: /var/run/dpdk/spdk_pid95141 00:26:45.799 Removing: /var/run/dpdk/spdk_pid95262 00:26:45.799 Removing: /var/run/dpdk/spdk_pid95664 00:26:45.799 Removing: /var/run/dpdk/spdk_pid95916 00:26:45.799 Removing: /var/run/dpdk/spdk_pid96419 00:26:45.799 Removing: /var/run/dpdk/spdk_pid96984 00:26:45.799 Removing: /var/run/dpdk/spdk_pid97455 00:26:45.799 Removing: /var/run/dpdk/spdk_pid97551 00:26:45.799 Removing: /var/run/dpdk/spdk_pid97622 00:26:45.799 Removing: /var/run/dpdk/spdk_pid97700 00:26:45.799 Removing: /var/run/dpdk/spdk_pid97838 00:26:45.799 Removing: /var/run/dpdk/spdk_pid97929 00:26:45.799 Removing: /var/run/dpdk/spdk_pid98019 00:26:45.799 Removing: /var/run/dpdk/spdk_pid98105 00:26:45.799 Removing: /var/run/dpdk/spdk_pid98439 00:26:45.799 Removing: /var/run/dpdk/spdk_pid99149 00:26:45.799 Clean 00:26:46.058 killing process with pid 61814 00:26:46.058 killing process with pid 61817 00:26:46.058 01:00:58 -- common/autotest_common.sh@1446 -- # return 0 00:26:46.058 01:00:58 -- spdk/autotest.sh@374 -- # timing_exit post_cleanup 00:26:46.058 01:00:58 -- common/autotest_common.sh@728 -- # xtrace_disable 00:26:46.058 01:00:58 -- common/autotest_common.sh@10 -- # set +x 00:26:46.058 01:00:58 -- spdk/autotest.sh@376 -- # timing_exit autotest 00:26:46.058 01:00:58 -- common/autotest_common.sh@728 -- # xtrace_disable 00:26:46.058 01:00:58 -- common/autotest_common.sh@10 -- # set +x 00:26:46.058 01:00:58 -- spdk/autotest.sh@377 -- # chmod a+r /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:26:46.058 01:00:58 -- spdk/autotest.sh@379 -- # [[ -f /home/vagrant/spdk_repo/spdk/../output/udev.log ]] 00:26:46.058 01:00:58 -- spdk/autotest.sh@379 -- # rm -f /home/vagrant/spdk_repo/spdk/../output/udev.log 00:26:46.058 01:00:58 
-- spdk/autotest.sh@381 -- # [[ y == y ]] 00:26:46.058 01:00:58 -- spdk/autotest.sh@383 -- # hostname 00:26:46.058 01:00:58 -- spdk/autotest.sh@383 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -d /home/vagrant/spdk_repo/spdk -t fedora39-cloud-1721788873-2326 -o /home/vagrant/spdk_repo/spdk/../output/cov_test.info 00:26:46.317 geninfo: WARNING: invalid characters removed from testname! 00:27:08.248 01:01:19 -- spdk/autotest.sh@384 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -a /home/vagrant/spdk_repo/spdk/../output/cov_base.info -a /home/vagrant/spdk_repo/spdk/../output/cov_test.info -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:27:10.150 01:01:22 -- spdk/autotest.sh@385 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/dpdk/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:27:12.683 01:01:24 -- spdk/autotest.sh@389 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info --ignore-errors unused,unused '/usr/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:27:14.586 01:01:27 -- spdk/autotest.sh@390 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/examples/vmd/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:27:17.119 01:01:29 -- spdk/autotest.sh@391 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:27:19.021 01:01:31 -- spdk/autotest.sh@392 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:27:21.550 01:01:33 -- spdk/autotest.sh@393 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR 00:27:21.550 01:01:33 -- common/autotest_common.sh@1689 -- $ [[ y == y ]] 00:27:21.550 01:01:33 -- common/autotest_common.sh@1690 -- $ lcov --version 00:27:21.550 01:01:33 -- common/autotest_common.sh@1690 -- $ awk '{print $NF}' 00:27:21.550 01:01:33 -- common/autotest_common.sh@1690 -- $ lt 1.15 2 00:27:21.550 01:01:33 -- scripts/common.sh@372 -- $ cmp_versions 1.15 '<' 2 00:27:21.550 01:01:33 -- scripts/common.sh@332 -- $ local ver1 ver1_l 
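For reference, the coverage post-processing that the lcov invocations above perform can be reproduced outside the job roughly as follows. This is a minimal sketch, not the repo's autotest.sh: the OUT and REPO variables, the LCOV wrapper string and the filter loop are illustrative stand-ins, and only options actually visible in the trace (-c, --no-external, -d, -t, -a, -r and the --rc coverage switches) are used.

  #!/usr/bin/env bash
  # Sketch of the lcov capture/merge/filter sequence traced above (placeholder paths, not the job's).
  OUT=/path/to/output     # hypothetical; the job writes to spdk_repo/spdk/../output
  REPO=/path/to/spdk      # hypothetical SPDK checkout
  LCOV="lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 -q"

  # capture counters for the repo itself, tagging the tracefile with the host name
  $LCOV -c --no-external -d "$REPO" -t "$(hostname)" -o "$OUT/cov_test.info"

  # merge the pre-test baseline with the test capture
  $LCOV -a "$OUT/cov_base.info" -a "$OUT/cov_test.info" -o "$OUT/cov_total.info"

  # strip bundled DPDK, system headers and sample apps from the combined report
  for pat in '*/dpdk/*' '/usr/*' '*/examples/vmd/*' '*/app/spdk_lspci/*' '*/app/spdk_top/*'; do
      $LCOV -r "$OUT/cov_total.info" "$pat" -o "$OUT/cov_total.info"
  done

In the job itself these removals run as separate lcov -r passes, with --ignore-errors added for the '/usr/*' pattern; the loop above merely condenses the same sequence.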
00:27:21.550 01:01:33 -- scripts/common.sh@333 -- $ local ver2 ver2_l 00:27:21.550 01:01:33 -- scripts/common.sh@335 -- $ IFS=.-: 00:27:21.550 01:01:33 -- scripts/common.sh@335 -- $ read -ra ver1 00:27:21.550 01:01:33 -- scripts/common.sh@336 -- $ IFS=.-: 00:27:21.550 01:01:33 -- scripts/common.sh@336 -- $ read -ra ver2 00:27:21.550 01:01:33 -- scripts/common.sh@337 -- $ local 'op=<' 00:27:21.550 01:01:33 -- scripts/common.sh@339 -- $ ver1_l=2 00:27:21.550 01:01:33 -- scripts/common.sh@340 -- $ ver2_l=1 00:27:21.550 01:01:33 -- scripts/common.sh@342 -- $ local lt=0 gt=0 eq=0 v 00:27:21.550 01:01:33 -- scripts/common.sh@343 -- $ case "$op" in 00:27:21.550 01:01:33 -- scripts/common.sh@344 -- $ : 1 00:27:21.550 01:01:33 -- scripts/common.sh@363 -- $ (( v = 0 )) 00:27:21.550 01:01:33 -- scripts/common.sh@363 -- $ (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:27:21.550 01:01:33 -- scripts/common.sh@364 -- $ decimal 1 00:27:21.550 01:01:33 -- scripts/common.sh@352 -- $ local d=1 00:27:21.550 01:01:33 -- scripts/common.sh@353 -- $ [[ 1 =~ ^[0-9]+$ ]] 00:27:21.550 01:01:33 -- scripts/common.sh@354 -- $ echo 1 00:27:21.550 01:01:33 -- scripts/common.sh@364 -- $ ver1[v]=1 00:27:21.550 01:01:33 -- scripts/common.sh@365 -- $ decimal 2 00:27:21.550 01:01:33 -- scripts/common.sh@352 -- $ local d=2 00:27:21.550 01:01:33 -- scripts/common.sh@353 -- $ [[ 2 =~ ^[0-9]+$ ]] 00:27:21.550 01:01:33 -- scripts/common.sh@354 -- $ echo 2 00:27:21.550 01:01:33 -- scripts/common.sh@365 -- $ ver2[v]=2 00:27:21.550 01:01:33 -- scripts/common.sh@366 -- $ (( ver1[v] > ver2[v] )) 00:27:21.550 01:01:33 -- scripts/common.sh@367 -- $ (( ver1[v] < ver2[v] )) 00:27:21.550 01:01:33 -- scripts/common.sh@367 -- $ return 0 00:27:21.550 01:01:33 -- common/autotest_common.sh@1691 -- $ lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:27:21.550 01:01:33 -- common/autotest_common.sh@1703 -- $ export 'LCOV_OPTS= 00:27:21.550 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:21.550 --rc genhtml_branch_coverage=1 00:27:21.550 --rc genhtml_function_coverage=1 00:27:21.550 --rc genhtml_legend=1 00:27:21.550 --rc geninfo_all_blocks=1 00:27:21.550 --rc geninfo_unexecuted_blocks=1 00:27:21.550 00:27:21.550 ' 00:27:21.550 01:01:33 -- common/autotest_common.sh@1703 -- $ LCOV_OPTS=' 00:27:21.550 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:21.550 --rc genhtml_branch_coverage=1 00:27:21.550 --rc genhtml_function_coverage=1 00:27:21.550 --rc genhtml_legend=1 00:27:21.550 --rc geninfo_all_blocks=1 00:27:21.550 --rc geninfo_unexecuted_blocks=1 00:27:21.550 00:27:21.550 ' 00:27:21.550 01:01:33 -- common/autotest_common.sh@1704 -- $ export 'LCOV=lcov 00:27:21.550 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:21.550 --rc genhtml_branch_coverage=1 00:27:21.550 --rc genhtml_function_coverage=1 00:27:21.550 --rc genhtml_legend=1 00:27:21.550 --rc geninfo_all_blocks=1 00:27:21.550 --rc geninfo_unexecuted_blocks=1 00:27:21.550 00:27:21.550 ' 00:27:21.550 01:01:33 -- common/autotest_common.sh@1704 -- $ LCOV='lcov 00:27:21.550 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:21.550 --rc genhtml_branch_coverage=1 00:27:21.550 --rc genhtml_function_coverage=1 00:27:21.550 --rc genhtml_legend=1 00:27:21.550 --rc geninfo_all_blocks=1 00:27:21.550 --rc geninfo_unexecuted_blocks=1 00:27:21.550 00:27:21.551 ' 00:27:21.551 01:01:33 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:27:21.551 01:01:33 -- scripts/common.sh@433 -- $ 
[[ -e /bin/wpdk_common.sh ]] 00:27:21.551 01:01:33 -- scripts/common.sh@441 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:21.551 01:01:33 -- scripts/common.sh@442 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:21.551 01:01:33 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:21.551 01:01:33 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:21.551 01:01:33 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:21.551 01:01:33 -- paths/export.sh@5 -- $ export PATH 00:27:21.551 01:01:33 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:21.551 01:01:33 -- common/autobuild_common.sh@439 -- $ out=/home/vagrant/spdk_repo/spdk/../output 00:27:21.551 01:01:33 -- common/autobuild_common.sh@440 -- $ date +%s 00:27:21.551 01:01:33 -- common/autobuild_common.sh@440 -- $ mktemp -dt spdk_1733187693.XXXXXX 00:27:21.551 01:01:33 -- common/autobuild_common.sh@440 -- $ SPDK_WORKSPACE=/tmp/spdk_1733187693.5gSgzE 00:27:21.551 01:01:33 -- common/autobuild_common.sh@442 -- $ [[ -n '' ]] 00:27:21.551 01:01:33 -- common/autobuild_common.sh@446 -- $ '[' -n v23.11 ']' 00:27:21.551 01:01:33 -- common/autobuild_common.sh@447 -- $ dirname /home/vagrant/spdk_repo/dpdk/build 00:27:21.551 01:01:33 -- common/autobuild_common.sh@447 -- $ scanbuild_exclude=' --exclude /home/vagrant/spdk_repo/dpdk' 00:27:21.551 01:01:33 -- common/autobuild_common.sh@453 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp' 00:27:21.551 01:01:33 -- common/autobuild_common.sh@455 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/dpdk --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs' 00:27:21.551 01:01:33 -- common/autobuild_common.sh@456 -- $ get_config_params 00:27:21.551 01:01:33 -- common/autotest_common.sh@397 -- $ xtrace_disable 00:27:21.551 01:01:33 -- common/autotest_common.sh@10 -- $ set +x 00:27:21.551 01:01:33 -- common/autobuild_common.sh@456 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-usdt --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan 
--enable-coverage --with-ublk --with-dpdk=/home/vagrant/spdk_repo/dpdk/build --with-avahi --with-golang' 00:27:21.551 01:01:33 -- spdk/autopackage.sh@10 -- $ MAKEFLAGS=-j10 00:27:21.551 01:01:33 -- spdk/autopackage.sh@11 -- $ cd /home/vagrant/spdk_repo/spdk 00:27:21.551 01:01:33 -- spdk/autopackage.sh@13 -- $ [[ 0 -eq 1 ]] 00:27:21.551 01:01:33 -- spdk/autopackage.sh@18 -- $ [[ 1 -eq 0 ]] 00:27:21.551 01:01:33 -- spdk/autopackage.sh@18 -- $ [[ 0 -eq 0 ]] 00:27:21.551 01:01:33 -- spdk/autopackage.sh@19 -- $ timing_finish 00:27:21.551 01:01:33 -- common/autotest_common.sh@734 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl 00:27:21.551 01:01:33 -- common/autotest_common.sh@735 -- $ '[' -x /usr/local/FlameGraph/flamegraph.pl ']' 00:27:21.551 01:01:33 -- common/autotest_common.sh@737 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:27:21.551 01:01:33 -- spdk/autopackage.sh@20 -- $ exit 0 00:27:21.551 + [[ -n 5961 ]] 00:27:21.551 + sudo kill 5961 00:27:21.560 [Pipeline] } 00:27:21.574 [Pipeline] // timeout 00:27:21.578 [Pipeline] } 00:27:21.591 [Pipeline] // stage 00:27:21.596 [Pipeline] } 00:27:21.609 [Pipeline] // catchError 00:27:21.617 [Pipeline] stage 00:27:21.619 [Pipeline] { (Stop VM) 00:27:21.630 [Pipeline] sh 00:27:21.934 + vagrant halt 00:27:25.248 ==> default: Halting domain... 00:27:31.824 [Pipeline] sh 00:27:32.100 + vagrant destroy -f 00:27:34.632 ==> default: Removing domain... 00:27:34.644 [Pipeline] sh 00:27:34.924 + mv output /var/jenkins/workspace/nvmf-tcp-vg-autotest/output 00:27:34.934 [Pipeline] } 00:27:34.949 [Pipeline] // stage 00:27:34.955 [Pipeline] } 00:27:34.969 [Pipeline] // dir 00:27:34.975 [Pipeline] } 00:27:34.990 [Pipeline] // wrap 00:27:34.997 [Pipeline] } 00:27:35.010 [Pipeline] // catchError 00:27:35.020 [Pipeline] stage 00:27:35.022 [Pipeline] { (Epilogue) 00:27:35.036 [Pipeline] sh 00:27:35.319 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh 00:27:40.604 [Pipeline] catchError 00:27:40.606 [Pipeline] { 00:27:40.620 [Pipeline] sh 00:27:40.902 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh 00:27:41.161 Artifacts sizes are good 00:27:41.170 [Pipeline] } 00:27:41.185 [Pipeline] // catchError 00:27:41.197 [Pipeline] archiveArtifacts 00:27:41.204 Archiving artifacts 00:27:41.329 [Pipeline] cleanWs 00:27:41.386 [WS-CLEANUP] Deleting project workspace... 00:27:41.386 [WS-CLEANUP] Deferred wipeout is used... 00:27:41.393 [WS-CLEANUP] done 00:27:41.394 [Pipeline] } 00:27:41.411 [Pipeline] // stage 00:27:41.416 [Pipeline] } 00:27:41.430 [Pipeline] // node 00:27:41.436 [Pipeline] End of Pipeline 00:27:41.476 Finished: SUCCESS
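One detail worth pulling out of the trace around 01:01:33: before the coverage pass, scripts/common.sh compares the installed lcov version against 2 (lt 1.15 2 via cmp_versions) and only adds the --rc branch/function switches for the older 1.x series. Below is a condensed, self-contained sketch of that dotted-version test, assuming purely numeric version fields; the function name version_lt and the variable LCOV_RC are illustrative, not the script's own identifiers.

  # Minimal stand-in for the cmp_versions walk shown in the trace (numeric fields only;
  # the real script normalizes each field via its decimal helper before comparing).
  version_lt() {                      # returns 0 when $1 sorts before $2
      local IFS=.-:
      local -a a=($1) b=($2)          # split both versions on '.', '-' and ':'
      local v x y
      for ((v = 0; v < (${#a[@]} > ${#b[@]} ? ${#a[@]} : ${#b[@]}); v++)); do
          x=${a[v]:-0}; y=${b[v]:-0}  # missing fields count as 0
          (( x > y )) && return 1
          (( x < y )) && return 0
      done
      return 1                        # equal versions are not "less than"
  }

  # mirror of the check in the log: legacy --rc options only for lcov older than 2
  if version_lt "$(lcov --version | awk '{print $NF}')" 2; then
      LCOV_RC='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
  fi

With lcov 1.15 installed, as in this run, the comparison succeeds and the branch/function coverage switches are kept for every subsequent lcov call.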